I'm asking more about what this means for my code. I understand the concepts mathematically; I just have a hard time wrapping my head around what they mean conceptually. For example, if one were to do an O(1) operation on a data structure, I understand that the number of operations it has to perform won't grow because there are more items. And an O(n) operation would mean that you perform a set of operations on each element. Could somebody fill in the blanks here?
Like, what exactly would an O(n²) operation do? And what the heck does it mean if an operation is O(n log(n))? And does somebody have to smoke crack to write an O(x!)?
Current answer
One way of thinking about it is this:
O(N²) means that for every element, you are doing something with every other element, such as comparing them. Bubble sort is an example of this.
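For illustration, here is a minimal bubble sort sketch; the two nested loops are where the N² comes from:

```python
def bubble_sort(items):
    """O(N^2): the nested loops compare (almost) every pair of elements."""
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                # Swap out-of-order neighbors.
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```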
O(N log N) means that for every element, you only need to look at log N of the elements. This is usually because you know something about the elements that lets you make an efficient choice. The most efficient sorts are examples of this, such as merge sort.
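A sketch of merge sort: the input is halved log N times, and each level of halving does a linear amount of merge work:

```python
def merge_sort(items):
    """O(N log N): log N levels of splitting, linear merge work per level."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```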
O(N!) means doing something for all possible permutations of the N elements. The traveling salesman problem is an example: there are N! ways to visit the nodes, and the brute-force solution looks at the total cost of every possible permutation to find the optimal one.
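And a brute-force traveling-salesman sketch showing the factorial blowup (the distance matrix below is made up purely for illustration):

```python
from itertools import permutations

# Hypothetical symmetric distances between 4 cities (made-up numbers).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]

def tsp_brute_force(dist):
    """Factorial time: check the total cost of every permutation of cities."""
    cities = range(1, len(dist))  # fix city 0 as the start of the tour
    best_cost, best_tour = float("inf"), None
    for perm in permutations(cities):
        tour = (0, *perm, 0)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

print(tsp_brute_force(dist))  # (23, (0, 1, 3, 2, 0))
```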
Other answers
Don Neufeld's answer is very good, but I would probably explain it in two parts: first, there is a rough hierarchy of O() classes that most algorithms fall into. Then you can look at each of those classes and work out what a typical algorithm of that time complexity actually does.
For practical purposes, the only O() classes that seem to matter are:
O(1) "constant time" - the time required is independent of the size of the input. As a rough category, I would include algorithms such as hash lookups and Union-Find here, even though neither of those are actually O(1). O(log(n)) "logarithmic" - it gets slower as you get larger inputs, but once your input gets fairly large, it won't change enough to worry about. If your runtime is ok with reasonably-sized data, you can swamp it with as much additional data as you want and it'll still be ok. O(n) "linear" - the more input, the longer it takes, in an even tradeoff. Three times the input size will take roughly three times as long. O(n log(n)) "better than quadratic" - increasing the input size hurts, but it's still manageable. The algorithm is probably decent, it's just that the underlying problem is more difficult (decisions are less localized with respect to the input data) than those problems that can be solved in linear time. If your input sizes are getting up there, don't assume that you could necessarily handle twice the size without changing your architecture around (eg by moving things to overnight batch computations, or not doing things per-frame). It's ok if the input size increases a little bit, though; just watch out for multiples. O(n^2) "quadratic" - it's really only going to work up to a certain size of your input, so pay attention to how big it could get. Also, your algorithm may suck -- think hard to see if there's an O(n log(n)) algorithm that would give you what you need. Once you're here, feel very grateful for the amazing hardware we've been gifted with. Not long ago, what you are trying to do would have been impossible for all practical purposes. O(n^3) "cubic" - not qualitatively all that different from O(n^2). The same comments apply, only more so. There's a decent chance that a more clever algorithm could shave this time down to something smaller, eg O(n^2 log(n)) or O(n^2.8...), but then again, there's a good chance that it won't be worth the trouble. (You're already limited in your practical input size, so the constant factors that may be required for the more clever algorithms will probably swamp their advantages for practical cases. Also, thinking is slow; letting the computer chew on it may save you time overall.) O(2^n) "exponential" - the problem is either fundamentally computationally hard or you're being an idiot. These problems have a recognizable flavor to them. Your input sizes are capped at a fairly specific hard limit. You'll know quickly whether you fit into that limit.
That's it. There are lots of other possibilities in between these (or greater than O(2^n)), but they don't occur often in practice, and they're not qualitatively much different from one of these. Cubic algorithms are already a bit of a stretch; I only included them because I've run into them often enough to be worth mentioning (e.g. matrix multiplication).
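If you want a rough empirical sense of which of these classes a piece of your own code falls into, one trick is to time it at doubling input sizes and look at the ratio between consecutive timings. A sketch (`my_function` is just a placeholder for whatever you want to measure):

```python
import random
import time

def my_function(data):
    # Placeholder workload: sorting, an O(n log n) operation.
    sorted(data)

# Rough guide to the ratios at doubling sizes:
# ~1x -> O(1) or O(log n); ~2x -> O(n) or O(n log n)
# (hard to tell apart at modest sizes); ~4x -> O(n^2); ~8x -> O(n^3).
n = 100_000
prev = None
while n <= 1_600_000:
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    my_function(data)
    elapsed = time.perf_counter() - start
    ratio = f"{elapsed / prev:.1f}x" if prev else "-"
    print(f"n={n:>9,}  {elapsed:.4f}s  ratio={ratio}")
    prev = elapsed
    n *= 2
```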
What's actually happening with these kinds of algorithms? I think you made a good start, although there are many examples that wouldn't fit these characterizations. But for the classes above, I'd say it usually goes something like this:
- O(1) - you're only looking at most at a fixed-size chunk of your input data, and possibly none of it. Example: the maximum of a sorted list. Or your input size is bounded. Example: addition of two numbers. (Note that addition of N numbers is linear time.)
- O(log n) - each element of your input tells you enough to ignore a large fraction of the rest of the input. Example: when you look at an array element in binary search, its value tells you that you can ignore "half" of your array without looking at any of it. Or similarly, the element you look at gives you enough of a summary of a fraction of the remaining input that you won't need to look at it. There's nothing special about halves, though -- if you can only ignore 10% of your input at each step, it's still logarithmic.
- O(n) - you do some fixed amount of work per input element. (But see below.)
- O(n log(n)) - there are a few variants. You can divide the input into two piles (in no more than linear time), solve the problem independently on each pile, and then combine the two piles to form the final solution. The independence of the two piles is key. Example: classic recursive mergesort. Or: each linear-time pass over the data gets you halfway to your solution. Example: quicksort, if you think in terms of the maximum distance of each element to its final sorted position at each partitioning step. (And yes, I know quicksort is actually O(n^2) because of degenerate pivot choices, but practically speaking it falls into my O(n log(n)) category.)
- O(n^2) - you have to look at every pair of input elements. Or you don't, but you think you do, and you're using the wrong algorithm.
- O(n^3) - um... I don't have a snappy characterization of these. It's probably one of: you're multiplying matrices; you're looking at every pair of inputs, but the operation you do requires looking at all of the inputs again; or the entire graph structure of your input is relevant.
- O(2^n) - you need to consider every possible subset of your inputs. (A concrete sketch of this one follows after the caveats below.)
None of this is rigorous. Especially not the linear-time algorithms (O(n)): I could come up with a number of examples where you have to look at all of the input, then half of it, then half of that, and so on. Or the other way around: you fold together pairs of inputs, then recurse on the output. These don't fit the description above, since you're not looking at each input just once, but they still come out as linear time. Still, 99.2% of the time, linear time means looking at each input once.
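To make the O(2^n) item above concrete, here is a sketch that considers every possible subset of the input, applied to a tiny subset-sum instance (the numbers are arbitrary examples):

```python
from itertools import combinations

def subset_sum(numbers, target):
    """O(2^n): try every possible subset of the input."""
    for r in range(len(numbers) + 1):          # subsets of every size...
        for subset in combinations(numbers, r):  # ...2^n subsets in total
            if sum(subset) == target:
                return subset
    return None

print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # (8, 7)
```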
log(n) means logarithmic growth. An example would be divide-and-conquer algorithms. If you have 1000 sorted numbers in an array (e.g. 3, 10, 34, 244, 1203 ...) and want to search for a number in the list (find its position), you could start by checking the value at index 500. If it is lower than what you seek, jump to 750. If it is higher than what you seek, jump to 250. Then you repeat the process until you find your value (and its key). Every time we jump into half the search space, we can cull away testing many other values, since we know the number 3004 can't come before the number 5000 (remember, it is a sorted list).
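The same idea in code, as a sketch of iterative binary search over a sorted list:

```python
def binary_search(sorted_items, target):
    """O(log n): each comparison lets us discard half of the remaining range."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid        # found: return the index
        elif sorted_items[mid] < target:
            lo = mid + 1      # target must be in the upper half
        else:
            hi = mid - 1      # target must be in the lower half
    return -1                 # not present

print(binary_search([3, 10, 34, 244, 1203], 244))  # 3
```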
N log(N) means N * log(N).
No, an O(n) algorithm does not mean it will perform an operation on each element. Big-O notation gives you a way to talk about the "speed" of your algorithm independent of your actual machine.
O(n) means that the time your algorithm takes grows linearly as your input grows. O(n²) means that the time your algorithm takes grows as the square of your input. And so on.
The "intuition" behind Big-O
Imagine a "competition" between two functions of x, f(x) and g(x), as x goes to infinity.
Now, if from some point on (some value of x) one function always has a higher value than the other, then we call that function "faster" than the other.
For example, if for every x > 100 you see that f(x) > g(x), then f(x) is "faster" than g(x).
In this case we would say that g(x) = O(f(x)). f(x) poses a sort of "speed limit" for g(x), since eventually it overtakes it and leaves it behind for good.
This isn't exactly the definition of big-O notation, which also states that f(x) only has to be larger than C*g(x) for some constant C (which is just another way of saying that you can't help g(x) win the competition by multiplying it by a constant factor -- f(x) will always win in the end). The formal definition also uses absolute values. But I hope I managed to make it intuitive.
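For reference, the formal definition the paragraph above is gesturing at can be written like this (one common formulation; x₀ is the "some point on" from the competition analogy):

```latex
g(x) = O\bigl(f(x)\bigr)
\quad\Longleftrightarrow\quad
\exists\, C > 0,\ \exists\, x_0 \text{ such that }
\lvert g(x) \rvert \le C\,\lvert f(x) \rvert
\ \text{ for all } x > x_0
```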
What big-O notation means for your code is how it will scale when you double the number of "things" it operates on. Here's a concrete example:
Big-O       | computations for 10 things | computations for 100 things
----------------------------------------------------------------------
O(1)        |   1                        |     1
O(log(n))   |   3                        |     7
O(n)        |  10                        |   100
O(n log(n)) |  30                        |   700
O(n^2)      | 100                        | 10000
Quicksort is O(n log(n)), while bubble sort is O(n²). When sorting 10 things, quicksort is 3 times faster than bubble sort. But when sorting 100 things, it's 14 times faster! Clearly, picking the fastest algorithm matters. When you get to databases with millions of rows, it can mean the difference between your query executing in 0.2 seconds and taking hours.
Another thing to consider is that a bad algorithm is something Moore's law cannot help with. For example, if you have some scientific calculation that is O(n^3) and it can compute 100 things a day, doubling the processor speed only gets you 125 things a day. However, knock that calculation down to O(n²) and you're doing 1000 things a day.
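A quick back-of-the-envelope check of those numbers (a sketch; the 100-things-per-day budget is taken from the paragraph above):

```python
# An O(n^3) algorithm that handles n = 100 per day spends
# roughly 100**3 = 1,000,000 "units" of work per day.
budget = 100 ** 3

# Doubling processor speed doubles the daily work budget,
# so the new n solves n**3 == 2 * budget.
n_doubled = (2 * budget) ** (1 / 3)
print(round(n_doubled))   # ~126 (the answer above rounds down to 125)

# Switching to an O(n^2) algorithm with the original budget:
# n**2 == budget, so n = 1000.
n_quadratic = budget ** 0.5
print(round(n_quadratic))  # 1000
```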
Clarification: actually, Big-O says nothing about comparing the performance of different algorithms at the same specific input size, but rather about comparing the performance of the same algorithm at different input sizes:
Big-O       | computations   | computations   | computations
            | for 10 things  | for 100 things | for 1000 things
----------------------------------------------------------------
O(1)        |   1            |   1            |     1
O(log(n))   |   1            |   3            |     7
O(n)        |   1            |  10            |   100
O(n log(n)) |   1            |  33            |   664
O(n^2)      |   1            | 100            | 10000
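A sketch of where raw counts like these come from. It assumes log base 2 (the usual convention for such tables); note the tables above also round loosely, and the clarified table rescales each row so the 10-things column reads 1:

```python
import math

# Rough cost models for each Big-O class (constant factors ignored).
growth = {
    "O(1)":        lambda n: 1.0,
    "O(log(n))":   lambda n: math.log2(n),
    "O(n)":        lambda n: float(n),
    "O(n log(n))": lambda n: n * math.log2(n),
    "O(n^2)":      lambda n: float(n ** 2),
}

sizes = [10, 100, 1000]
for name, cost in growth.items():
    counts = [cost(n) for n in sizes]
    print(f"{name:<12}" + "".join(f"{c:>10.0f}" for c in counts))
```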