I'm asking more about what this means for my code. I understand the concepts mathematically; I just have a hard time grasping what they mean conceptually. For example, if one were to perform an O(1) operation on a data structure, I understand that the number of operations it has to perform won't grow because there are more items. And an O(n) operation would mean that you would perform a set of operations on each element. Could somebody fill in the blanks here?

Like, what would an O(n²) operation do? And what does it mean if an operation is O(n log(n))? And does somebody have to smoke crack to write an O(x!)?
Current answer
This might be too mathematical, but here's my try. (I'm a mathematician.)
If something is O(f(n)), then its running time on n elements will be equal to A f(n) + B (measured in, say, clock cycles or CPU operations). It's key to understand that these constants A and B, which come from the specific implementation, also exist. B represents essentially the "constant overhead" of your operation, for example some preprocessing you do that doesn't depend on the size of the collection. A represents the speed of your actual item-processing algorithm.
The key, though, is that you use big O notation to figure out how well something will scale. So those constants won't really matter: if you're trying to figure out how to scale from 10 to 10,000 items, who cares about the constant overhead B? Similarly, other concerns (see below) will certainly outweigh the weight of the multiplicative constant A.
So the real deal is f(n). If f grows not at all with n, e.g. f(n) = 1, then you'll scale fantastically: your running time will always just be A + B. If f grows linearly with n, i.e. f(n) = n, your running time will scale pretty much as well as can be expected: if your users are waiting 10 ns for 10 elements, they'll wait 10,000 ns for 10,000 elements (ignoring the additive constant). But if it grows faster, like n², then you're in trouble; things will start slowing down way too much when you get larger collections. f(n) = n log(n) is a good compromise, usually: your operation can't be so simple as to give linear scaling, but you've managed to cut things down such that it'll scale much better than f(n) = n².
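To make the constants-versus-growth point concrete, here is a minimal sketch, in Python (my choice of language, not the answerer's), that tabulates the raw step counts f(n) for each growth class as n goes from 10 to 10,000:

```python
import math

# Tabulate f(n) for each growth class at n = 10 and n = 10,000,
# ignoring the implementation constants A and B entirely.
growth_classes = [
    ("O(1)",       lambda n: 1),
    ("O(n)",       lambda n: n),
    ("O(n log n)", lambda n: n * math.log2(n)),
    ("O(n^2)",     lambda n: n ** 2),
]

for name, f in growth_classes:
    print(f"{name:>10}: {f(10):>12,.0f} steps at n=10,"
          f" {f(10_000):>14,.0f} steps at n=10,000")
```

Whatever A and B happen to be in a real implementation, the n² row swamps them long before n reaches 10,000.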
Actually, here are a few good examples of this:
- O(1): retrieving an element from an array. We know exactly where it is in memory, so we just go get it. It doesn't matter if the collection has 10 items or 10,000; it's still at index (say) 3, so we just jump to location 3 in memory.
- O(n): retrieving an element from a linked list. Here, A = 0.5, because on average you'll have to go through half of the linked list before you find the element you're looking for.
- O(n²): various "dumb" sorting algorithms. Because generally their strategy involves, for each element (n), looking at all the other elements (so times another n, giving n²), then positioning yourself in the right place.
- O(n log(n)): various "smart" sorting algorithms. It turns out that you only need to look at, say, 10 elements in a 10^10-element collection to intelligently sort yourself relative to everyone else in the collection. Because everyone else is also going to look at 10 elements, and the emergent behavior is orchestrated just right so that this is enough to produce a sorted list.
- O(n!): an algorithm that "tries everything," since there are (proportional to) n! possible combinations of n elements that might solve a given problem. So it just loops through all such combinations, tries them, then stops whenever it succeeds. (A sketch of the O(n²) and O(n!) entries follows this list.)
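Here is a minimal sketch of the "dumb sort" and "try everything" entries, assuming nothing beyond the standard library; the function names and the choice of selection sort are mine, not the answer's:

```python
from itertools import permutations

def selection_sort(items):
    """O(n^2): for each position, scan all remaining elements."""
    items = list(items)
    for i in range(len(items)):
        # Look at every other element to find the minimum (the inner n).
        smallest = min(range(i, len(items)), key=items.__getitem__)
        items[i], items[smallest] = items[smallest], items[i]
    return items

def brute_force_sort(items):
    """O(n!): try every ordering and stop at the first sorted one."""
    for candidate in permutations(items):
        if all(a <= b for a, b in zip(candidate, candidate[1:])):
            return list(candidate)

print(selection_sort([3, 1, 2]))    # [1, 2, 3]
print(brute_force_sort([3, 1, 2]))  # [1, 2, 3]
```

Both produce the same answer; the difference is that the second one does n! work to get there, which is hopeless for anything beyond a handful of elements.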
Other answers
You might find it useful to visualize it:

Also, on a LogY/LogX scale the functions n^(1/2), n, and n² all look like straight lines, while on a LogY/X scale 2^n, e^n, and 10^n are straight lines and n! is linearithmic (it looks like n log n).
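The original post linked to ready-made plots; as a substitute, here is a minimal matplotlib sketch (assuming numpy and matplotlib are installed) that reproduces the LogY/LogX view in which the polynomial functions come out as straight lines:

```python
import numpy as np
import matplotlib.pyplot as plt

n = np.arange(2, 1000)
# On a log-log plot, n^k is a straight line with slope k.
for f, label in [(np.sqrt(n), "n^(1/2)"), (n, "n"), (n**2, "n^2")]:
    plt.loglog(n, f, label=label)
plt.xlabel("n")
plt.ylabel("f(n)")
plt.legend()
plt.show()
```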
log(n) means logarithmic growth. An example would be divide-and-conquer algorithms. If you have 1000 sorted numbers in an array (e.g. 3, 10, 34, 244, 1203, ...) and want to search for a number in the list (find its position), you could start by checking the value of the number at index 500. If it is lower than what you seek, jump to 750. If it is higher than what you seek, jump to 250. Then you repeat the process until you find your value (and key). Every time we jump half the search space, we can cull away testing many other values, since we know the number 3004 can't be above the number 5000 (remember, it is a sorted list).
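A minimal sketch of the halving search described above, using the answer's example array; the function name and the return convention are illustrative:

```python
def binary_search(sorted_items, target):
    """O(log n): halve the search space on every probe."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid                 # found: return its position
        elif sorted_items[mid] < target:
            lo = mid + 1               # jump to the upper half
        else:
            hi = mid - 1               # jump to the lower half
    return -1                          # not present

print(binary_search([3, 10, 34, 244, 1203], 244))  # 3
```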
N log(N) means N * log(N).
Just to respond to a couple of comments on my post above:
Domenic - I'm on this site, and I care. Not for pedantry's sake, but because we - as programmers - typically care about precision. Using O( ) notation incorrectly in the style that some have done here renders it kind of meaningless; we may just as well say something takes n² units of time as O(n²) under the conventions used here. Using the O( ) notation adds nothing. It's not just a small discrepancy between common usage and mathematical precision that I'm talking about; it's the difference between it being meaningful and it not.
I know many, many excellent programmers who use these terms precisely. Saying "oh, we're programmers, so we don't care" cheapens the whole enterprise.
onebyone - well, not really, although I take your point. It's not O(1) for arbitrarily large n, which is pretty much the definition of O( ). It just goes to show that O( ) has limited applicability for bounded n, where we would rather talk about the number of steps taken than about a bound on that number.
One thing that hasn't been mentioned yet, for some reason:
When you see algorithms like O(2^n) or O(n^3), it usually means you're going to have to accept an imperfect answer to your problem in order to get acceptable performance.
Correct solutions that blow up like this are common when dealing with optimization problems. A nearly-correct answer delivered in a reasonable timeframe is better than a correct answer delivered long after the machine has decayed to dust.
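As a hedged illustration of that trade-off, here is a toy packing problem (my choice of problem, not the answer's): an exact O(2^n) enumeration of every subset next to an O(n log n) greedy heuristic that is fast but not always optimal:

```python
from itertools import combinations

def best_subset_exact(weights, capacity):
    """O(2^n): examine every subset; exact, but blows up quickly."""
    best = ()
    for r in range(len(weights) + 1):
        for subset in combinations(weights, r):
            if sum(subset) <= capacity and sum(subset) > sum(best):
                best = subset
    return best

def best_subset_greedy(weights, capacity):
    """O(n log n): take the largest items that still fit; usually
    good, not guaranteed optimal."""
    chosen, total = [], 0
    for w in sorted(weights, reverse=True):
        if total + w <= capacity:
            chosen.append(w)
            total += w
    return tuple(chosen)

weights, capacity = [8, 6, 5, 4], 11
print(best_subset_exact(weights, capacity))   # (6, 5): fills all 11
print(best_subset_greedy(weights, capacity))  # (8,): fast, but only 8
```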
Take chess, for example: I don't know what the complexity of the correct solution is, but it's probably O(n^50) or even worse. In theory, no computer could actually compute the correct answer: even if you used every particle in the universe as a computing element, each performing an operation in the shortest possible time, over the entire lifetime of the universe, you would still have a lot of zeroes left over. (Whether a quantum computer can solve it is another matter.)
OK, there are some really great answers here, but almost all of them seem to make the same mistake, and it's one that pervades common usage.
Informally, we write f(n) = O(g(n)) if, up to a scaling factor, g(n) is larger than f(n) for all n greater than some n0. That is, f(n) grows no faster than g(n), or is bounded from above by g(n). This tells us nothing about how fast f(n) actually grows, except that it's guaranteed not to be worse than g(n).
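For reference, that informal statement corresponds to the standard formal definition (textbook phrasing, not the answerer's own):

```latex
f(n) = O(g(n)) \iff \exists\, C > 0,\ \exists\, n_0
\ \text{such that}\ f(n) \le C \cdot g(n) \ \text{for all}\ n > n_0
```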
A concrete example: n = O(2^n). We all know that n grows much more slowly than 2^n, so we are entitled to say it is bounded above by the exponential function. There's a lot of room between n and 2^n, so it's not a very tight bound, but it's still a legitimate bound.
Why do we (computer scientists) use bounds rather than exact figures? Because (a) bounds are often easier to prove and (b) they give us a shorthand for expressing properties of algorithms. If I say my new algorithm is O(n log n), it means that in the worst case its running time will be bounded from above by n log n on n inputs, for large enough n (although see my comments below on when I might not mean worst case).
If, instead, we want to say that a function grows exactly as fast as some other function, we use theta to make that point (I'll write T(f(n)) to mean Θ(f(n)) in markdown). T(g(n)) is shorthand for bounded from above and below by g(n), again up to a scaling factor and asymptotically.
That is, f(n) = T(g(n)) <=> f(n) = O(g(n)) and g(n) = O(f(n)). In our example, we can see that n != T(2^n) because 2^n != O(n).
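Spelled out with explicit constants (again the standard definition, not taken from the answer):

```latex
f(n) = \Theta(g(n)) \iff \exists\, C_1, C_2 > 0,\ \exists\, n_0
\ \text{such that}\ C_1 \, g(n) \le f(n) \le C_2 \, g(n)
\ \text{for all}\ n > n_0
```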
Why worry about this? Because in your question you ask, "does somebody have to smoke crack to write an O(x!)?" The answer is no, because basically everything you write will be bounded from above by the factorial function. The running time of quicksort is O(n!): it's just not a tight bound.
There's another dimension of subtlety here. Typically, we are talking about the worst-case input when we use O(g(n)) notation, so that we are making a compound statement: in the worst case, the running time will not be worse than an algorithm that takes g(n) steps, again modulo scaling and for large enough n. But sometimes we want to talk about the running time of the average or even the best case.
Vanilla quicksort is, as ever, a good example. It's T(n²) in the worst case (it actually takes at least n² steps, but not significantly more), but T(n log n) in the average case, which is to say the expected number of steps is proportional to n log n. In the best case it is also T(n log n), but you could improve that by, for example, checking whether the array is already sorted, in which case the best-case running time would be T(n).
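A minimal quicksort sketch with the already-sorted check suggested above; the middle-element pivot and the list-comprehension partitioning are illustrative choices, not the canonical implementation:

```python
def quicksort(items):
    """Average case T(n log n); worst case T(n^2) on adversarial pivots."""
    # Best-case improvement from the answer: an O(n) sortedness check
    # makes the best-case running time T(n) instead of T(n log n).
    if all(a <= b for a, b in zip(items, items[1:])):
        return list(items)
    pivot = items[len(items) // 2]
    left   = [x for x in items if x < pivot]
    middle = [x for x in items if x == pivot]
    right  = [x for x in items if x > pivot]
    return quicksort(left) + middle + quicksort(right)

print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```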
How does this relate to your question about the practical realisations of these bounds? Well, unfortunately, O( ) notation hides constants which real-world implementations have to deal with. So although we can say that, for example, for a T(n²) operation we have to visit every possible pair of elements, we don't know how many times we have to visit them (except that it's not a function of n). So we could have to visit every pair 10 times, or 10^10 times, and the T(n²) statement makes no distinction. Lower order functions are also hidden - we could have to visit every pair of elements once, and every individual element 100 times, because n² + 100n = T(n²). The idea behind O( ) notation is that for large enough n, this doesn't matter at all because n² gets so much larger than 100n that we don't even notice the impact of 100n on the running time. However, we often deal with 'sufficiently small' n such that constant factors and so on make a real, significant difference.
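To spell out the lower-order-term claim with a standard calculation (mine, not the answerer's):

```latex
n^2 + 100n \le 2n^2 \quad \text{for all } n \ge 100,
\qquad \text{hence } n^2 + 100n = \Theta(n^2)
```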
For example, quicksort (average cost T(n log n)) and heapsort (average cost T(n log n)) are both sorting algorithms with the same average cost, yet quicksort is typically much faster than heapsort. This is because heapsort does a few more comparisons per element than quicksort.
That's not to say O( ) notation is useless, just imprecise. It's quite a blunt tool to wield for small n.
(As a final note to this treatise, remember that O( ) notation just describes the growth of any function: it doesn't necessarily have to be time; it could be memory, messages exchanged in a distributed system, or the number of CPUs required for a parallel algorithm.)