I'm asking more about what this means for my code. I understand the concepts mathematically; I just have a hard time grasping what they mean conceptually. For example, if an O(1) operation is performed on a data structure, I understand that the number of operations it has to perform won't grow just because there are more items. And an O(n) operation means you would perform a set of operations on each element. Could someone fill in the blanks here?

Like, what would an O(n²) operation do? And what on earth does it mean if an operation is O(n log(n))? And does somebody have to be on cocaine to write an O(x!)?


A lot of this is easy to demonstrate with something non-programming, like shuffling a deck of cards.

Sorting a deck of cards by going through the whole deck to find the ace of spades, then going through the whole deck to find the 2 of spades, and so on, would be worst case n² if the deck was already sorted backwards. You looked at all 52 cards 52 times.
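
Here's a minimal C# sketch of that card-by-card approach (it amounts to selection sort), treating cards as plain ints for simplicity:

// For each position, scan the rest of the deck for the smallest remaining
// card: n scans of up to n cards each, hence O(n^2).
static void SelectionSort(int[] deck)
{
    for (int i = 0; i < deck.Length; i++)          // one pass per card...
    {
        int smallest = i;
        for (int j = i + 1; j < deck.Length; j++)  // ...scanning the remaining deck
        {
            if (deck[j] < deck[smallest])
                smallest = j;
        }
        (deck[i], deck[smallest]) = (deck[smallest], deck[i]);
    }
}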

Generally speaking, the really bad algorithms aren't necessarily intentional; they're usually a misuse of something else, like calling a method that is linear inside some other method that loops linearly over the same collection.


No, an O(n) algorithm doesn't mean it will perform an operation on each element. Big-O notation gives you a way to talk about the "speed" of your algorithm independent of your actual machine.

O(n) means that the time your algorithm takes grows linearly as your input grows. O(n²) means that the time your algorithm takes grows as the square of your input. And so on.


One way of thinking about it is this:

O(N²) means that for every element, you're doing something with every other element, such as comparing them. Bubble sort is an example of this.

O(N log N) means that for every element, you only need to look at log N of the elements. This is usually because you know something about the elements that lets you make an efficient choice. The most efficient sorts are an example of this, such as merge sort.

O(N!) means doing something for all possible permutations of the N elements. The traveling salesman problem is an example of this: there are N! ways to visit the nodes, and the brute-force solution is to look at the total cost of every possible permutation to find the optimal one.
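
For the O(N!) case, here's a minimal C# sketch of that brute force over a distance matrix (BestTour is a hypothetical helper written purely for illustration); the factorial blow-up comes straight from branching on every unvisited node:

using System;
using System.Collections.Generic;

// Brute-force traveling salesman: try every permutation of the remaining
// nodes and keep the cheapest tour. N nodes => N! orderings => O(N!).
static int BestTour(int[,] dist, List<int> remaining, int current, int costSoFar)
{
    if (remaining.Count == 0)
        return costSoFar;                          // a complete permutation: one candidate tour

    int best = int.MaxValue;
    foreach (int next in remaining.ToArray())      // branch on every unvisited node
    {
        remaining.Remove(next);
        best = Math.Min(best,
            BestTour(dist, remaining, next, costSoFar + dist[current, next]));
        remaining.Add(next);                       // backtrack
    }
    return best;
}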


You might find it useful to visualize it:

Also, on a log-log (LogY/LogX) scale the functions n^(1/2), n, and n² all look like straight lines, while on a semi-log (LogY/X) scale 2^n, e^n, and 10^n are straight lines and n! is linearithmic (it looks like n log n).


log(n) means logarithmic growth. An example would be divide and conquer algorithms. If you have 1000 sorted numbers in an array ( ex. 3, 10, 34, 244, 1203 ... ) and want to search for a number in the list (find its position), you could start with checking the value of the number at index 500. If it is lower than what you seek, jump to 750. If it is higher than what you seek, jump to 250. Then you repeat the process until you find your value (and key). Every time we jump half the search space, we can cull away testing many other values since we know the number 3004 can't be above number 5000 (remember, it is a sorted list).
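
In code, that jump-to-the-middle strategy is the classic binary search; a C# sketch:

// Binary search on a sorted array: each probe halves the remaining search
// space, so finding (or ruling out) a value takes O(log n) probes.
static int BinarySearch(int[] sorted, int target)
{
    int lo = 0, hi = sorted.Length - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;
        if (sorted[mid] == target) return mid;     // found it: return the index
        if (sorted[mid] < target) lo = mid + 1;    // too low: search the upper half
        else hi = mid - 1;                         // too high: search the lower half
    }
    return -1;                                     // not in the list
}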

N log(N) means N * log(N).


To understand O(n log n), remember that log n means log base 2 of n. Then look at each part:

O(n) is, more or less, when you operate once on each item in the set.

O(log n) is when the number of operations is the exponent you'd raise 2 to in order to get the number of items. A binary search, for instance, has to cut the set in half log n times.

O(n log n) is a combination: you're doing something like a binary search for each item in the set. Efficient sorts often work by doing one loop per item and, in each loop, doing an efficient search to find the right place to put the item or group in question. Hence n * log n.
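
As a sketch of that combination in C# (CountLessThan is a made-up helper for illustration, not a library call): an O(n) loop with an O(log n) binary search inside gives n · log n work overall:

// For each query, binary-search a sorted array to count how many elements
// are smaller than it: n iterations x O(log n) per search = O(n log n).
static int[] CountLessThan(int[] sortedData, int[] queries)
{
    var counts = new int[queries.Length];
    for (int i = 0; i < queries.Length; i++)       // O(n) outer loop
    {
        int lo = 0, hi = sortedData.Length;
        while (lo < hi)                            // O(log n) binary search
        {
            int mid = lo + (hi - lo) / 2;
            if (sortedData[mid] < queries[i]) lo = mid + 1;
            else hi = mid;
        }
        counts[i] = lo;                            // lo = number of smaller elements
    }
    return counts;
}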


The biggest thing that Big-O notation means to your code is how it will scale when you double the number of "things" it operates on. Here's a concrete example:

Big-O       |  computations for 10 things |  computations for 100 things
----------------------------------------------------------------------
O(1)        |   1                         |     1
O(log(n))   |   3                         |     7
O(n)        |  10                         |   100
O(n log(n)) |  30                         |   700
O(n^2)      | 100                         | 10000

Quicksort is O(n log(n)) and bubble sort is O(n²). When sorting 10 things, quicksort is 3 times faster than bubble sort. But when sorting 100 things, it's 14 times faster! Clearly, picking the fastest algorithm matters. When you get to databases with millions of rows, it can mean the difference between your query executing in 0.2 seconds and it taking hours.

Another thing worth considering is that a bad algorithm is one thing Moore's law cannot help. For example, if you have some scientific calculation that is O(n^3) and it can compute 100 things a day, doubling the processor speed only gets you 125 things in a day. However, knock that calculation down to O(n^2) and you're doing 1000 things a day.
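
The arithmetic behind those figures, as a quick check (the daily budget is 100³ = 10⁶ operations):

n_{\text{new}} = (2 \cdot 100^3)^{1/3} = 2^{1/3} \cdot 100 \approx 125 \quad \text{(O(n^3) algorithm, doubled speed)}
n_{\text{new}} = (100^3)^{1/2} = 1000 \quad \text{(O(n^2) algorithm, same speed)}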

Clarification: Actually, Big-O says nothing about the comparative performance of different algorithms at the same specific size point; it's about the comparative performance of the same algorithm at different size points:

                 computations     computations       computations
Big-O       |   for 10 things |  for 100 things |  for 1000 things
----------------------------------------------------------------------
O(1)        |        1        |        1        |         1
O(log(n))   |        1        |        2        |         3
O(n)        |        1        |       10        |       100
O(n log(n)) |        1        |       20        |       300
O(n^2)      |        1        |      100        |     10000

This might be too mathematical, but here's my try. (I am a mathematician.)

If something is O(f(n)), then its running time on n elements will be equal to A f(n) + B (measured in, say, clock cycles or CPU operations). It's key to understand that you also have these constants A and B, which arise from the specific implementation. B essentially represents the "constant overhead" of your operation, for example some preprocessing you do that doesn't depend on the size of the collection. A represents the speed of your actual item-processing algorithm.

The key, though, is that you use big-O notation to figure out how well something will scale. So those constants won't really matter: if you're trying to figure out how to scale from 10 items to 10000 items, who cares about the constant overhead B? Similarly, other concerns (see below) will certainly outweigh the importance of the multiplicative constant A.

So the real deal is f(n). If f grows not at all with n, e.g. f(n) = 1, then you'll scale fantastically---your running time will always just be A + B. If f grows linearly with n, i.e. f(n) = n, your running time will scale pretty much as best as can be expected---if your users are waiting 10 ns for 10 elements, they'll wait 10000 ns for 10000 elements (ignoring the additive constant). But if it grows faster, like n², then you're in trouble; things will start slowing down way too much when you get larger collections. f(n) = n log(n) is a good compromise, usually: your operation can't be so simple as to give linear scaling, but you've managed to cut things down such that it'll scale much better than f(n) = n².

Practically speaking, here are some good examples:

O(1): retrieving an element from an array. We know exactly where it is in memory, so we just go get it. It doesn't matter if the collection has 10 items or 10000; it's still at index (say) 3, so we just jump to location 3 in memory.

O(n): retrieving an element from a linked list. Here, A = 0.5, because on average you'll have to go through half of the linked list before you find the element you're looking for.

O(n²): various "dumb" sorting algorithms. Because generally their strategy involves, for each element (n), looking at all the other elements (so times another n, giving n²), then positioning yourself in the right place.

O(n log(n)): various "smart" sorting algorithms. It turns out that you only need to look at, say, 10 elements in a 10^10-element collection to intelligently sort yourself relative to everyone else in the collection. Because everyone else is also going to look at 10 elements, and the emergent behavior is orchestrated just right so that this is enough to produce a sorted list.

O(n!): an algorithm that "tries everything," since there are (proportional to) n! possible combinations of n elements that might solve a given problem. So it just loops through all such combinations, tries them, then stops whenever it succeeds.


I like don neufeld's answer, but I think I can add something about O(n log n).

An algorithm which uses a simple divide-and-conquer strategy is probably going to be O(log n). The simplest example of this is finding something in a sorted list. You don't start at the beginning and scan for it. You go to the middle, you decide whether you should then go backwards or forwards, jump halfway to the last place you looked, and repeat this until you find the item you're looking for.

If you look at the quicksort or mergesort algorithms, you will see that they both take the approach of dividing the list to be sorted in half, sorting each half (using the same algorithm, recursively), and then recombining the two halves. This kind of recursive divide-and-conquer strategy is O(n log n).

If you think about it carefully, you'll see that quicksort does an O(n) partitioning algorithm on the whole n items, then an O(n) partitioning twice on n/2 items, then 4 times on n/4 items, etc... until you get to n partitions of 1 item (which is degenerate). The number of times you divide n in half to get to 1 is approximately log n, and each step is O(n), so recursive divide and conquer is O(n log n). Mergesort builds the other way, starting with n recombinations of 1 item, and finishing with 1 recombination of n items, where the recombination of two sorted lists is O(n).
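
Here's a compact C# sketch of that recursive divide and conquer (quicksort with a simple last-element pivot):

// O(n) partition pass, then recurse on the two halves: roughly log n levels
// of O(n) work each = O(n log n) on average. (A naive last-element pivot
// like this one degrades to O(n^2) on already-sorted input.)
static void QuickSort(int[] a, int lo, int hi)
{
    if (lo >= hi) return;
    int pivot = a[hi], store = lo;
    for (int i = lo; i < hi; i++)                  // O(n) partition pass
    {
        if (a[i] < pivot)
        {
            (a[i], a[store]) = (a[store], a[i]);
            store++;
        }
    }
    (a[store], a[hi]) = (a[hi], a[store]);         // pivot into its final position
    QuickSort(a, lo, store - 1);                   // sort each half recursively
    QuickSort(a, store + 1, hi);
}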

As for smoking dope to write an O(n!) algorithm: don't, unless you have no choice. The traveling salesman problem mentioned above is believed to be one such problem.


Most of Jon Bentley's books (e.g. Programming Pearls) cover this kind of material in a really practical way. This talk of his includes one such analysis of quicksort.

While not entirely relevant to the question, Knuth came up with an interesting idea: teaching Big-O notation in high school calculus classes, though I find that idea quite eccentric.


OK, there are some really great answers here, but almost all of them seem to make the same mistake, and it's one that is pervading common usage.

Informally, we write f(n) = O(g(n)) if, up to a scaling factor and for all n greater than some n0, g(n) is larger than f(n). That is, f(n) grows no faster than g(n), or is bounded from above by g(n). This tells us nothing about how fast f(n) grows, save that it's guaranteed not to be any worse than g(n).

A concrete example: n = O(2^n). We all know that n grows much less quickly than 2^n, so that entitles us to say it is bounded above by the exponential function. There is a lot of room between n and 2^n, so it's not a very tight bound, but it's still a legitimate bound.

Why do we (computer scientists) use bounds rather than exact figures? Because (a) bounds are often easier to prove and (b) they give us a shorthand for expressing properties of algorithms. If I say that my new algorithm is O(n.log n), that means that in the worst case its running time will be bounded from above by n.log n on n inputs, for large enough n (though see my comments below on when I might not mean worst case).

If instead we want to say that a function grows exactly as fast as some other function, we use theta to make that point (I'll write T(f(n)) to mean Θ(f(n)), since theta is awkward to type in markdown). T(g(n)) is shorthand for being bounded from above and below by g(n), again up to a scaling factor and asymptotically.

That is, f(n) = T(g(n)) <=> f(n) = O(g(n)) and g(n) = O(f(n)). In our example, we can see that n != T(2^n) because 2^n != O(n).
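
For reference, the standard formal definitions behind the informal statements above:

f(n) = O(g(n)) \iff \exists\, C > 0,\ n_0 :\ |f(n)| \le C\,|g(n)| \ \text{for all } n > n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and} \ g(n) = O(f(n))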

Why be concerned about this? Because in your question you write: "does somebody have to be on cocaine to write an O(x!)?" The answer is no, because basically everything you write will be bounded from above by the factorial function. The running time of quicksort is O(n!); it's just not a tight bound.

There's another dimension of subtlety here, too. Typically we are talking about the worst-case input when we use O(g(n)) notation, so that we are making a compound statement: in the worst case the running time will not be worse than g(n) steps, again modulo scaling and for large enough n. But sometimes we want to talk about the running time of the average or even the best case.

Vanilla quicksort is a good example. It's T(n²) in the worst case (it will actually take at least n² steps, but not significantly more), but T(n.log n) in the average case, which is to say the expected number of steps is proportional to n.log n. In the best case it is also T(n.log n), but you could improve that by, for example, checking whether the array is already sorted, in which case the best-case running time would be T(n).

How does this relate to your question about the practical realisations of these bounds? Well, unfortunately, O( ) notation hides constants which real-world implementations have to deal with. So although we can say that, for example, for a T(n^2) operation we have to visit every possible pair of elements, we don't know how many times we have to visit them (except that it's not a function of n). So we could have to visit every pair 10 times, or 10^10 times, and the T(n^2) statement makes no distinction. Lower order functions are also hidden - we could have to visit every pair of elements once, and every individual element 100 times, because n^2 + 100n = T(n^2). The idea behind O( ) notation is that for large enough n, this doesn't matter at all because n^2 gets so much larger than 100n that we don't even notice the impact of 100n on the running time. However, we often deal with 'sufficiently small' n such that constant factors and so on make a real, significant difference.

For example, quicksort (average cost T(n.log n)) and heapsort (average cost T(n.log n)) are both sorting algorithms with the same average cost, yet quicksort is typically much faster than heapsort. This is because heapsort does a few more comparisons per element than quicksort.

This is not to say that O( ) notation is useless, just imprecise. It's quite a blunt instrument to wield when it comes to small n.

(As a final note to this treatise, remember that O( ) notation just describes the growth of any function; it doesn't necessarily have to be time, it could be memory, messages exchanged in a distributed system, or the number of CPUs required for a parallel algorithm.)


Just to respond to a couple of the comments on my post above:

Domenic - I'm on this site, and I care. Not for pedantry's sake, but because we - as programmers - typically care about precision. Using O( ) notation incorrectly in the style that some have done here renders it kind of meaningless; we may just as well say something takes n^2 units of time as O( n^2 ) under the conventions used here. Using the O( ) adds nothing. It's not just a small discrepancy between common usage and mathematical precision that I'm talking about, it's the difference between it being meaningful and it not.

I know of many, many excellent programmers who use these terms precisely. Saying "oh, we're programmers, therefore we don't care" cheapens the whole enterprise.

onebyone - Well, not exactly, although I take your point. It's not O(1) for arbitrarily large n, which is pretty much the definition of O( ). It just goes to show that O( ) has limited applicability for bounded n, where we would rather actually talk about the number of steps taken than about a bound on that number.


Don Neufeld's answer is really good, but I'd probably explain it in two parts: first, there's a rough hierarchy of O()'s that most algorithms fall into. Then, you can look at each of those and come up with sketches of what typical algorithms of that time complexity do.

For practical purposes, the only O()'s that seem to matter are:

O(1) "constant time" - the time required is independent of the size of the input. As a rough category, I would include algorithms such as hash lookups and Union-Find here, even though neither of those are actually O(1). O(log(n)) "logarithmic" - it gets slower as you get larger inputs, but once your input gets fairly large, it won't change enough to worry about. If your runtime is ok with reasonably-sized data, you can swamp it with as much additional data as you want and it'll still be ok. O(n) "linear" - the more input, the longer it takes, in an even tradeoff. Three times the input size will take roughly three times as long. O(n log(n)) "better than quadratic" - increasing the input size hurts, but it's still manageable. The algorithm is probably decent, it's just that the underlying problem is more difficult (decisions are less localized with respect to the input data) than those problems that can be solved in linear time. If your input sizes are getting up there, don't assume that you could necessarily handle twice the size without changing your architecture around (eg by moving things to overnight batch computations, or not doing things per-frame). It's ok if the input size increases a little bit, though; just watch out for multiples. O(n^2) "quadratic" - it's really only going to work up to a certain size of your input, so pay attention to how big it could get. Also, your algorithm may suck -- think hard to see if there's an O(n log(n)) algorithm that would give you what you need. Once you're here, feel very grateful for the amazing hardware we've been gifted with. Not long ago, what you are trying to do would have been impossible for all practical purposes. O(n^3) "cubic" - not qualitatively all that different from O(n^2). The same comments apply, only more so. There's a decent chance that a more clever algorithm could shave this time down to something smaller, eg O(n^2 log(n)) or O(n^2.8...), but then again, there's a good chance that it won't be worth the trouble. (You're already limited in your practical input size, so the constant factors that may be required for the more clever algorithms will probably swamp their advantages for practical cases. Also, thinking is slow; letting the computer chew on it may save you time overall.) O(2^n) "exponential" - the problem is either fundamentally computationally hard or you're being an idiot. These problems have a recognizable flavor to them. Your input sizes are capped at a fairly specific hard limit. You'll know quickly whether you fit into that limit.

And that's it. There are many other possibilities that fit between these (or are greater than O(2^n)), but they don't happen often in practice and they're not qualitatively much different from one of these. Cubic algorithms are already a bit of a stretch; I only included them because I've run into them often enough to be worth mentioning (eg matrix multiplication).

What's actually happening with these classes of algorithms? Well, I think you had a good start, although there are many examples that wouldn't fit these characterizations. But for the above, I'd say it usually goes something like this:

O(1) - you're only looking at most at a fixed-size chunk of your input data, and possibly none of it. Example: the maximum of a sorted list. Or your input size is bounded. Example: addition of two numbers. (Note that addition of N numbers is linear time.)

O(log n) - each element of your input tells you enough to ignore a large fraction of the rest of the input. Example: when you look at an array element in binary search, its value tells you that you can ignore "half" of your array without looking at any of it. Or similarly, the element you look at gives you enough of a summary of a fraction of the remaining input that you won't need to look at it. There's nothing special about halves, though -- if you can only ignore 10% of your input at each step, it's still logarithmic.

O(n) - you do some fixed amount of work per input element. (But see below.)

O(n log(n)) - there are a few variants. You can divide the input into two piles (in no more than linear time), solve the problem independently on each pile, and then combine the two piles to form the final solution. The independence of the two piles is key. Example: classic recursive mergesort. Or, each linear-time pass over the data gets you halfway to your solution. Example: quicksort, if you think in terms of the maximum distance of each element to its final sorted position at each partitioning step. (And yes, I know that it's actually O(n^2) because of degenerate pivot choices. But practically speaking, it falls into my O(n log(n)) category.)

O(n^2) - you have to look at every pair of input elements. Or you don't, but you think you do, and you're using the wrong algorithm.

O(n^3) - um... I don't have a snappy characterization of these. It's probably one of: you're multiplying matrices; you're looking at every pair of inputs, but the operation you do requires looking at all of the inputs again; or the entire graph structure of your input is relevant.

O(2^n) - you need to consider every possible subset of your inputs.

None of these are rigorous. Especially not the linear time algorithms (O(n)): I could come up with a number of examples where you have to look at all of the inputs, then half of them, then half of those, etc. Or the other way around: you fold together pairs of inputs, then recurse on the output. These don't fit the description above, since you're not looking at each input once, but it still comes out as linear time. Still, 99.2% of the time, linear time means looking at each input once.


The way I think about it: you have the task of cleaning up a problem caused by an evil villain V who picks N, and you have to estimate how much longer your problem will take to finish when he increases N.

O(1) -> increasing N really doesn't make any difference at all.

O(log(N)) -> every time V doubles N, you have to spend an extra amount of time T to complete the task. V doubles N again, and you spend the same amount again.

O(N) -> every time V doubles N, you spend twice as much time.

O(N²) -> every time V doubles N, you spend 4 times as much time. (It's not fair!!)

O(N log(N)) -> every time V doubles N, you spend twice as much time, plus a little more.

These are the bounds of an algorithm; computer scientists want to describe how long it is going to take for large values of N. (This gets important when you are factoring the numbers used in cryptography: if computers speed up by a factor of 10, how many more bits do you have to use to ensure it will still take them 100 years to break your encryption, and not just 1 year?)
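
For that cryptography question, the arithmetic is tidy against a brute-force O(2^N) attack: a tenfold speedup is cancelled by only a few extra key bits, and the 100-years-instead-of-1-year requirement adds another log2(100) ≈ 6.6 bits on top, since

2^{\,n + \log_2 10} = 10 \cdot 2^{\,n}, \qquad \log_2 10 \approx 3.3 \ \text{bits}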

Some of the bounds can have weird expressions when it makes a difference to the people involved. I've seen things like O(N log(N) log(log(N))) for some algorithms in Knuth's The Art of Computer Programming. (I can't remember which one off the top of my head.)


One thing that hasn't been touched on yet for some reason:

When you see algorithms with things like O(2^n) or O(n^3) or other nasty values, it often means you're going to have to accept an imperfect answer to your problem in order to get acceptable performance.

Correct solutions that blow up like this are common when dealing with optimization problems. A nearly-correct answer delivered in a reasonable timeframe is better than a correct answer delivered long after the machine has decayed to dust.

Take chess: I don't know exactly what the correct solution is considered to be, but it's probably something like O(n^50) or even worse. It is theoretically impossible for any computer to actually calculate the correct answer: even if you used every particle in the universe as a computing element, each performing an operation in the minimum possible time for the life of the universe, you'd still have a lot of zeroes left over. (Whether a quantum computer can solve it is another matter.)


Think of it as stacking lego blocks (n) vertically and jumping over them.

O(1) means that at each step, you do nothing. The height stays the same.

O(n) means that at each step, you stack c1 blocks, where c1 is a constant.

O(n²) means that at each step, you stack c2 x n blocks, where c2 is a constant and n is the number of blocks stacked so far.

O(n log n) means that at each step, you stack c3 x n x log n blocks, where c3 is a constant and n is the number of blocks stacked so far.


Tell your eight-year-old that log(n) means the number of times you have to chop a log of length n in two for it to get down to size n=1 :p

O(n log n) is usually sorting. O(n²) is usually comparing all pairs of elements.


Does somebody have to be on cocaine to write an O(x!)?

No, just use Prolog. If you write a sorting algorithm in Prolog by just describing that each element should be bigger than the previous one, and let backtracking do the sorting for you, that will be O(x!). Also known as "permutation sort".
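
You don't need Prolog to see the shape of it, though. Here's a C# analogue of permutation sort, as a sketch: build each permutation by backtracking and keep the first one that comes out sorted, visiting up to all x! orderings in the worst case:

using System.Collections.Generic;

// Permutation sort: enumerate permutations (with backtracking, like Prolog
// would) until one happens to be in order. Worst case O(x!) permutations,
// each checked in linear time.
static bool PermutationSort(List<int> remaining, List<int> chosen)
{
    if (remaining.Count == 0)
        return IsSorted(chosen);                   // a full permutation: test it

    for (int i = 0; i < remaining.Count; i++)      // branch on each remaining item
    {
        int item = remaining[i];
        remaining.RemoveAt(i);
        chosen.Add(item);
        if (PermutationSort(remaining, chosen))
            return true;                           // chosen now holds the sorted order
        chosen.RemoveAt(chosen.Count - 1);         // backtrack
        remaining.Insert(i, item);
    }
    return false;
}

static bool IsSorted(List<int> xs)
{
    for (int i = 1; i < xs.Count; i++)
        if (xs[i - 1] > xs[i]) return false;
    return true;
}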


The "intuition" behind Big-O

Imagine a "competition" between two functions over x, f(x) and g(x), as x approaches infinity.

Now, if from some point on (some x) one function always has a higher value than the other, then let's call that function "faster" than the other.

So, for example, if for every x > 100 you see that f(x) > g(x), then f(x) is "faster" than g(x).

In this case we would say that g(x) = O(f(x)). f(x) poses a sort of "speed limit" for g(x), since eventually it passes it and leaves it behind for good.

This isn't exactly the definition of big-O notation, which also states that f(x) only has to be larger than C*g(x) for some constant C (which is just another way of saying that you can't help g(x) win the competition by multiplying it by a constant factor; f(x) will always beat it in the end). The formal definition also uses absolute values. But I hope I've managed to make it intuitive.


Say you have a computer that can solve a problem of a certain size. Now imagine that we can double the performance a few times. How much bigger a problem can we solve with each doubling?

If we can solve a problem of double the size, that's O(n).

If we have some multiplier that isn't one, that's some sort of polynomial complexity. For example, if each doubling allows us to increase the problem size by about 40%, it's O(n²), and about 30% would be O(n³).

If we can only add to the problem size, it's exponential or worse. For example, if each doubling means we can solve a problem that's 1 bigger, it's O(2^n). (This is why brute-forcing a cipher key is effectively impossible with reasonably-sized keys: a 128-bit key requires about 2^64 times as much processing as a 64-bit one.)
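
The numbers behind those claims:

O(n^2):\ n \mapsto \sqrt{2}\,n \approx 1.41\,n \quad (\text{about } 40\% \text{ bigger per doubling})
O(n^3):\ n \mapsto 2^{1/3}\,n \approx 1.26\,n \quad (\text{about } 30\% \text{ bigger per doubling})
O(2^n):\ 2^{128}/2^{64} = 2^{64} \approx 1.8 \times 10^{19}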


This is how I describe it to my non-technical friends:

Consider multi-digit addition. Good old-fashioned, pencil-and-paper addition. The kind you learned when you were 7-8 years old. Given two three-or-four-digit numbers, you can find out what they add up to fairly easily.

If I gave you two 100-digit numbers and asked you what they add up to, figuring it out would be pretty straightforward, even if you had to use pencil and paper. A bright kid could do such an addition in just a few minutes. It would only require about 100 operations.

Now, consider multi-digit multiplication. You probably learned that at around 8 or 9 years old. You (hopefully) did lots of repetitive drills to learn the mechanics behind it.

Now, imagine I gave you those same two 100-digit numbers and told you to multiply them together. This would be a much, much harder task, something that would take you hours to do - and that you'd be unlikely to do without mistakes. The reason for this is that (this version of) multiplication is O(n^2); each digit in the bottom number has to be multiplied by each digit in the top number, leaving a total of about n^2 operations. In the case of the 100-digit numbers, that's 10,000 multiplications.
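
A C# sketch of both pencil-and-paper algorithms, on digits stored least-significant first, makes the contrast concrete: a single carrying pass for addition (O(n)) versus digit-times-digit nested loops for multiplication (O(n²)):

using System;

// Digits stored least-significant first, e.g. 123 => [3, 2, 1].
static int[] Add(int[] a, int[] b)                 // O(n): one pass, carrying as we go
{
    var result = new int[Math.Max(a.Length, b.Length) + 1];
    int carry = 0;
    for (int i = 0; i < result.Length; i++)
    {
        int sum = carry + (i < a.Length ? a[i] : 0) + (i < b.Length ? b[i] : 0);
        result[i] = sum % 10;
        carry = sum / 10;
    }
    return result;                                 // may include a leading zero digit
}

static int[] Multiply(int[] a, int[] b)            // O(n^2): every digit times every digit
{
    var result = new int[a.Length + b.Length];
    for (int i = 0; i < a.Length; i++)
        for (int j = 0; j < b.Length; j++)
        {
            result[i + j] += a[i] * b[j];
            result[i + j + 1] += result[i + j] / 10;   // push the carry up
            result[i + j] %= 10;
        }
    return result;                                 // may include a leading zero digit
}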


Remember the fable of the tortoise and the hare?

Over the long run, the tortoise wins, but over the short run, the hare wins.

That's like O(log N) (tortoise) vs. O(N) (hare).

If two methods differ in their big-O, then there's a level of N at which one of them will win out, but big-O says nothing about how big that N is.


I'll try to explain with simple code examples in C# and JavaScript.

C#

For List<int> numbers = new List<int> {1,2,3,4,5,6,7,12,543,7};

O(1) looks like

return numbers.First();

O(n) looks like

int result = 0;
foreach (int num in numbers)
{
  result += num;
}
return result;

O(n log(n)) looks like

int result = 0;
foreach (int num in numbers)
{
    int index = numbers.Count - 1;
    while (index > 1)
    {
        // yeah, stupid, but couldn't come up with something more useful :-(
        result += numbers[index];
        index /= 2;
    }
}
return result;

O(n²) looks like

int result = 0;
foreach (int outerNum in numbers)
{
    foreach (int innerNum in numbers)
    {
        result += outerNum * innerNum;
    }
}
return result;

O(n!) looks like... well, I'm too tired to come up with anything simple. But I hope you get the general idea?


JavaScript

For const numbers = [1,2,3,4,5,6,7,12,543,7];

O(1) looks like

numbers[0];

O(n) looks like

let result = 0;
for (num of numbers){
    result += num;
}

O(n log(n)) looks like

let result = 0;
for (num of numbers){

    let index = numbers.length - 1;
    while (index > 1){
        // yeah, stupid, but couldn't come up with something more useful :-(
        result += numbers[index];
        index = Math.floor(index/2)
    }
}

O(n²) looks like

let result = 0;
for (outerNum of numbers){
    for (innerNum of numbers){
        result += outerNum * innerNum;
    }
}

To remain sincere to the question asked, I would answer it the way I would answer an 8-year-old kid:

Suppose an ice-cream seller prepares a number of ice creams of different shapes (say, N of them), arranged in a row. You want to eat the one lying in the middle.

Case 1: You can eat an ice cream only after eating all the ice creams smaller than it. You will have to eat half of all the prepared ice creams (the input). The answer depends directly on the size of the input; the solution will be of order O(N).

Case 2: You can directly eat the ice cream in the middle.

The solution will be O(1).

Case 3: You can eat an ice cream only after eating all the ice creams smaller than it, and each time you eat an ice cream you let another kid (a new kid every time) eat all of his ice creams. The total time taken would be N + N + N ... (N/2 times). The solution will be O(N²).


I'll try to write an explanation for a real eight-year-old boy, aside from technical terms and mathematical notions.

Like, what would an O(n²) operation do?

Suppose you are at a party and there are n people at the party, including you. How many handshakes does it take for everyone to have shaken hands with everyone else, given that people would probably forget whom they had shaken hands with at some point?

Note: this approximates to a simplex yielding n(n-1), which is close enough to n².

And what on earth does it mean if an operation is O(n log(n))?

Your favorite team has won, and they are standing in line; there are n players on the team. How many handshakes would it take you to shake hands with every player, given that you will shake each player's hand multiple times: as many times as there are digits in n, the number of players?

Note: this will yield n x log n, with the log taken to base 10.

And does somebody have to be on cocaine to write an O(x!)?

You are a rich kid and in your wardrobe there are a lot of clothes: x drawers, one for each type of clothing. The drawers are next to each other; the first drawer has one item, and each drawer has as many clothes as the drawer to its left, plus one more. So you have one hat, two wigs, ... (x-1) pairs of pants, then x shirts. Now, in how many styles can you dress up using a single item from each drawer?

Note: this example represents the number of leaves in a decision tree where the number of children equals the depth, which works out to 1 x 2 x 3 x ... x x.