I'm asking more about what this means for my code. I understand the math behind these concepts; what I have trouble with is what they mean conceptually. For example, if an operation on a data structure is O(1), I know the number of operations it has to perform won't grow just because there are more items. And an O(n) operation means you perform some set of operations on each element. Could somebody fill in the blanks here?

Like, what does an O(n²) operation actually do? And what does it mean if an operation is O(n log(n))? And does somebody have to be on cocaine to write an O(x!)?


Current answer

I'll try to write an explanation for an actual eight-year-old boy, without technical terms or mathematical notions.

Like, what does an O(n²) operation actually do?

Suppose you are at a party and there are n people there, including you. How many handshakes does it take for everyone to have shaken hands with everyone else, given that people will probably forget at some point whom they have already shaken hands with?

Note: this approximates to a simplex yielding n(n-1) handshakes, which is close enough to n².
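A rough sketch of the same counting in code (the function name count_handshakes and the party size of 10 are made up for illustration); the nested loop counts one handshake per pair of guests:

```python
def count_handshakes(n):
    """Every guest shakes hands with every other guest exactly once."""
    handshakes = 0
    for i in range(n):                  # each guest...
        for j in range(i + 1, n):       # ...greets everyone they haven't met yet
            handshakes += 1
    return handshakes

print(count_handshakes(10))   # 45, i.e. n * (n - 1) / 2, which grows like n^2
```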

And what does it mean if an operation is O(n log(n))?

Your favorite team has won, they are standing in line, and there are n players on the team. How many handshakes does it take you to shake hands with every player, given that you will shake each player's hand multiple times: as many times as there are digits in the number of players, n?

Note: this yields n × log n (to the base 10).
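A minimal sketch of that idea, where the digit count of n stands in for log₁₀ n (lineup_handshakes is a made-up name):

```python
def lineup_handshakes(n):
    """Shake each of the n players' hands once per digit of n (digits ~ log10 n)."""
    digits = len(str(n))       # e.g. n = 1000 has 4 digits, roughly log10(1000) + 1
    return n * digits

print(lineup_handshakes(1000))   # 4000 handshakes, i.e. roughly n * log10(n)
```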

Does somebody have to be on cocaine to write an O(x!)?

You are a rich kid and your wardrobe has a lot of clothes: there are x drawers, one for each kind of clothing, placed next to each other. The first drawer holds one item, and each drawer holds one more item than the drawer to its left, so you have one hat, two wigs, ... (x-1) pairs of pants, and then x shirts. Now, how many outfits can you put together using a single item from each drawer?

Note: this example represents how many leaves there are in a decision tree where number of children = depth, which works out to 1 × 2 × 3 × ... × x.
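In code, the outfit count is just that running product (count_outfits is a hypothetical name; math.factorial gives the same answer directly):

```python
import math

def count_outfits(x):
    """Drawer i holds i items; picking one item from each drawer gives 1 * 2 * ... * x outfits."""
    outfits = 1
    for drawer_size in range(1, x + 1):
        outfits *= drawer_size
    return outfits

print(count_outfits(5))       # 120
print(math.factorial(5))      # same thing: x! blows up very quickly
```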

Other answers

A lot of this is easy to demonstrate with something non-programming, like shuffling a deck of cards.

Sorting a deck of cards by going through the whole deck to find the ace of spades, then going through the whole deck to find the 2 of spades, and so on, is worst case n² if the deck is already sorted in reverse: you look at all 52 cards 52 times.
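A hedged sketch of that card-sorting approach, with cards represented as plain integers for simplicity (sort_deck is an illustrative name, not a standard routine):

```python
def sort_deck(deck):
    """Repeatedly scan the whole remaining deck for the lowest card: O(n^2) overall."""
    remaining = list(deck)
    sorted_deck = []
    while remaining:
        lowest = remaining[0]
        for card in remaining:        # one full pass per card placed
            if card < lowest:
                lowest = card
        remaining.remove(lowest)
        sorted_deck.append(lowest)
    return sorted_deck

print(sort_deck(list(range(52, 0, -1)))[:5])   # reverse-sorted deck, the worst case
```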

In general, really bad algorithms aren't necessarily intentional; they're usually a misuse of something else, such as calling a linear method inside another method that loops linearly over the same collection.
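A small illustration of that accidental-quadratic pattern, using made-up helper names (find_duplicates and find_duplicates_fast):

```python
def find_duplicates(items):
    """Looks linear, but count() rescans the whole list each time: O(n) * O(n) = O(n^2)."""
    duplicates = []
    for item in items:                                          # O(n) iterations...
        if items.count(item) > 1 and item not in duplicates:    # ...each doing O(n) work
            duplicates.append(item)
    return duplicates

def find_duplicates_fast(items):
    """Same result with a set: one O(1) membership check per element, O(n) overall."""
    seen, duplicates = set(), set()
    for item in items:
        if item in seen:
            duplicates.add(item)
        seen.add(item)
    return sorted(duplicates)

print(find_duplicates([1, 2, 3, 2, 1]))        # [1, 2]
print(find_duplicates_fast([1, 2, 3, 2, 1]))   # [1, 2]
```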

Just to respond to a couple of comments on my post above:

Domenic - I'm on this site, and I care. Not for pedantry's sake, but because we - as programmers - typically care about precision. Using O( ) notation incorrectly in the style that some have done here renders it kind of meaningless; we may just as well say something takes n^2 units of time as O( n^2 ) under the conventions used here. Using the O( ) adds nothing. It's not just a small discrepancy between common usage and mathematical precision that I'm talking about, it's the difference between it being meaningful and it not.

I know many, many excellent programmers who use these terms precisely. Saying "oh, we're programmers, therefore we don't care" cheapens the whole enterprise.

onebyone - well, not quite, although I take your point. It's not O(1) for arbitrarily large n, which is pretty much the definition of O( ). It just goes to show that O( ) has limited applicability for bounded n, where we would rather talk about the number of steps taken than about a bound on that number.

No, an O(n) algorithm does not mean it will perform an operation on each element. Big-O notation gives you a way to talk about the "speed" of your algorithm independent of your actual machine.

O(n) means that the time your algorithm takes grows linearly as your input grows. O(n²) means that the time your algorithm takes grows with the square of your input. And so on.
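One way to see that machine-independence is to count operations rather than measure wall-clock time; the helper names below are invented for illustration:

```python
def linear_ops(n):
    """O(n): one unit of work per element."""
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def quadratic_ops(n):
    """O(n^2): one unit of work per pair of elements."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops

for n in (10, 100, 1000):
    # a 10x bigger n means 10x more linear work but 100x more quadratic work
    print(n, linear_ops(n), quadratic_ops(n))
```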

don.neufeld's answer is very good, but I would probably explain it in two parts: first, there is a rough hierarchy of O( )'s that most algorithms fall into. Then, you can look at each of those and sketch what a typical algorithm of that time complexity does.

For practical purposes, the only O( )'s that ever seem to matter are:

O(1) "constant time" - the time required is independent of the size of the input. As a rough category, I would include algorithms such as hash lookups and Union-Find here, even though neither of those are actually O(1). O(log(n)) "logarithmic" - it gets slower as you get larger inputs, but once your input gets fairly large, it won't change enough to worry about. If your runtime is ok with reasonably-sized data, you can swamp it with as much additional data as you want and it'll still be ok. O(n) "linear" - the more input, the longer it takes, in an even tradeoff. Three times the input size will take roughly three times as long. O(n log(n)) "better than quadratic" - increasing the input size hurts, but it's still manageable. The algorithm is probably decent, it's just that the underlying problem is more difficult (decisions are less localized with respect to the input data) than those problems that can be solved in linear time. If your input sizes are getting up there, don't assume that you could necessarily handle twice the size without changing your architecture around (eg by moving things to overnight batch computations, or not doing things per-frame). It's ok if the input size increases a little bit, though; just watch out for multiples. O(n^2) "quadratic" - it's really only going to work up to a certain size of your input, so pay attention to how big it could get. Also, your algorithm may suck -- think hard to see if there's an O(n log(n)) algorithm that would give you what you need. Once you're here, feel very grateful for the amazing hardware we've been gifted with. Not long ago, what you are trying to do would have been impossible for all practical purposes. O(n^3) "cubic" - not qualitatively all that different from O(n^2). The same comments apply, only more so. There's a decent chance that a more clever algorithm could shave this time down to something smaller, eg O(n^2 log(n)) or O(n^2.8...), but then again, there's a good chance that it won't be worth the trouble. (You're already limited in your practical input size, so the constant factors that may be required for the more clever algorithms will probably swamp their advantages for practical cases. Also, thinking is slow; letting the computer chew on it may save you time overall.) O(2^n) "exponential" - the problem is either fundamentally computationally hard or you're being an idiot. These problems have a recognizable flavor to them. Your input sizes are capped at a fairly specific hard limit. You'll know quickly whether you fit into that limit.

That's it. There are many other possibilities that fit between these (or are greater than O(2^n)), but they don't come up often in practice and they aren't qualitatively very different from one of these. Cubic algorithms are already a bit of a stretch; I only included them because I've run into them often enough to be worth mentioning (e.g. matrix multiplication).
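To tie the hierarchy above to everyday code, here is a rough sketch of a representative operation for each class; the costs in the comments are the usual average-case behavior of these Python built-ins, and the sizes are arbitrary:

```python
from bisect import bisect_left
from itertools import combinations

data = list(range(1_000))
index = {value: True for value in data}

hit = index[500]                            # O(1): hash lookup, size doesn't matter
pos = bisect_left(data, 500)                # O(log n): binary search in a sorted list
total = sum(data)                           # O(n): touch every element once
ordered = sorted(data, reverse=True)        # O(n log n): a good general-purpose sort
pairs = list(combinations(data[:100], 2))   # O(n^2): every pair of elements
subsets = [[x for i, x in enumerate(data[:15]) if mask >> i & 1]
           for mask in range(2 ** 15)]      # O(2^n): every subset, only feasible for tiny n
```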

So what actually happens for these classes of algorithms? I think you had a good start, although there are many examples that don't fit these characterizations. But for the classes above, it usually goes something like this:

O(1) - you're only looking at most at a fixed-size chunk of your input data, and possibly none of it. Example: the maximum of a sorted list. Or your input size is bounded. Example: addition of two numbers. (Note that addition of N numbers is linear time.)

O(log n) - each element of your input tells you enough to ignore a large fraction of the rest of the input. Example: when you look at an array element in binary search, its value tells you that you can ignore "half" of your array without looking at any of it. Or similarly, the element you look at gives you enough of a summary of a fraction of the remaining input that you won't need to look at it. There's nothing special about halves, though -- if you can only ignore 10% of your input at each step, it's still logarithmic.

O(n) - you do some fixed amount of work per input element. (But see below.)

O(n log(n)) - there are a few variants. You can divide the input into two piles (in no more than linear time), solve the problem independently on each pile, and then combine the two piles to form the final solution. The independence of the two piles is key. Example: classic recursive mergesort. Or, each linear-time pass over the data gets you halfway to your solution. Example: quicksort, if you think in terms of the maximum distance of each element to its final sorted position at each partitioning step (and yes, I know that it's actually O(n^2) because of degenerate pivot choices. But practically speaking, it falls into my O(n log(n)) category.)

O(n^2) - you have to look at every pair of input elements. Or you don't, but you think you do, and you're using the wrong algorithm.

O(n^3) - um... I don't have a snappy characterization of these. It's probably one of: you're multiplying matrices; you're looking at every pair of inputs, but the operation you do requires looking at all of the inputs again; or the entire graph structure of your input is relevant.

O(2^n) - you need to consider every possible subset of your inputs.

None of these are rigorous. Especially not the linear time algorithms (O(n)): I could come up with a number of examples where you have to look at all of the inputs, then half of them, then half of those, and so on. Or the other way around: you fold together pairs of inputs, then recurse on the output. These don't fit the description above, since you're not looking at each input once, yet it still comes out as linear time. Still, 99.2% of the time, linear time means looking at each input once.
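To make the O(log n) characterization above concrete ("each element tells you enough to ignore half of what's left"), here is a minimal binary search sketch:

```python
def binary_search(sorted_items, target):
    """Each comparison lets you discard half of the remaining range: O(log n) steps."""
    lo, hi = 0, len(sorted_items)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] < target:
            lo = mid + 1          # ignore the lower half without looking at it
        else:
            hi = mid              # ignore the upper half without looking at it
    if lo < len(sorted_items) and sorted_items[lo] == target:
        return lo
    return -1

print(binary_search(list(range(0, 100, 2)), 34))   # index 17, found in about 6 comparisons
```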

One way of thinking about it is this:

O(N²) means that for every element, you are doing something with every other element, such as comparing them. Bubble sort is an example of this.
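A short bubble sort sketch to make that concrete (each element ends up being compared against its neighbours over and over):

```python
def bubble_sort(items):
    """Repeatedly compare adjacent elements and swap them: O(N^2) comparisons."""
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]   # swap the out-of-order pair
    return items

print(bubble_sort([5, 1, 4, 2, 3]))   # [1, 2, 3, 4, 5]
```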

O(N log N) means that for every element, you only need to look at log N of the elements. This is usually because you know something about the elements that lets you make an efficient choice. The most efficient sorts, such as merge sort, are an example of this.
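A minimal recursive merge sort sketch: the input is halved log N times, and each level does O(N) work to merge the halves back together:

```python
def merge_sort(items):
    """Split in half, sort each half, merge: log N levels, O(N) work per level."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two already-sorted halves
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 1, 4, 2, 3]))   # [1, 2, 3, 4, 5]
```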

O(N!) means doing something for every possible permutation of the N elements. The traveling salesman is an example of this: there are N! ways to visit the nodes, and the brute force solution is to look at the total cost of every possible permutation to find the best one.
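A brute-force traveling salesman sketch along those lines; the distance matrix is invented test data and brute_force_tsp is just an illustrative name:

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Try every ordering of the cities (N! of them) and keep the cheapest round trip."""
    cities = range(len(dist))
    best_cost, best_tour = float("inf"), None
    for tour in permutations(cities):
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))
        cost += dist[tour[-1]][tour[0]]       # return to the starting city
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# a tiny symmetric distance matrix: 4 cities means 4! = 24 candidate tours
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(brute_force_tsp(dist))   # (23, (0, 1, 3, 2)) for this particular matrix
```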