I'm asking more about what this means for my code. I understand the concepts mathematically; I just have a hard time grasping what they mean conceptually. For example, if there's an O(1) operation on a data structure, I understand that the number of operations it has to perform won't grow because there are more items. And an O(n) operation means you perform a set of operations on each element. Could somebody fill in the blanks?

Like, what would an O(n²) operation do? And what does it mean if an operation is O(n log n)? And does somebody have to smoke crack to write O(x!)?
Current answer
To stay true to the question as asked, I'll answer it the way I would answer an 8-year-old.
Suppose an ice cream vendor has prepared a number of ice creams in different shapes (say N of them), arranged in order. You want to eat the ice cream in the middle.

Case 1: You can eat the ice cream only after you've eaten every ice cream smaller than it. You'll have to eat half of all the prepared ice creams (the input). The answer depends directly on the size of the input, so the solution is of order O(N).

Case 2: You can eat the middle ice cream directly. The solution is O(1).

Case 3: You can eat the ice cream only after you've eaten every ice cream smaller than it, and each time you eat one, you let another kid (a new kid every time) eat all of his ice creams. The total time is N + N + N ... repeated N/2 times, so the solution is O(N²).
Other answers
I'll try to explain with simple code examples in C# and JavaScript.
C#
For List<int> numbers = new List<int> {1,2,3,4,5,6,7,12,543,7};
O(1) looks like
return numbers.First();
O(n) looks like
int result = 0;
foreach (int num in numbers)
{
result += num;
}
return result;
O(n log n) looks like
int result = 0;
foreach (int num in numbers)
{
int index = numbers.Count - 1;
while (index > 1)
{
// yeah, stupid, but couldn't come up with something more useful :-(
result += numbers[index];
index /= 2;
}
}
return result;
O(n²) looks like
int result = 0;
foreach (int outerNum in numbers)
{
foreach (int innerNum in numbers)
{
result += outerNum * innerNum;
}
}
return result;
O(n!) looks like... well, I'm too tired to come up with anything simple. But I hope you get the general idea? (A sketch follows below.)
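For what it's worth, here is one shape an O(n!) routine can take: visiting every permutation of the input. This is my own sketch, not part of the original answer, and the helper name Permute is made up for illustration.

using System;
using System.Collections.Generic;

class FactorialDemo
{
    // Visits all n! orderings of the list: the call tree branches n, then n-1, then n-2, ... times.
    static void Permute(List<int> items, int start, Action<List<int>> visit)
    {
        if (start == items.Count - 1)
        {
            visit(items);
            return;
        }
        for (int i = start; i < items.Count; i++)
        {
            (items[start], items[i]) = (items[i], items[start]); // pick element i for position start
            Permute(items, start + 1, visit);
            (items[start], items[i]) = (items[i], items[start]); // undo the swap
        }
    }

    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3 };
        Permute(numbers, 0, p => Console.WriteLine(string.Join(",", p))); // prints 3! = 6 lines
    }
}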
JavaScript
For const numbers = [1,2,3,4,5,6,7,12,543,7];
O(1) looks like
numbers[0];
O(n) looks like
let result = 0;
for (const num of numbers){
result += num;
}
O(n log n) looks like
let result = 0;
for (const num of numbers){
let index = numbers.length - 1;
while (index > 1){
// yeah, stupid, but couldn't come up with something more useful :-(
result += numbers[index];
index = Math.floor(index/2)
}
}
O(n²) looks like
let result = 0;
for (const outerNum of numbers){
for (const innerNum of numbers){
result += outerNum * innerNum;
}
}
A lot of this is easy to demonstrate with something non-programming, like shuffling cards.

Sorting a deck of cards by going through the whole deck to find the ace of spades, then going through the whole deck to find the 2 of spades, and so on, is worst-case n² if the deck is already sorted backwards: you look at 52 cards 52 times.
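As a rough sketch of that card-sorting procedure (my own illustration, not from the answer): one full scan of the deck per rank, so n scans of n cards.

using System;
using System.Collections.Generic;

class CardSortDemo
{
    static void Main()
    {
        int[] deck = { 3, 1, 4, 2 }; // stand-in for a shuffled 52-card deck
        var sorted = new List<int>();
        for (int want = 1; want <= deck.Length; want++)   // one pass per rank: n iterations...
            foreach (int card in deck)                    // ...each scanning the whole deck: n cards
                if (card == want) { sorted.Add(card); break; }
        Console.WriteLine(string.Join(",", sorted));      // 1,2,3,4
    }
}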
In general, the really bad algorithms aren't necessarily intentional; they're usually a misuse of something else, like calling a linear method inside another method that repeats linearly over the same collection.
To understand O(n log n), remember that log n means log-base-2 of n. Then look at each part:
O(n) is when you do a set of work for each item in the collection.

O(log n) is when the number of operations is the same as the exponent you raise 2 to in order to get the number of items. A binary search, for instance, has to cut the collection in half log n times.

O(n log n) is a combination: you're doing something like a binary search for each item in the collection. Efficient sorts often work by doing one loop per item, and in each loop doing a good search to find the right place to put the item or group in question. Hence n * log n. (See the sketch after this list.)
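A minimal sketch of that n * log n shape (my example, not from the answer): one binary search per item. The names sorted and queries are made up for illustration.

using System;

class NLogNDemo
{
    // Counts how many values in queries occur in the sorted array: n searches of log n steps each.
    static int CountMatches(int[] sorted, int[] queries)
    {
        int found = 0;
        foreach (int q in queries)                   // n items...
            if (Array.BinarySearch(sorted, q) >= 0)  // ...each one a log n binary search
                found++;
        return found;
    }

    static void Main()
    {
        Console.WriteLine(CountMatches(new[] { 1, 2, 3, 4, 5, 6, 7, 12, 543 }, new[] { 5, 9, 543 })); // 2
    }
}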
Don Neufeld's answer is very good, but I'd probably explain it in two parts: first, there's a rough hierarchy of O()'s that most algorithms fall into. Then you can look at each of those and sketch out what algorithms of that typical time complexity do.
For practical purposes, the only O()'s that seem to matter are:
O(1) "constant time" - the time required is independent of the size of the input. As a rough category, I would include algorithms such as hash lookups and Union-Find here, even though neither of those are actually O(1). O(log(n)) "logarithmic" - it gets slower as you get larger inputs, but once your input gets fairly large, it won't change enough to worry about. If your runtime is ok with reasonably-sized data, you can swamp it with as much additional data as you want and it'll still be ok. O(n) "linear" - the more input, the longer it takes, in an even tradeoff. Three times the input size will take roughly three times as long. O(n log(n)) "better than quadratic" - increasing the input size hurts, but it's still manageable. The algorithm is probably decent, it's just that the underlying problem is more difficult (decisions are less localized with respect to the input data) than those problems that can be solved in linear time. If your input sizes are getting up there, don't assume that you could necessarily handle twice the size without changing your architecture around (eg by moving things to overnight batch computations, or not doing things per-frame). It's ok if the input size increases a little bit, though; just watch out for multiples. O(n^2) "quadratic" - it's really only going to work up to a certain size of your input, so pay attention to how big it could get. Also, your algorithm may suck -- think hard to see if there's an O(n log(n)) algorithm that would give you what you need. Once you're here, feel very grateful for the amazing hardware we've been gifted with. Not long ago, what you are trying to do would have been impossible for all practical purposes. O(n^3) "cubic" - not qualitatively all that different from O(n^2). The same comments apply, only more so. There's a decent chance that a more clever algorithm could shave this time down to something smaller, eg O(n^2 log(n)) or O(n^2.8...), but then again, there's a good chance that it won't be worth the trouble. (You're already limited in your practical input size, so the constant factors that may be required for the more clever algorithms will probably swamp their advantages for practical cases. Also, thinking is slow; letting the computer chew on it may save you time overall.) O(2^n) "exponential" - the problem is either fundamentally computationally hard or you're being an idiot. These problems have a recognizable flavor to them. Your input sizes are capped at a fairly specific hard limit. You'll know quickly whether you fit into that limit.
That's it. There are many other possibilities between these (or greater than O(2^n)), but they don't come up often in practice and they're not qualitatively much different from one of these. Cubic algorithms are already a bit of a stretch; I only included them because I've run into them often enough to be worth mentioning (e.g. matrix multiplication).
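If you want a feel for how differently these categories scale, a quick sketch like this (my addition, not part of the answers above) prints the raw operation counts side by side:

using System;

class GrowthDemo
{
    static void Main()
    {
        foreach (int n in new[] { 10, 100, 1000 })
        {
            // Compare log n, n log n, n^2, and 2^n for each input size.
            Console.WriteLine($"n={n,4}  log n={Math.Log(n, 2):F0}  n log n={n * Math.Log(n, 2):F0}  " +
                              $"n^2={(long)n * n}  2^n={Math.Pow(2, n):E1}");
        }
    }
}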
What's actually going on with algorithms in these classes? I think you had a good start, although there are many examples that wouldn't fit these characterizations. But for the ones above, I'd say it usually goes something like this:
O(1) - you're only looking at most at a fixed-size chunk of your input data, and possibly none of it. Example: the maximum of a sorted list. Or your input size is bounded. Example: addition of two numbers. (Note that addition of N numbers is linear time.)

O(log n) - each element of your input tells you enough to ignore a large fraction of the rest of the input. Example: when you look at an array element in binary search, its value tells you that you can ignore "half" of your array without looking at any of it. Or similarly, the element you look at gives you enough of a summary of a fraction of the remaining input that you won't need to look at it. There's nothing special about halves, though -- if you can only ignore 10% of your input at each step, it's still logarithmic.

O(n) - you do some fixed amount of work per input element. (But see below.)

O(n log(n)) - there are a few variants. You can divide the input into two piles (in no more than linear time), solve the problem independently on each pile, and then combine the two piles to form the final solution. The independence of the two piles is key. Example: classic recursive mergesort. Each linear-time pass over the data gets you halfway to your solution. Example: quicksort if you think in terms of the maximum distance of each element to its final sorted position at each partitioning step (and yes, I know that it's actually O(n^2) because of degenerate pivot choices. But practically speaking, it falls into my O(n log(n)) category.)

O(n^2) - you have to look at every pair of input elements. Or you don't, but you think you do, and you're using the wrong algorithm.

O(n^3) - um... I don't have a snappy characterization of these. It's probably one of: you're multiplying matrices; you're looking at every pair of inputs, but the operation you do requires looking at all of the inputs again; or the entire graph structure of your input is relevant.

O(2^n) - you need to consider every possible subset of your inputs.
None of these are rigorous. Especially not the linear-time algorithms (O(n)): I could come up with a number of examples where you have to look at all of the input, then half of it, then half of that, and so on. Or the other way around: you fold together pairs of inputs, then recurse on the output. These don't fit the description above, since you're not looking at each input once, but they still come out as linear time. Still, 99.2% of the time, linear time means looking at each input once.
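To make the "two piles" variant of O(n log(n)) concrete, here's a minimal recursive mergesort sketch (my example; the answer above only names the algorithm): each level of recursion does linear work, and there are log n levels.

using System;

class MergeSortDemo
{
    static int[] MergeSort(int[] a)
    {
        if (a.Length <= 1) return a;
        int mid = a.Length / 2;
        int[] left = MergeSort(a[..mid]);   // split into two independent piles...
        int[] right = MergeSort(a[mid..]);
        // ...solve each, then combine them in linear time.
        int[] merged = new int[a.Length];
        int i = 0, j = 0, k = 0;
        while (i < left.Length && j < right.Length)
            merged[k++] = left[i] <= right[j] ? left[i++] : right[j++];
        while (i < left.Length) merged[k++] = left[i++];
        while (j < right.Length) merged[k++] = right[j++];
        return merged;
    }

    static void Main()
    {
        Console.WriteLine(string.Join(",", MergeSort(new[] { 5, 2, 8, 1, 9, 3 }))); // 1,2,3,5,8,9
    }
}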
One thing that, for some reason, hasn't been mentioned yet:
When you see algorithms like O(2^n) or O(n^3), it often means you're going to have to accept an imperfect answer to your problem in order to get acceptable performance.
Correct solutions that blow up like this are common when dealing with optimization problems. A nearly-correct answer delivered in a reasonable timeframe is better than a correct answer delivered long after the machine has crumbled to dust.
Take chess as an example: I don't know what the correct solution is considered to be, but it's probably something like O(n^50) or even worse. In theory, no computer could ever actually compute the correct answer: even if you used every particle in the universe as a computing element, each performing an operation in the smallest possible time, for the entire life of the universe, you'd still have a lot of zeroes left over. (Whether a quantum computer could solve it is another matter.)