Are there any cases where you would prefer O(log n) time complexity to O(1) time complexity? Or O(n) to O(log n)?
Do you have any examples?
There is always the hidden constant, which can be lower on the O(log n) algorithm. So it can work faster in practice for real-life data.

There is also the space concern (e.g. running on a toaster).

There is also the developer time concern - O(log n) may be 1000x easier to implement and to verify.
I am surprised nobody has mentioned memory-bound applications yet.

There may be an algorithm that has fewer floating point operations either due to its complexity (i.e. O(1) < O(log n)) or because the constant in front of the complexity is smaller (i.e. 2n² < 6n²). Regardless, you might still prefer the algorithm with more FLOPs if the lower-FLOP algorithm is more memory-bound.

What I mean by "memory-bound" is that you are often accessing data that is constantly out-of-cache. In order to fetch this data, you have to pull the memory from your actual memory space into your cache before you can perform your operation on it. This fetching step is often quite slow - much slower than your operation itself.

Therefore, if your algorithm requires more operations (yet these operations are performed on data that is already in cache [and therefore no fetching required]), it will still out-perform your algorithm with fewer operations (which must be performed on out-of-cache data [and therefore require a fetch]) in terms of actual wall-time.
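A quick way to feel this effect is to time the same number of additions over a large array with cache-friendly versus cache-hostile access patterns. This is only an illustrative sketch (runnable in Node or a browser; the array size and stride are arbitrary choices), not a rigorous benchmark:

```js
// Illustrative only: same number of additions, very different memory behavior.
const N = 1 << 23;                     // ~8M doubles (64 MB), far bigger than cache
const data = new Float64Array(N);
for (let i = 0; i < N; i++) data[i] = i;

const time = (label, fn) => {
  const t0 = performance.now();
  const sum = fn();
  console.log(label, (performance.now() - t0).toFixed(1), 'ms', sum);
};

// Sequential walk: the hardware prefetcher keeps the next values in cache.
time('sequential', () => {
  let s = 0;
  for (let i = 0; i < N; i++) s += data[i];
  return s;
});

// Strided walk: touches the same N elements, but nearly every access
// lands on a cache line that has not been loaded yet.
time('strided', () => {
  let s = 0;
  const stride = 4099;                 // odd prime, co-prime with N: visits every index
  for (let i = 0, j = 0; i < N; i++, j = (j + stride) % N) s += data[j];
  return s;
});
```

On typical hardware the strided run is several times slower, despite performing identical arithmetic.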
Consider a red-black tree. It has access, search, insert, and delete of O(log n). Compare to an array, which has access of O(1) and the rest of the operations are O(n).

So given an application where we insert, delete, or search more often than we access, and a choice between only these two structures, we would prefer the red-black tree. In this case, you might say we prefer the red-black tree's more cumbersome O(log n) access time.

Why? Because the access is not our overriding concern. We are making a trade-off: the performance of our application is more heavily influenced by factors other than this one. We allow this particular algorithm to suffer performance because we make large gains by optimizing other algorithms.
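To make the trade-off concrete, here is a hedged sketch of the array side in JavaScript: index access is O(1), but keeping the array sorted costs an O(n) splice per insert, which is exactly the cost a balanced tree avoids:

```js
// Toy sketch: a sorted array gives O(1) access by index, but every insert
// pays an O(n) splice; a balanced tree would make all operations O(log n).
const sortedInsert = (arr, x) => {
  let lo = 0, hi = arr.length;
  while (lo < hi) {                    // binary search for the insertion point
    const mid = (lo + hi) >>> 1;
    if (arr[mid] < x) lo = mid + 1; else hi = mid;
  }
  arr.splice(lo, 0, x);                // O(n): shifts every element after lo
};

const a = [];
for (const x of [5, 1, 4, 2, 3]) sortedInsert(a, x);
console.log(a);        // [1, 2, 3, 4, 5]
console.log(a[2]);     // 3 - O(1) access by index
```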
So the answer to your question is simply this: when the algorithm's growth rate isn't what we want to optimize, when we want to optimize something else. All of the other answers are special cases of this. Sometimes we optimize the run time of other operations. Sometimes we optimize for memory. Sometimes we optimize for security. Sometimes we optimize maintainability. Sometimes we optimize for development time. Even the overriding constant being low enough to matter is optimizing for run time when you know the growth rate of the algorithm isn't the greatest impact on run time. (If your data set was outside this range, you would optimize for the growth rate of the algorithm because it would eventually dominate the constant.) Everything has a cost, and in many cases, we trade the cost of a higher growth rate for the algorithm to optimize something else.
Alistra nailed it but failed to provide any examples so I will.

You have a list of 10,000 UPC codes for what your store sells. 10-digit UPC, integer for price (price in pennies) and 30 characters of description for the receipt.

O(log N) approach: You have a sorted list. 44 bytes if ASCII, 84 if Unicode. Alternately, treat the UPC as an int64 and you get 42 & 72 bytes. 10,000 records - in the highest case you're looking at a bit under a megabyte of storage.

O(1) approach: Don't store the UPC, instead you use it as an index into an array. In the lowest case you're looking at almost a third of a terabyte of storage.
Which approach you use depends on your hardware. On most any reasonable modern configuration you're going to use the log N approach. I can picture the second approach being the right answer if for some reason you're running in an environment where RAM is critically short but you have plenty of mass storage. A third of a terabyte on a disk is no big deal, getting your data in one probe of the disk is worth something. The simple binary approach takes 13 probes on average. (Note, however, that by clustering your keys you can get this down to a guaranteed 3 reads and in practice you would cache the first one.)
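For the curious, the back-of-the-envelope arithmetic behind those numbers looks like this (a sketch using the per-field byte counts assumed in this answer):

```js
// Illustrative arithmetic only, using the byte counts assumed in the answer.
const records = 10_000;
const upcInt64 = 8;                          // UPC packed into an int64
const price = 4;                             // integer price in pennies
const desc = 30;                             // 30-character ASCII description

// O(log N): a sorted list of 10,000 full records.
const sortedListBytes = records * (upcInt64 + price + desc);
console.log(sortedListBytes);                // 420000 bytes - well under a megabyte

// O(1): index directly by the 10-digit UPC, so one slot per possible code.
const possibleUpcs = 10 ** 10;
const directArrayBytes = possibleUpcs * (price + desc);
console.log(directArrayBytes / 1e12);        // 0.34 - about a third of a terabyte
```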
The possibility to execute an algorithm in parallel.

I don't know if there is an example for the classes O(log n) and O(1), but for some problems, you choose an algorithm with a higher complexity class when the algorithm is easier to execute in parallel.

Some algorithms cannot be parallelized but have a very low complexity class. Consider another algorithm which achieves the same result and can be parallelized easily, but has a higher complexity class. When executed on one machine, the second algorithm is slower, but when executed on multiple machines, the real execution time gets lower and lower while the first algorithm cannot speed up.
Yes.
In a real case, we ran some table lookup tests with both short and long string keys.

We used a std::map, a std::unordered_map with a hash that samples at most 10 times over the length of the string (our keys tend to be guid-like, so this is decent), and a hash that samples every character (in theory reduced collisions), an unsorted vector where we do a == compare, and (if I remember correctly) an unsorted vector where we also store a hash, first compare the hash, then compare the characters.

These algorithms range from O(1) (unordered_map) to O(n) (linear search).

For modest sized N, quite often the O(n) beat the O(1). We suspect this is because the node-based containers required our computer to jump around in memory more, while the linear-based containers did not.

O(lg n) exists between the two. I don't remember how it did.

The performance difference wasn't that large, and on larger data sets the hash-based one performed much better. So we stuck with the hash-based unordered map.
In practice, for reasonable size n, O(lg n) is O(1). If your computer only has room for 4 billion entries in your table, then O(lg n) is bounded above by 32. (lg(2^32) = 32) (in computer science, lg is shorthand for log base 2).

In practice, lg(n) algorithms are slower than O(1) algorithms not because of the logarithmic growth factor, but because the lg(n) portion usually means there is a certain level of complexity to the algorithm, and that complexity adds a larger constant factor than any of the "growth" from the lg(n) term.

However, complex O(1) algorithms (like hash mapping) can easily have a similar or larger constant factor.
A more general question is if there are situations where one would prefer an O(f(n)) algorithm to an O(g(n)) algorithm even though g(n) << f(n) as n tends to infinity. As others have already mentioned, the answer is clearly "yes" in the case where f(n) = log(n) and g(n) = 1. It is sometimes yes even in the case that f(n) is polynomial but g(n) is exponential. A famous and important example is that of the Simplex Algorithm for solving linear programming problems. In the 1970s it was shown to be O(2^n). Thus, its worst-case behavior is infeasible. But its average-case behavior is extremely good, even for practical problems with tens of thousands of variables and constraints. In the 1980s, polynomial time algorithms (such as Karmarkar's interior-point algorithm) for linear programming were discovered, but 30 years later the simplex algorithm still seems to be the algorithm of choice (except for certain very large problems). This is for the obvious reason that average-case behavior is often more important than worst-case behavior, but also for a more subtle reason that the simplex algorithm is in some sense more informative (e.g. sensitivity information is easier to extract).
Let's say you're implementing a blacklist on an embedded system, where numbers between 0 and 1,000,000 might be blacklisted. That leaves you two possible options:

- Use a bitset of 1,000,000 bits
- Use a sorted array of the blacklisted integers and use a binary search to access them

Access to the bitset will have guaranteed constant access. In terms of time complexity, it is optimal. Both from a theoretical and from a practical point of view (it is O(1) with an extremely low constant overhead).

Still, you might want to prefer the second solution. Especially if you expect the number of blacklisted integers to be very small, as it will be more memory efficient.

And even if you do not develop for an embedded system where memory is scarce, I can just increase the arbitrary limit of 1,000,000 to 1,000,000,000,000 and make the same argument. Then the bitset would require about 125GB of memory. Having a guaranteed worst-case complexity of O(1) might not convince your boss to provide you with such a powerful server.

Here, I would strongly prefer a binary search (O(log n)) or a binary tree (O(log n)) over the O(1) bitset. And in practice, a hash table with its worst-case complexity of O(n) would probably beat all of them.
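A minimal sketch of both options in JavaScript (the helper names and the sample blacklist are made up for illustration):

```js
// Option 1: bitset - O(1) lookups, but always LIMIT/8 bytes (~125 KB here).
const LIMIT = 1_000_000;
const bits = new Uint8Array(LIMIT >>> 3);
const ban = n => { bits[n >>> 3] |= 1 << (n & 7); };
const isBannedBitset = n => (bits[n >>> 3] & (1 << (n & 7))) !== 0;

// Option 2: sorted array + binary search - O(log n) lookups,
// but memory proportional to the number of blacklisted values only.
const blacklist = [13, 42, 999_999];            // kept sorted
const isBannedSorted = n => {
  let lo = 0, hi = blacklist.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >>> 1;
    if (blacklist[mid] === n) return true;
    if (blacklist[mid] < n) lo = mid + 1; else hi = mid - 1;
  }
  return false;
};

ban(42);
console.log(isBannedBitset(42), isBannedSorted(42));   // true true
console.log(isBannedBitset(7), isBannedSorted(7));     // false false
```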
There is a good use case for using an O(log(n)) algorithm instead of an O(1) algorithm that the numerous other answers have ignored: immutability. Hash maps have O(1) puts and gets, assuming good distribution of hash values, but they require mutable state. Immutable tree maps have O(log(n)) puts and gets, which is asymptotically slower. However, immutability can be valuable enough to make up for the worse performance, and in the case where multiple versions of the map need to be retained, immutability allows you to avoid having to copy the map, which is O(n), and can therefore improve performance.
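As an illustration of the path-copying idea behind such structures, here is a toy persistent binary search tree sketch in JavaScript (unbalanced, so not actually O(log n) in the worst case; a real immutable map would add rebalancing):

```js
// Path copying: insert returns a NEW root that shares untouched subtrees
// with the old one, so old versions stay valid without an O(n) copy.
const node = (key, left = null, right = null) => ({ key, left, right });

const insert = (root, key) => {
  if (root === null) return node(key);
  if (key < root.key) return node(root.key, insert(root.left, key), root.right);
  if (key > root.key) return node(root.key, root.left, insert(root.right, key));
  return root;                                   // already present; reuse as-is
};

const contains = (root, key) => {
  if (root === null) return false;
  if (key === root.key) return true;
  return contains(key < root.key ? root.left : root.right, key);
};

// Every version remains usable; only the nodes on one root-to-leaf path
// are copied per insert (O(log n) of them, for a balanced tree).
const v1 = insert(insert(insert(null, 5), 3), 8);
const v2 = insert(v1, 4);
console.log(contains(v1, 4), contains(v2, 4));   // false true
```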
People have already answered your exact question, so I'll answer a slightly different question that people may actually be thinking of when coming here.

A lot of the "O(1) time" algorithms and data structures actually only take expected O(1) time, meaning that their average running time is O(1), possibly only under certain assumptions.

Common examples: hashtables, expansion of "array lists" (a.k.a. dynamically sized arrays/vectors).

In such scenarios, you might prefer to use a data structure or algorithm whose time is guaranteed to be absolutely bounded logarithmically, even though it may perform worse on average. An example might be a balanced binary search tree, whose running time is worse on average but better in the worst case.
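As a sketch of the "expected O(1)" caveat for array lists, here is a toy dynamic array in JavaScript whose push is amortized O(1) but occasionally pays an O(n) copy; that occasional spike is exactly what a guaranteed-logarithmic structure avoids:

```js
// Toy dynamic array: push is usually O(1), but O(n) whenever it must
// double its capacity - fine on average, bad for worst-case latency.
class DynArray {
  constructor() { this.buf = new Float64Array(1); this.len = 0; }
  push(x) {
    if (this.len === this.buf.length) {          // full: pay an O(n) copy now
      const bigger = new Float64Array(this.buf.length * 2);
      bigger.set(this.buf);
      this.buf = bigger;
    }
    this.buf[this.len++] = x;                    // the usual O(1) path
  }
}

const a = new DynArray();
for (let i = 0; i < 10; i++) a.push(i);          // copies happen at len 1, 2, 4, 8
console.log(a.len, a.buf.length);                // 10 16
```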
There are many reasons to choose an algorithm with higher big-O complexity over one with lower big-O complexity:

- most of the time, lower big-O complexity is harder to achieve and requires skilled implementation, a lot of knowledge and a lot of testing.
- big-O hides the details about a constant: an algorithm that performs in 10^5 is better from a big-O point of view than one that performs in 1/10^5 * log(n) (O(1) vs O(log(n)), but for most reasonable n the second one will perform better. For example, the best complexity for matrix multiplication is O(n^2.373) but the constant is so high that no (to my knowledge) computational libraries use it.
- big-O makes sense when you calculate over something big. If you need to sort an array of three numbers, it matters really little whether you use an O(n*log(n)) or an O(n^2) algorithm.
- sometimes the advantage of the lower time complexity can be really negligible. For example, there is a data structure called a tango tree which gives O(log log N) time complexity to find an item, but there is also a binary tree which finds the same in O(log n). Even for huge numbers of n = 10^20 the difference is negligible.
- time complexity is not everything. Imagine an algorithm that runs in O(n^2) and requires O(n^2) memory. It might be preferable over O(n^3) time and O(1) space when n is not really big. The problem is that you can wait for a long time, but I highly doubt you can find a RAM big enough to use it with your algorithm.
- parallelization is a good feature in our distributed world. There are algorithms that are easily parallelizable, and there are some that do not parallelize at all. Sometimes it makes sense to run an algorithm on 1000 commodity machines with a higher complexity than using one machine with a slightly better complexity.
- in some places (security) a complexity can be a requirement. No one wants to have a hash algorithm that can hash blazingly fast (because then other people can brute-force you way faster).
- although this is not related to a switch of complexity, some security functions should be written in a manner to prevent timing attacks. They mostly stay in the same complexity class, but are modified in a way that it always takes the worst case to do something. One example is comparing that strings are equal. In most applications it makes sense to break fast if the first bytes are different, but in security you will still wait for the very end to tell the bad news (see the sketch after this list).
- somebody patented the lower-complexity algorithm and it is more economical for a company to use a higher complexity than to pay money.
- some algorithms adapt well to particular situations. Insertion sort, for example, has an average time-complexity of O(n^2), worse than quicksort or mergesort, but as an online algorithm it can efficiently sort a list of values as they are received (as user input) where most other algorithms can only efficiently operate on a complete list of values.
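Here is the timing-attack point as a minimal JavaScript sketch (illustrative only; in Node you would reach for the built-in crypto.timingSafeEqual on Buffers instead):

```js
// Compare strings without bailing out early, so the run time doesn't
// leak the position of the first mismatching character.
const constantTimeEqual = (a, b) => {
  if (a.length !== b.length) return false;       // length is still leaked here
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i);   // accumulate, never break early
  }
  return diff === 0;
};

console.log(constantTimeEqual('secret', 'secret')); // true
console.log(constantTimeEqual('secret', 'sedret')); // false, in the same time
```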
My two cents:

Sometimes a worse complexity algorithm is selected in place of a better one, when the algorithm runs in a certain hardware environment. Suppose our O(1) algorithm non-sequentially accesses every element of a very big, fixed-size array to solve our problem. Then put that array on a mechanical hard drive, or a magnetic tape.

In that case, the O(log n) algorithm (suppose it accesses the disk sequentially) becomes more favourable.
When n is bounded and the constant multiplier of the O(1) algorithm is higher than the bound on log(n). For example, storing values in a hashset is O(1), but may require an expensive computation of a hash function. If the data items can be trivially compared (with respect to some order) and the bound on n is such that log n is significantly less than the hash computation on any one item, then storing in a balanced binary tree may be faster than storing in a hashset.
Simply put: because the coefficient (the costs associated with setup, storage, and the execution time of that step) can be much, much larger with a smaller big-O problem than with a larger one. Big-O is only a measure of the algorithms' scalability.
Consider the following example from the Hacker's Dictionary, proposing a sorting algorithm that relies on the Many Worlds Interpretation of quantum mechanics:

1. Permute the array randomly using a quantum process,
2. If the array is not sorted, destroy the universe.
3. All remaining universes are now sorted (including the one you are in).

(Source: http://catb.org/esr/jargon/html/B/bogo-sort.html)

Note that the big O of this algorithm is O(n), which beats any known sorting algorithm to date on generic items. The coefficient of the linear step is also very low (since it's only a comparison, not a swap, that is done linearly). A similar algorithm could, in fact, be used to solve any problem in both NP and co-NP in polynomial time, since each possible solution (or possible proof that there is no solution) can be generated using the quantum process, then verified in polynomial time.

However, in most cases, we probably don't want to take the risk that Many Worlds might not be correct, not to mention that the act of implementing step 2 is still "left as an exercise for the reader".
When the "1" work unit in O(1) is very high relative to the work unit in O(log n) and the expected set size is small-ish. For example, it's probably slower to compute Dictionary hash codes than to iterate an array of only two or three items (see the sketch below).

or

When the memory or other non-time resource requirements in the O(1) algorithm are exceptionally large relative to the O(log n) algorithm.
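A tiny sketch of the first point (the pair list and Map here are hypothetical stand-ins; measure on your own workload before deciding):

```js
// For a handful of entries, a linear scan over pairs can beat a Map,
// since hashing the key may cost more than two or three === comparisons.
const pairs = [['a', 1], ['b', 2], ['c', 3]];
const linearGet = key => {
  for (const [k, v] of pairs) if (k === key) return v;
  return undefined;
};

const map = new Map(pairs);
console.log(linearGet('b'), map.get('b')); // 2 2 - benchmark both at your n
```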
Adding to the already good answers: a practical example would be hash indexes vs B-tree indexes in a postgres database.

Hash indexes form a hash table index to access the data on the disk, while btree, as the name suggests, uses a Btree data structure.

In Big-O time these are O(1) vs O(logN).

Hash indexes are currently discouraged in postgres, since in a real-life situation, particularly in database systems, achieving hashing without collisions is very hard (it can lead to O(N) worst-case complexity), and because of this, it is even harder to make them crash-safe (called write-ahead logging - WAL in postgres).

This tradeoff is made in this situation since O(logN) is good enough for indexes, implementing O(1) is pretty hard, and the time difference would not really matter.
- When redesigning a program, a procedure is found that could be optimized to O(1) instead of O(lgN), but if it is not the bottleneck of this program and the O(1) algorithm is hard to understand, then you don't have to use the O(1) algorithm.
- When O(1) needs lots of memory that you cannot supply, while the time of O(lgN) can be accepted.
This is often the case for security applications, where we want to design problems whose algorithms are slow on purpose in order to stop someone from obtaining an answer to a problem too quickly.

Here are a couple of examples off the top of my head.
- Password hashing is sometimes made arbitrarily slow in order to make it harder to guess passwords by brute-force. This Information Security post has a bullet point about it (and much more). (See the sketch after this list.)
- Bitcoin uses a controllably slow problem for a network of computers to solve in order to "mine" coins. This allows the currency to be mined at a controlled rate by the collective system.
- Asymmetric ciphers (like RSA) are designed to make decryption without the keys intentionally slow in order to prevent someone without the private key from cracking the encryption. The algorithms are designed to be cracked in hopefully O(2^n) time where n is the bit-length of the key (this is brute force).
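As a sketch of the password-hashing bullet, Node's built-in scrypt exposes a cost parameter precisely so that each hash takes tunable, deliberate work (the password, salt size, and cost factor below are illustrative choices, not recommendations):

```js
// Node-only sketch: a deliberately slow password hash via built-in scrypt.
const { scryptSync, randomBytes } = require('crypto');

const salt = randomBytes(16);
// N is the CPU/memory cost: raising it makes every brute-force guess costlier.
const hash = scryptSync('correct horse battery staple', salt, 32, { N: 16384 });
console.log(hash.toString('hex'));
```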
Elsewhere in CS, quicksort is O(n²) in the worst case but in the general case is O(n*log(n)). For this reason, "Big O" analysis sometimes isn't the only thing you care about when analyzing algorithm efficiency.
There are lots of good answers, some of which mention the constant factor, input size, and memory constraints, among the many other reasons complexity is only a theoretical guideline rather than the end-all determination of real-world fitness for a given purpose or speed.

Here's a simple, concrete example to illustrate these ideas. Let's say we want to figure out whether an array has a duplicate element. The naive quadratic approach is to write a nested loop:

```js
const hasDuplicate = arr => {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) {
        return true;
      }
    }
  }
  return false;
};
console.log(hasDuplicate([1, 2, 3, 4])); // false
console.log(hasDuplicate([1, 2, 4, 4])); // true
```
But this can be done in linear time by creating a Set data structure (i.e. one that removes duplicates), then comparing its size to the length of the array:
```js
const hasDuplicate = arr => new Set(arr).size !== arr.length;
console.log(hasDuplicate([1, 2, 3, 4])); // false
console.log(hasDuplicate([1, 2, 4, 4])); // true
```
Big O tells us that the new Set approach will scale better from a time complexity standpoint.

However, it turns out the "naive" quadratic approach has a lot going for it that Big O can't account for:
- No additional memory usage
- No heap memory allocation (no `new`)
- No garbage collection for the temporary Set
- Early bailout; in cases where the duplicate is known to be likely at the front of the array, there's no need to check more than a few elements.
If our use case is on bounded small arrays, we have a resource-constrained environment, and/or we have other known common-case properties that allow us to establish through benchmarks that the nested loop is faster on our particular workload, it might be a good idea.

On the other hand, maybe the set can be created once up-front and reused, amortizing its overhead cost across all of the lookups.

This inevitably leads to maintainability/readability/elegance and other "soft" costs. In this case, the new Set() approach is probably more readable, but it's just as often (if not more so) that achieving the better complexity comes at great engineering cost.

Creating and maintaining a persistent, stateful Set structure is likely to introduce bugs, memory/cache pressure, code complexity, and all other manner of design tradeoffs. Negotiating these tradeoffs optimally is a big part of software engineering, and time complexity is just one factor to help guide that process.
A few other examples I haven't seen mentioned yet:
- In real-time environments, for example resource-constrained embedded systems, sometimes complexity sacrifices are made (typically related to caches and memory or scheduling) to avoid incurring occasional worst-case penalties that can't be tolerated because they might cause jitter.
- Also in embedded programming, the size of the code itself can cause cache pressure, impacting memory performance. If an algorithm has worse complexity but will result in massive code size savings, that might be a reason to choose it over an algorithm that's theoretically better.
- In most implementations of recursive linearithmic algorithms like quicksort, when the array is small enough, a quadratic sorting algorithm like insertion sort is often called, because the overhead of recursive function calls on increasingly tiny arrays tends to outweigh the cost of nested loops. Insertion sort is also fast on mostly-sorted arrays, as the inner loop won't run much. This answer discusses this in an older version of Chrome's V8 engine, before they moved to Timsort. (A sketch of such a hybrid appears below.)
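As a hedged sketch of that last point, here is a toy hybrid quicksort in JavaScript with an insertion-sort cutoff; the cutoff value is an assumption to be tuned by benchmarking, not a universal constant:

```js
// Toy hybrid sort: quicksort that falls back to insertion sort below a
// small cutoff, where insertion sort's tight loop beats further recursion.
const CUTOFF = 16;                       // illustrative threshold; tune by measuring

const insertionSort = (a, lo, hi) => {   // sorts a[lo..hi] inclusive
  for (let i = lo + 1; i <= hi; i++) {
    const x = a[i];
    let j = i - 1;
    while (j >= lo && a[j] > x) { a[j + 1] = a[j]; j--; }
    a[j + 1] = x;
  }
};

const quicksort = (a, lo = 0, hi = a.length - 1) => {
  if (hi - lo < CUTOFF) return insertionSort(a, lo, hi);
  const pivot = a[lo + ((hi - lo) >> 1)];
  let i = lo, j = hi;
  while (i <= j) {                       // Hoare-style partition
    while (a[i] < pivot) i++;
    while (a[j] > pivot) j--;
    if (i <= j) { [a[i], a[j]] = [a[j], a[i]]; i++; j--; }
  }
  if (lo < j) quicksort(a, lo, j);
  if (i < hi) quicksort(a, i, hi);
};

const arr = Array.from({ length: 1000 }, () => Math.random());
quicksort(arr);
console.log(arr.every((v, i) => i === 0 || arr[i - 1] <= v)); // true
```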