Is there ever a situation where you would prefer an O(log n) algorithm to an O(1) algorithm? Or O(n) to O(log n)?
Can you give an example?
Current answer
Two cases come to mind. When redesigning a program, you may find a procedure that could be optimized from O(lg N) to O(1), but if that procedure is not the program's bottleneck and the O(1) algorithm is hard to understand, you don't have to use it. And when the O(1) algorithm needs a lot of memory that you cannot supply, while the running time of the O(lg N) algorithm is acceptable.
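A minimal sketch of that memory-for-time trade-off, assuming a simple membership-test task (the function names and sizes here are illustrative, not from the answer above):

```python
import bisect

# O(1) membership: precompute an answer for every possible key.
# Fast, but memory grows with the key universe, not with the data.
def build_table(keys, universe_size):
    present = [False] * universe_size   # may not fit in RAM for a big universe
    for k in keys:
        present[k] = True
    return present                      # lookup is table[k], O(1)

# O(log n) membership: binary-search a sorted list.
# Slower per query, but memory is just the n keys themselves.
def contains(sorted_keys, k):
    i = bisect.bisect_left(sorted_keys, k)
    return i < len(sorted_keys) and sorted_keys[i] == k

keys = sorted([3, 14, 159, 2653])
table = build_table(keys, universe_size=10_000)
print(table[159], contains(keys, 159))   # True True
print(table[42], contains(keys, 42))     # False False
```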
Other answers
There are many reasons why you might choose an algorithm with higher big-O complexity over one with lower big-O complexity:
- Most of the time, lower big-O complexity is harder to achieve and requires skilled implementation, a lot of knowledge, and a lot of testing.
- Big-O hides the details about a constant: an algorithm that runs in 10^5 is better from a big-O point of view than one that runs in 1/10^5 * log(n) (O(1) vs O(log n)), but for any reasonable n the O(log n) one will perform better. For example, the best known complexity for matrix multiplication is O(n^2.373), but the constant is so high that (to my knowledge) no computational libraries use it.
- Big-O makes sense when you calculate over something big. If you need to sort an array of three numbers, it matters very little whether you use an O(n log n) or an O(n^2) algorithm.
- Sometimes the advantage of the lower time complexity is really negligible. For example, the tango tree data structure gives O(log log n) time complexity to find an item, but there is also a binary tree which finds the same in O(log n). Even for huge n = 10^20 the difference is negligible.
- Time complexity is not everything. Imagine an algorithm that runs in O(n^2) time and requires O(n^2) memory. It might be preferable to O(n^3) time and O(1) space when n is not really big: you can wait a long time, but I highly doubt you can find RAM big enough to run the algorithm at all.
- Parallelization is a good feature in our distributed world. There are algorithms that are easily parallelizable, and there are some that do not parallelize at all. Sometimes it makes sense to run an algorithm with higher complexity on 1000 commodity machines rather than use one machine with a slightly better complexity.
- In some places (security) a complexity can be a requirement. No one wants a hash algorithm that hashes blazingly fast, because then other people can brute-force you way faster.
- Although this is not related to a switch of complexity, some security functions should be written in a manner that prevents timing attacks. They mostly stay in the same complexity class but are modified so that the worst case is always taken. One example is comparing strings for equality. In most applications it makes sense to break fast as soon as the first bytes differ, but in security you will still wait until the very end to tell the bad news.
- Somebody patented the lower-complexity algorithm, and it is more economical for a company to use the higher complexity than to pay.
- Some algorithms adapt well to particular situations. Insertion sort, for example, has an average time complexity of O(n^2), worse than quicksort or mergesort, but as an online algorithm it can efficiently sort a list of values as they are received (as user input), where most other algorithms can only operate efficiently on a complete list of values (see the sketch after this list).
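A small sketch of that last point, showing insertion sort used as an online algorithm (the arriving values are invented for illustration):

```python
def online_insert(sorted_so_far, value):
    """One step of online insertion sort: O(n) worst case per item,
    but the list is sorted between every pair of arrivals."""
    i = len(sorted_so_far)
    sorted_so_far.append(value)
    while i > 0 and sorted_so_far[i - 1] > value:
        sorted_so_far[i] = sorted_so_far[i - 1]   # shift larger items right
        i -= 1
    sorted_so_far[i] = value

seen = []
for x in [5, 2, 9, 1]:        # values arriving one at a time, as user input
    online_insert(seen, x)
    print(seen)               # sorted after every arrival
```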
There is always the hidden constant, which can be lower for the O(log n) algorithm, so it can be faster in practice on real-life data.
There are also space concerns (e.g., when running on a toaster).
And there is developer-time concern: the O(log n) algorithm may be 1000x easier to implement and verify.
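A back-of-the-envelope sketch of that hidden constant; the cost models below are made up purely for illustration:

```python
import math

# Hypothetical cost models: an O(1) algorithm with a huge constant
# versus an O(log n) algorithm with a tiny one.
def o1_cost(n):
    return 1e5

def ologn_cost(n):
    return 1e-5 * math.log2(n)

for n in (10**3, 10**6, 10**9, 10**20):
    print(f"n={n:.0e}: O(1) -> {o1_cost(n):.0f}, O(log n) -> {ologn_cost(n):.6f}")
# The O(log n) algorithm wins at every practically reachable n.
```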
This is often the case for security applications, where we want to design problems whose algorithms are slow on purpose, in order to stop someone from obtaining an answer too quickly.
Here are a couple of examples off the top of my head.
- Password hashing is sometimes made arbitrarily slow in order to make it harder to guess passwords by brute force. This Information Security post has a bullet point about it (and much more). A sketch follows this list.
- Bitcoin uses a controllably slow problem for a network of computers to solve in order to "mine" coins. This allows the currency to be mined at a controlled rate by the collective system.
- Asymmetric ciphers (like RSA) are designed to make decryption without the keys intentionally slow, in order to prevent someone without the private key from cracking the encryption. The algorithms are designed to be cracked in hopefully O(2^n) time, where n is the bit length of the key (this is brute force).
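As a sketch of the first point, Python's standard library exposes a deliberately slow password hash (PBKDF2); the iteration count below is illustrative, not a security recommendation:

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# PBKDF2 applies the underlying hash 600,000 times: the iteration count
# is a knob that makes each brute-force guess deliberately expensive.
digest = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
print(digest.hex())
```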
Elsewhere in CS, quicksort is O(n^2) in the worst case but O(n log n) in the average case. For this reason, big-O analysis is sometimes not the only thing you care about when analyzing algorithmic efficiency.
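A quick sketch of that average/worst-case gap, using a naive first-element-pivot quicksort and counting partition work (the test sizes are arbitrary):

```python
import random

def quicksort(a, counter):
    """First-element pivot: degrades to O(n^2) on already-sorted input."""
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    counter[0] += len(rest)                      # partition work for this call
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

for data in (random.sample(range(300), 300), list(range(300))):
    counter = [0]
    quicksort(data, counter)
    print(counter[0])   # ~n log n on random input, ~n^2/2 on sorted input
```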
Consider a red-black tree. It has access, search, insert, and delete in O(log n). Compare that to an array, which has O(1) access while the rest of the operations are O(n).
So given an application where we insert, delete, or search more often than we access, and a choice between only these two structures, we would prefer the red-black tree. In this case, you might say we prefer the red-black tree's more cumbersome O(log n) access time.
Why? Because access is not our overriding concern. We are making a trade-off: the performance of our application is more heavily influenced by factors other than this one. We allow this particular algorithm to suffer performance because we make large gains by optimizing other algorithms.
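A rough cost model for that trade-off; the structure size and workload mix below are invented for illustration:

```python
import math

n = 10**6                                  # items held in the structure
accesses, inserts = 1_000, 100_000         # hypothetical workload mix

array_cost = accesses * 1 + inserts * n            # O(1) access, O(n) insert
tree_cost = (accesses + inserts) * math.log2(n)    # O(log n) for everything

print(f"array: {array_cost:,.0f} ops, red-black tree: {tree_cost:,.0f} ops")
# With inserts dominating, the tree wins despite its slower access.
```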
So the answer to your question is simply this: when the algorithm's growth rate isn't what we want to optimize, when we want to optimize something else. All of the other answers are special cases of this.

Sometimes we optimize the run time of other operations. Sometimes we optimize for memory. Sometimes we optimize for security. Sometimes we optimize maintainability. Sometimes we optimize for development time. Even an overriding constant being low enough to matter is optimizing run time, when you know the growth rate of the algorithm isn't the greatest impact on run time. (If your data set were outside this range, you would optimize for the growth rate of the algorithm, because it would eventually dominate the constant.) Everything has a cost, and in many cases, we trade the cost of a higher growth rate in order to optimize something else.
When n is small, O(1) can still be constantly slow.