Is there ever a situation where you would prefer O(log n) time complexity over O(1) time complexity? Or O(n) over O(log n)?
Can you give an example?
Current answer
In contexts where data security is a concern, a more complex algorithm may be preferable to a less complex one if the more complex algorithm has better resistance to timing attacks.
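A minimal Python sketch of the idea (the function names here are illustrative, not from any library): the naive comparison returns as soon as it finds a mismatch, so its running time leaks information, while the standard library's `hmac.compare_digest` takes the same time regardless of where the mismatch occurs.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Returns as soon as a byte differs, so the running time leaks
    # how long the matching prefix is -- a timing side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte no matter where the
    # first mismatch occurs: slower on average, but it does not leak
    # the position of the mismatch through timing.
    return hmac.compare_digest(a, b)
```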
Other answers
There is always the hidden constant, and it can be lower for the O(log n) algorithm, so on real-life data it can be faster in practice (see the micro-benchmark sketched after this answer).

There is also the question of space (e.g., when running on a toaster).

There is also developer time: an O(log n) solution may be 1000x easier to implement and verify.
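A hypothetical micro-benchmark illustrating the hidden-constant point (the list size is made up, and the exact numbers will vary by machine and Python version): on a very small sorted list, the O(n) scan's tight loop can beat O(log n) binary search.

```python
import bisect
import timeit

# Illustrative only: for a very small sorted list, the O(n) scan can
# beat O(log n) binary search because its constant factors are lower.
small = list(range(16))

def linear_contains(xs, x):
    for v in xs:            # O(n) worst case, but no per-step call overhead
        if v == x:
            return True
        if v > x:           # xs is sorted, so we can stop early
            return False
    return False

def binary_contains(xs, x):
    i = bisect.bisect_left(xs, x)   # O(log n), but more work per step
    return i < len(xs) and xs[i] == x

print(timeit.timeit(lambda: linear_contains(small, 11), number=100_000))
print(timeit.timeit(lambda: binary_contains(small, 11), number=100_000))
```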
My answer here: fast random weighted selection over all rows of a stochastic matrix is an example where an algorithm with complexity O(m) can be faster than one with complexity O(log(m)) when m is not too large.
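A rough sketch of the two approaches (function names are mine, for illustration): a linear cumulative scan versus a binary search over precomputed prefix sums. For small m, the per-call overhead of the binary search can outweigh its asymptotic advantage.

```python
import bisect
import random
from itertools import accumulate

def pick_linear(weights):
    # O(m) per draw: walk the weights until the running total passes r.
    r = random.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1  # guard against floating-point round-off

def pick_bisect(prefix_sums):
    # O(log m) per draw over precomputed prefix sums; asymptotically
    # better, but for small m the simple scan above often wins.
    r = random.random() * prefix_sums[-1]
    return bisect.bisect_right(prefix_sums, r)

weights = [0.5, 0.3, 0.2]
prefix_sums = list(accumulate(weights))
print(pick_linear(weights), pick_bisect(prefix_sums))
```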
This is often the case for security applications, where we want to design problems whose algorithms are deliberately slow, in order to stop someone from obtaining the answer to a problem too quickly.

Here are a couple of examples off the top of my head.
- Password hashing is sometimes made arbitrarily slow in order to make it harder to guess passwords by brute force. This Information Security post has a bullet point about it (and much more).
- Bitcoin uses a controllably slow problem for a network of computers to solve in order to "mine" coins. This allows the currency to be mined at a controlled rate by the collective system.
- Asymmetric ciphers (like RSA) are designed to make decryption without the key intentionally slow, to prevent someone without the private key from cracking the encryption. The algorithms are designed to be cracked in hopefully O(2^n) time, where n is the bit length of the key (that is, by brute force).
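As a concrete illustration of the deliberately slow password hashing in the first bullet, Python's standard `hashlib` exposes PBKDF2; the password and iteration count below are illustrative and should be tuned for your hardware.

```python
import hashlib
import os

# PBKDF2 is deliberately slow via an iteration count: every brute-force
# guess must pay the same cost as one legitimate login.
salt = os.urandom(16)
digest = hashlib.pbkdf2_hmac(
    "sha256",                  # underlying hash function
    b"correct horse battery",  # the password (illustrative)
    salt,
    600_000,                   # iterations; tune until one hash takes ~0.1 s
)
print(digest.hex())
```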
Elsewhere in CS, quicksort is O(n²) in the worst case but O(n log n) in the average case. For this reason, big-O analysis is sometimes not the only thing you care about when analyzing an algorithm's efficiency.
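A toy illustration (not a production sort): with a first-element pivot, already-sorted input produces maximally unbalanced partitions, which is what drives the quadratic worst case.

```python
def quicksort(xs):
    # First-element pivot, for illustration only. On random input the
    # expected cost is O(n log n); on already-sorted input every
    # partition puts all elements on one side, giving O(n^2) comparisons
    # (and O(n) recursion depth in this naive recursive form).
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))
```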
There is a good use case for using an O(log(n)) algorithm instead of an O(1) algorithm that the numerous other answers have ignored: immutability. Hash maps have O(1) puts and gets, assuming good distribution of hash values, but they require mutable state. Immutable tree maps have O(log(n)) puts and gets, which is asymptotically slower. However, immutability can be valuable enough to make up for the worse performance, and in cases where multiple versions of the map need to be retained, immutability lets you avoid copying the map, which is O(n), and can therefore improve performance.
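A minimal sketch of the idea, using a plain unbalanced persistent BST for brevity (real immutable maps keep the tree balanced, e.g. with red-black rotations, to guarantee the O(log n) bound):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Node:
    key: int
    value: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def insert(node: Optional[Node], key: int, value: str) -> Node:
    # Persistent insert: returns a NEW root, copying only the nodes on
    # the search path and sharing every other subtree with the old
    # version. Old roots stay valid -- no O(n) copy of the whole map.
    if node is None:
        return Node(key, value)
    if key < node.key:
        return Node(node.key, node.value, insert(node.left, key, value), node.right)
    if key > node.key:
        return Node(node.key, node.value, node.left, insert(node.right, key, value))
    return Node(key, value, node.left, node.right)  # replace existing key

v1 = insert(None, 2, "b")
v2 = insert(v1, 1, "a")   # v1 is unchanged and still usable
```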
Consider a red-black tree. It has O(log n) access, search, insertion, and deletion. Compare that to an array, which has O(1) access while the other operations are O(n).

So for an application where we insert, delete, or search more often than we access, and where the choice is between only these two structures, we would prefer the red-black tree. In that case, you might say we prefer the red-black tree's more cumbersome O(log n) access time.

Why? Because access is not our overriding concern. We are making a trade-off: our application's performance is influenced more heavily by factors other than this one. We allow this particular algorithm to suffer a performance hit because we gain a lot by optimizing other operations (a back-of-the-envelope cost model is sketched below).
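The cost model below is purely illustrative: the costs are abstract "steps", not real timings, and the workload numbers are made up. It only shows how a write-heavy workload can favor the tree despite its slower access.

```python
import math

def total_cost(n, accesses, writes, structure):
    # Abstract "steps", not real timings:
    #   array:          access O(1); insert/delete/search O(n)
    #   red-black tree: every operation O(log n)
    if structure == "array":
        return accesses * 1 + writes * n
    return (accesses + writes) * math.log2(n)

n = 1_000_000
print(total_cost(n, accesses=1_000, writes=10_000, structure="array"))  # ~1e10
print(total_cost(n, accesses=1_000, writes=10_000, structure="tree"))   # ~2.2e5
```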
So the answer to your question is simply this: when the algorithm's growth rate isn't what we want to optimize, because we want to optimize something else. All of the other answers are special cases of this. Sometimes we optimize the run time of other operations. Sometimes we optimize for memory. Sometimes we optimize for security. Sometimes we optimize for maintainability. Sometimes we optimize for development time.

Even a hidden constant low enough to matter is a case of optimizing for run time when you know the algorithm's growth rate isn't the greatest influence on run time. (If your data set were outside that range, you would optimize for the algorithm's growth rate, because it would eventually dominate the constant.) Everything has a cost, and in many cases we accept the cost of a higher growth rate in one algorithm in order to optimize something else.