Is there any case where you would prefer O(log n) time complexity over O(1) time complexity? Or O(n) over O(log n)?

Can you give an example?


Current answer

When the unit of work behind the "1" in O(1) is very expensive relative to the unit of work behind O(log n), and the expected set size is small. For example, if there are only two or three items in an array, computing a Dictionary hash code is probably slower than just iterating over the array.
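A minimal sketch of that point (in Python rather than C#, with made-up item names; which call wins will vary by runtime and key type, since the only claim is that constant factors dominate when n is two or three):

```python
import timeit

# Hypothetical tiny lookup table: three entries.
items = [("apple", 150), ("bread", 225), ("milk", 310)]
table = dict(items)

def scan(key):
    # O(n) linear scan, but n is 3 and each comparison is cheap.
    for k, price in items:
        if k == key:
            return price
    return None

def hashed(key):
    # O(1) on paper, but every call still pays the cost of hashing the key.
    return table.get(key)

print(timeit.timeit(lambda: scan("milk"), number=1_000_000))
print(timeit.timeit(lambda: hashed("milk"), number=1_000_000))
```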

or

When the memory or other non-time resource requirements of the O(1) algorithm are exceptionally large relative to those of the O(log n) algorithm.

Other answers

In a real-time setting, when you need a firm upper bound, you would pick heapsort over quicksort, because heapsort's average-case behavior is also its worst-case behavior.
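A small Python sketch of that contrast (heapq is the standard-library heap; the quicksort here is a deliberately naive textbook version, not any particular library's implementation):

```python
import heapq
import random

def heapsort(values):
    # O(n log n) in the worst case as well as on average: a firm upper bound.
    heap = list(values)
    n = len(heap)
    heapq.heapify(heap)                                # O(n)
    return [heapq.heappop(heap) for _ in range(n)]     # n pops, O(log n) each

def naive_quicksort(values):
    # O(n log n) on average, but a fixed first-element pivot degrades to
    # O(n^2) on already-sorted input, which is unacceptable in real time.
    if len(values) <= 1:
        return list(values)
    pivot = values[0]
    smaller = [v for v in values[1:] if v < pivot]
    larger = [v for v in values[1:] if v >= pivot]
    return naive_quicksort(smaller) + [pivot] + naive_quicksort(larger)

data = [random.random() for _ in range(10_000)]
assert heapsort(data) == sorted(data)
# naive_quicksort(sorted(data)) would recurse n levels deep and take quadratic time.
```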

My answer here, on fast random weighted selection across all rows of a stochastic matrix, is an example where an algorithm with complexity O(m) is faster than one with complexity O(log(m)) when m is not too large.
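A sketch of the two variants in Python (the weights and function names are made up for illustration): a per-draw linear scan is O(m), while precomputing cumulative weights allows an O(log m) binary search per draw, yet for small m the linear scan often wins in practice.

```python
import bisect
import random
from itertools import accumulate

weights = [0.1, 0.3, 0.2, 0.4]   # one row of a stochastic matrix (illustrative values)

def pick_linear(weights):
    # O(m): walk the row once, subtracting weights until the draw is covered.
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r < 0:
            return i
    return len(weights) - 1

def pick_bisect(cumulative):
    # O(log m) per draw, after an O(m) prefix-sum precomputation.
    r = random.random() * cumulative[-1]
    return bisect.bisect_right(cumulative, r)

cumulative = list(accumulate(weights))
print(pick_linear(weights), pick_bisect(cumulative))
```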

To add to the already good answers: a practical example is hash indexes vs. B-tree indexes in a postgres database.

A hash index forms a hash-table index to access the data on disk, while a btree, as the name suggests, uses a btree data structure.

In Big-O time that is O(1) vs O(logN).

Hash indexes are currently discouraged in postgres because, in real life and particularly in database systems, achieving collision-free hashing is very hard (it can lead to O(N) worst-case complexity), and because of that it is even harder to make them crash-safe (via write-ahead logging, called WAL in postgres).

The trade-off is made in this situation because O(logN) is good enough for an index, implementing O(1) is quite hard, and the time difference doesn't really matter.

A more general question is if there are situations where one would prefer an O(f(n)) algorithm to an O(g(n)) algorithm even though g(n) << f(n) as n tends to infinity. As others have already mentioned, the answer is clearly "yes" in the case where f(n) = log(n) and g(n) = 1. It is sometimes yes even in the case that f(n) is exponential but g(n) is polynomial. A famous and important example is that of the Simplex Algorithm for solving linear programming problems. In the 1970s it was shown to be O(2^n). Thus, its worst-case behavior is infeasible. But its average-case behavior is extremely good, even for practical problems with tens of thousands of variables and constraints. In the 1980s, polynomial-time algorithms (such as Karmarkar's interior-point algorithm) for linear programming were discovered, but 30 years later the simplex algorithm still seems to be the algorithm of choice (except for certain very large problems). This is for the obvious reason that average-case behavior is often more important than worst-case behavior, but also for a more subtle reason that the simplex algorithm is in some sense more informative (e.g. sensitivity information is easier to extract).
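As a hedged illustration of simply switching between the two families of methods (this assumes SciPy's linprog front end to the HiGHS solvers; the problem data is random and purely illustrative):

```python
import numpy as np
from scipy.optimize import linprog

# A tiny made-up LP: minimize c @ x subject to A_ub @ x <= b_ub, x >= 0.
rng = np.random.default_rng(0)
c = -rng.random(50)              # negative costs so the optimum is not trivially x = 0
A_ub = rng.random((80, 50))
b_ub = rng.random(80) * 50

# Same problem, solved with a simplex-type method and an interior-point method.
simplex_res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ds")   # dual simplex
ipm_res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ipm")      # interior point

print(simplex_res.fun, ipm_res.fun)
```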

Alistra addressed this but failed to provide any examples, so I will.

You have a list of 10,000 UPC codes for what your store sells: a 10-digit UPC, an integer price (in pennies), and 30 characters of description for the receipt.

O(log N) approach: you keep a sorted list. That's 44 bytes per entry if ASCII, 84 if Unicode. Alternatively, treating the UPC as an int64 gets you 42 and 72 bytes. With 10,000 records, even in the highest case you're looking at a bit under 1 MB of storage.

O(1) approach: don't store the UPC; instead use it as an index into an array. Even in the lowest case you're looking at almost a third of a terabyte of storage.
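A quick back-of-the-envelope check of those figures, using the field sizes assumed in the answer:

```python
RECORDS = 10_000

# Sorted-list entry sizes from the answer:
#   text UPC + int price + description: 10 + 4 + 30 = 44 bytes (ASCII), 20 + 4 + 60 = 84 (Unicode)
#   int64 UPC instead of text:           8 + 4 + 30 = 42 bytes,          8 + 4 + 60 = 72 bytes
sorted_list_worst = RECORDS * 84
print(sorted_list_worst)               # 840,000 bytes -- a bit under 1 MB

# Direct-array approach: one slot for every possible 10-digit UPC,
# each holding just the price and the 30-character (ASCII) description.
direct_array = 10**10 * (4 + 30)
print(direct_array / 1e12, "TB")       # ~0.34 TB -- almost a third of a terabyte
```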

Which approach you use depends on your hardware. On most any reasonable modern configuration you're going to use the log N approach. I can picture the second approach being the right answer if for some reason you're running in an environment where RAM is critically short but you have plenty of mass storage. A third of a terabyte on a disk is no big deal, and getting your data in one probe of the disk is worth something. The simple binary approach takes 13 probes on average. (Note, however, that by clustering your keys you can get this down to a guaranteed 3 reads, and in practice you would cache the first one.)