Are there any algorithms that are O(1/n)?

Or anything else that is smaller than O(1)?


Current answer

inline void O0Algorithm() {}

Other answers

As has already been pointed out, apart from the possible exception of the null function, there can be no O(1/n) functions, as the time taken would have to approach zero.

Of course, there are some algorithms, like the one Konrad defined, which should be less than O(1) in at least some sense:

from time import sleep

def get_faster(list):
    # Sleeps for less time the longer the input list is.
    how_long = 1 / len(list)
    sleep(how_long)

If you want to investigate these algorithms, you should either define your own asymptotic measurement or your own notion of time. For example, in the above algorithm, I could allow the use of a number of "free" operations a set number of times. In the above algorithm, if I define t' by excluding the time for everything but the sleep, then t' = 1/n, which is O(1/n). There are probably better examples, as the asymptotic behavior here is trivial. In fact, I am sure that someone out there can come up with senses that give non-trivial results.

You can't go below O(1), but O(k) where k is less than n is possible. These are called sublinear time algorithms. In some problems, a sublinear time algorithm can only give an approximate solution to the particular problem. Sometimes, however, an approximate solution is just fine, perhaps because the dataset is too large, or because computing over all the data would be too computationally expensive.
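As a concrete illustration (my own example, not from the answer above): binary search over sorted data is a classic sublinear algorithm, running in O(log n) because every comparison halves the remaining range.

```python
# Binary search: a classic sublinear, O(log n) algorithm.
# Each comparison halves the remaining search range.
def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    if lo < len(sorted_items) and sorted_items[lo] == target:
        return lo   # index of the target
    return -1       # not found
```

Note that binary search gives an exact answer; the approximation trade-off mentioned above applies to other sublinear algorithms, such as sampling-based ones.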

This question is not as silly as it might seem to some. At least in theory, something like O(1/n) is completely sensible when we take the mathematical definition of big-O notation: f is in O(g) if and only if there exist constants c > 0 and x0 such that |f(x)| <= c * |g(x)| for all x > x0.

Now you can easily substitute g(x) = 1/x ... it is obvious that the above definition still holds for some f.

For the purpose of estimating asymptotic runtime growth, this is less viable ... a meaningful algorithm cannot get faster as its input grows. Sure, you can construct an arbitrary algorithm to fulfill this, e.g. the following one:

from time import sleep

def get_faster(list):
    # Sleeps for less time the longer the input list is.
    how_long = (1 / len(list)) * 100000
    sleep(how_long)

Clearly, this function takes less time as the input size grows, at least until some limit enforced by the hardware (precision of the numbers, the minimum time that sleep can wait, the time to process the arguments, etc.): this limit would then be a constant lower bound, so in fact the above function still has runtime O(1).

But there are in fact real-world algorithms whose runtime can decrease (at least partially) when the input size increases. Note, though, that these algorithms will not exhibit runtime behavior below O(1). Still, they are interesting. Take, for example, Horspool's very simple text search algorithm. Here, the expected runtime decreases as the length of the search pattern increases (although increasing the length of the haystack will once again increase the runtime).
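A minimal sketch of Horspool's algorithm (my own illustration, not part of the original answer) shows why: characters that don't occur in the pattern allow the search to skip ahead by the full pattern length, so a longer pattern yields larger average skips.

```python
# Horspool's text search: precompute a skip table from the pattern,
# then compare the pattern against the text right-to-left at each alignment.
def horspool(text, pattern):
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return 0 if m == 0 else -1
    # Skip distance for each pattern character (rightmost occurrence wins);
    # any character not in the table allows a full skip of m positions.
    skip = {pattern[i]: m - 1 - i for i in range(m - 1)}
    i = m - 1
    while i < n:
        k = 0
        while k < m and text[i - k] == pattern[m - 1 - k]:
            k += 1
        if k == m:
            return i - m + 1   # index where the match starts
        i += skip.get(text[i], m)
    return -1
```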

I believe quantum algorithms can do multiple computations "at once" via superposition ...

I doubt that's a useful answer, though.

Most of the other answers interpret big-O as being exclusively about the running time of an algorithm. But since the question didn't mention it, I thought it worth mentioning the other application of big-O in numerical analysis, which is about error.

Many algorithms can be O(h^p) or O(n^{-p}) depending on whether you're talking about step size (h) or number of divisions (n). For example, in Euler's method, you look for an estimate of y(h) given that you know y(0) and dy/dx (the derivative of y). Your estimate of y(h) is more accurate the closer h is to 0. So in order to find y(x) for some arbitrary x, one takes the interval from 0 to x, splits it into n pieces, and runs Euler's method at each point, to get from y(0) to y(x/n) to y(2x/n), and so on.

So Euler's method is then an O(h) or O(1/n) algorithm, where h is typically interpreted as the step size and n is interpreted as the number of times you divide the interval.
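A quick numerical check of that O(1/n) error behavior (my own sketch, not from the original answer), using Euler's method on dy/dx = y with y(0) = 1, whose exact solution is e^x: doubling n roughly halves the error.

```python
import math

# Euler's method for dy/dx = y, y(0) = 1; the exact solution is y(x) = e^x.
# The global error at x shrinks like O(h) = O(1/n).
def euler_exp(x, n):
    h = x / n
    y = 1.0
    for _ in range(n):
        y += h * y   # y' = y, so each step is y + h*y
    return y

err_100 = abs(math.e - euler_exp(1.0, 100))
err_200 = abs(math.e - euler_exp(1.0, 200))
# err_100 / err_200 should be close to 2: halving h halves the error.
```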

You can also have O(1/h) in real numerical analysis applications, because of floating point rounding error. The smaller your interval, the more cancellation occurs in the implementations of certain algorithms, the more significant digits are lost, and therefore the more error is propagated through the algorithm.

For Euler's method, if you are using floating point, use a small enough step and cancellation occurs: you're adding a small number to a big number, leaving the big number unchanged. For algorithms that calculate the derivative by subtracting two values of a function evaluated at two very close positions, approximating y'(x) with (y(x + h) - y(x)) / h, in smooth functions y(x + h) gets close to y(x), resulting in large cancellation and an estimate for the derivative with fewer significant figures. This will in turn propagate to whatever algorithm you require the derivative for (e.g., a boundary value problem).
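A small demonstration of that cancellation effect (my own sketch), using a forward difference to approximate sin'(1) = cos(1): shrinking h below the floating-point sweet spot makes the error grow again, roughly like O(1/h).

```python
import math

# Forward-difference derivative estimate: (f(x+h) - f(x)) / h.
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

true_value = math.cos(1.0)   # exact derivative of sin at x = 1
err_moderate = abs(forward_diff(math.sin, 1.0, 1e-6) - true_value)
err_tiny = abs(forward_diff(math.sin, 1.0, 1e-13) - true_value)
# With h = 1e-13, sin(1 + h) and sin(1) agree in almost every digit,
# so the subtraction cancels and rounding error dominates the estimate.
```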