Are there any O(1/n) algorithms?

Or anything else that is smaller than O(1)?


Current answer

As has already been pointed out, apart from the possible exception of the null function, there can be no O(1/n) functions, as the time taken will have to approach 0.

Of course, there are some algorithms, like the one defined by Konrad, which seem as though they should be less than O(1) in at least some sense:

from time import sleep   # needed for sleep() below

def get_faster(list):
    how_long = 1/len(list)   # the longer the input, the shorter the wait
    sleep(how_long)

If you want to investigate these algorithms, you should either define your own asymptotic measurement or your own notion of time. For example, in the above algorithm I could allow a set number of "free" operations. If I then define t' by excluding the time for everything but the sleep, then t' = 1/n, which is O(1/n). There are probably better examples, as the asymptotic behavior here is trivial. In fact, I am sure that someone out there can come up with notions of time that give non-trivial results.
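As a minimal sketch of such a cost model (the function name and the bookkeeping are my own illustration, not the answerer's): everything except the sleep is treated as a free operation, and only the slept time t' is charged.

from time import sleep, perf_counter

def get_faster_charged(items):
    # Everything except the sleep counts as a "free" operation in this
    # made-up cost model; only the slept time t' is charged.
    how_long = 1 / len(items)
    start = perf_counter()
    sleep(how_long)
    t_prime = perf_counter() - start
    return t_prime           # t' is roughly 1/len(items), i.e. O(1/n)

print(get_faster_charged(list(range(10))))     # roughly 0.1 seconds charged
print(get_faster_charged(list(range(1000))))   # roughly 0.001 seconds charged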

Other answers

I guess anything less than O(1) is not possible. Any time taken by an algorithm is termed O(1). But what about the function below for O(1/n)? (I know many variants of this have already appeared among these answers, but I guess they all have some flaws, though nothing major, and they explain the concept well.) Here is one, just for the sake of argument:

from time import sleep

def one_by_n(n, C=10):     # n could be a float; C could be any positive number
  if n <= 0.0:             # if the input is actually 0, loop forever
    while True:
      sleep(1)             # or pass
  delta = 0.0001
  itr = delta
  while itr < C/n:         # runs roughly C/(n*delta) times, so less work as n grows
    itr += delta

So, as n increases, the function takes less and less time. Also, if the input is actually 0, the function will take forever to return.

One could argue that this will be limited by machine precision. Thus, since it has an upper bound, it is O(1). But we can bypass that as well, by passing n and C as strings and doing the addition and comparison on strings. With this method we can make n arbitrarily small. Thus, even if we ignore n = 0, the function has no upper bound.
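To make that last point concrete, here is a small sketch of my own, using Python's fractions.Fraction for exact rational arithmetic rather than literal string arithmetic (one_by_n_exact is just an illustrative name, mirroring one_by_n above); with exact numbers the iteration count has no fixed upper bound:

from fractions import Fraction

def one_by_n_exact(n, C=Fraction(10)):
    # Same loop as one_by_n above, but with exact rational arithmetic,
    # so there is no machine-precision floor on how small n can be.
    delta = Fraction(1, 10000)
    itr = delta
    steps = 0
    while itr < C / n:       # runs roughly C/(n*delta) times
        itr += delta
        steps += 1
    return steps

print(one_by_n_exact(Fraction(100)))    # C/n = 1/10   ->  about 1000 iterations
print(one_by_n_exact(Fraction(1000)))   # C/n = 1/100  ->  about 100 iterations
# n can be made arbitrarily small, e.g. Fraction(1, 10**50), so the
# iteration count (and hence the "run time") has no fixed upper bound.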

I also believe we cannot simply say the run time is O(1/n). We should write it as O(1 + 1/n).

Big-O notation represents the worst case of an algorithm, which is distinct from its typical run time. It is simple to prove that an O(1/n) algorithm is an O(1) algorithm. By definition,

O(1/n) --> T(n) <= 1/n, for all n >= C > 0
O(1/n) --> T(n) <= 1/C, since 1/n <= 1/C for all n >= C
O(1/n) --> O(1), since Big-O notation ignores constants (i.e. the value of C doesn't matter)
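For completeness, the same argument with the constants made explicit (my own restatement, not the answerer's wording):

\[
T(n) \in O(1/n) \iff \exists\, c > 0,\ n_0 > 0 \ \text{such that}\ T(n) \le \frac{c}{n} \ \text{for all}\ n \ge n_0
\]
\[
\Rightarrow\ T(n) \le \frac{c}{n} \le \frac{c}{n_0} \ \text{for all}\ n \ge n_0
\]
\[
\Rightarrow\ T(n) \in O(1), \ \text{with constant}\ \frac{c}{n_0}.
\]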

inline void O0Algorithm() {}

Most of the rest of the answers interpret big-O as being exclusively about the running time of an algorithm. But since the question didn't mention it, I thought it worth mentioning the other application of big-O in numerical analysis, which is about error.

Many algorithms can be O(h^p) or O(n^{-p}) depending on whether you're talking about step-size (h) or number of divisions (n). For example, in Euler's method, you look for an estimate of y(h) given that you know y(0) and dy/dx (the derivative of y). Your estimate of y(h) is more accurate the closer h is to 0. So in order to find y(x) for some arbitrary x, one takes the interval from 0 to x, splits it into n pieces, and runs Euler's method at each point, to get from y(0) to y(x/n) to y(2x/n), and so on.
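As a quick illustration (my own sketch, not part of the original answer), here is Euler's method on the test problem y' = y with y(0) = 1, whose exact solution at x = 1 is e; making the step size h = x/n smaller shrinks the error roughly in proportion, i.e. the error behaves like O(h) = O(1/n):

import math

def euler(f, y0, x, n):
    # Integrate y' = f(t, y) from 0 to x in n steps of size h = x/n.
    h = x / n
    t, y = 0.0, y0
    for _ in range(n):
        y += h * f(t, y)   # one Euler step
        t += h
    return y

exact = math.e
for n in (10, 100, 1000, 10000):
    err = abs(euler(lambda t, y: y, 1.0, 1.0, n) - exact)
    print(f"n = {n:6d}   error = {err:.6f}")
# The error shrinks roughly like 1/n, i.e. O(h) = O(1/n).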

Euler's method is then an O(h) or O(1/n) algorithm, where h is typically interpreted as a step size and n as the number of times you divide an interval.

You can also have O(1/h) in real numerical analysis applications, because of floating point rounding error. The smaller you make your interval, the more cancellation occurs in the implementation of certain algorithms, the more significant digits are lost, and therefore the more error is propagated through the algorithm.

For Euler's method, if you are using floating points, a small enough step causes cancellation: you are adding a small number to a big number, leaving the big number unchanged. For algorithms that calculate the derivative by subtracting two values of a function evaluated at two very close positions, approximating y'(x) with (y(x+h) - y(x)) / h, in smooth functions y(x+h) gets close to y(x), resulting in large cancellation and an estimate of the derivative with fewer significant figures. This will in turn propagate to whatever algorithm you require the derivative for (e.g., a boundary value problem).
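A small sketch of that cancellation effect (again my own illustration): approximating the derivative of sin at x = 1 with the forward difference (y(x+h) - y(x)) / h. The truncation error shrinks like O(h), but once h gets too small the subtraction cancels almost all of the significant digits and the rounding error grows roughly like O(1/h), so the estimate gets worse again:

import math

def forward_diff(f, x, h):
    # (f(x+h) - f(x)) / h: truncation error is O(h), but the subtraction
    # cancels more and more leading digits as h shrinks, and that rounding
    # error grows roughly like O(1/h).
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)
for h in (1e-2, 1e-5, 1e-8, 1e-11, 1e-14):
    err = abs(forward_diff(math.sin, 1.0, h) - exact)
    print(f"h = {h:.0e}   error = {err:.2e}")
# The error first falls as h shrinks, then rises again once cancellation dominates.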

I believe quantum algorithms can do multiple computations "at once" via superposition...

I doubt that is a useful answer.