A post on reddit raises an interesting question:

Tail-recursive functions can be trivially converted to iterative functions. Others can be converted by using an explicit stack. Can every recursion be converted to iteration?

The (counter?) example in the post is:

(define (num-ways x y)
  (cond ((= x 0) 1)        ; `case` was used where `cond` is needed
        ((= y 0) 1)
        (else (num-ways2 x y))))

(define (num-ways2 x y)
  (+ (num-ways (- x 1) y)
     (num-ways x (- y 1))))

Current answer

It is always possible to convert a recursive algorithm to a non-recursive one, but often the logic becomes much more complex, and doing it requires the use of a stack. In fact, recursion itself already uses a stack: the function call stack.

Details: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Functions

Other answers

In principle, in a language where both the data structures and the call stack have unbounded state, it is always possible to remove recursion and replace it with iteration. This is a basic consequence of the Church-Turing thesis.

Given an actual programming language, the answer is not as obvious. The problem is that it is quite possible to have a language where the amount of memory that can be allocated in the program is limited but where the amount of call stack that can be used is unbounded (32-bit C where the address of stack variables is not accessible). In this case, recursion is more powerful simply because it has more memory it can use; there is not enough explicitly allocatable memory to emulate the call stack. For a detailed discussion on this, see this discussion.

I would say yes: a function call is nothing but a goto plus a stack operation (roughly speaking). All you need to do is mimic the stack that is built while calling functions, and do something similar to a goto (you can mimic gotos even in languages that don't explicitly have the keyword).
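For the tail-recursive case the "goto" view is especially direct: the recursive call just rebinds the arguments and jumps back to the top, which is exactly what a loop does. A minimal sketch in Python (the function names here are illustrative, not from the post):

```python
def gcd_recursive(a, b):
    # Tail-recursive form: the recursive call is the last thing done.
    if b == 0:
        return a
    return gcd_recursive(b, a % b)

def gcd_iterative(a, b):
    # The "goto" becomes a while loop; the call becomes rebinding a and b.
    # No stack is needed at all, because there is no pending work after the call.
    while b != 0:
        a, b = b, a % b
    return a
```

Non-tail calls need the explicit stack as well, since work remains pending after each call returns.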

Yes, by explicitly using a stack (though recursion is much more pleasant to read, IMHO).

Yes, it is always possible to write a non-recursive version. The trivial solution is to use a stack data structure and simulate the recursive execution.
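Applied to the num-ways function from the question, that simulation looks like the following sketch (in Python for brevity; the stack holds pending (x, y) subproblems in place of call frames):

```python
def num_ways_recursive(x, y):
    # Direct transcription of the Scheme code in the question.
    if x == 0 or y == 0:
        return 1
    return num_ways_recursive(x - 1, y) + num_ways_recursive(x, y - 1)

def num_ways_iterative(x, y):
    # Replace the call stack with an explicit stack of pending subproblems.
    total = 0
    stack = [(x, y)]
    while stack:
        x, y = stack.pop()
        if x == 0 or y == 0:
            total += 1                  # each base case contributes 1 to the sum
        else:
            stack.append((x - 1, y))    # the two recursive calls become
            stack.append((x, y - 1))    # two pushes of smaller subproblems
    return total
```

This works here because the recursion only sums base-case results; recursions that combine intermediate results in more intricate ways need a stack that also records the continuation state, but the principle is the same.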

Can you always turn a recursive function into an iterative one? Yes, if I remember correctly this follows from the Church-Turing thesis. In lay terms, it states that whatever is computable by recursive functions is computable by an iterative model (such as a Turing machine), and vice versa. The thesis does not tell you precisely how to do the conversion, but it does say that it is definitely possible.

In many cases, converting a recursive function is easy. Knuth presents several techniques in The Art of Computer Programming. And often, a thing computed recursively can be computed in far less time and space by a completely different method. The classic example is the Fibonacci numbers. You have surely met this problem in your degree program.
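The Fibonacci case is worth spelling out, since the iterative version is not merely a mechanical translation but a genuinely different algorithm. A sketch in Python:

```python
def fib_recursive(n):
    # Naive recursion from the definition: exponential time,
    # O(n) call-stack depth.
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # A different method entirely: O(n) time, O(1) space, no stack.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

The iterative version is not "recursion with an explicit stack"; it exploits the structure of the problem to avoid the stack altogether, which is the point the answer is making.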

On the flip side of this coin, we can certainly imagine a programming system so advanced as to treat a recursive definition of a formula as an invitation to memoize prior results, thus offering the speed benefit without the hassle of telling the computer exactly which steps to follow in the computation of a formula with a recursive definition. Dijkstra almost certainly did imagine such a system. He spent a long time trying to separate the implementation from the semantics of a programming language. Then again, his non-deterministic and multiprocessing programming languages are in a league above the practicing professional programmer.
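Systems close to what the answer imagines exist today in ordinary libraries: a memoizing cache can treat a recursive definition as an invitation to reuse prior results, with no change to the definition itself. A sketch in Python using functools.lru_cache (the decorator is standard; applying it to the question's num-ways function is my own illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_ways(x, y):
    # The recursive definition is untouched; the cache ensures each
    # (x, y) pair is computed only once, turning exponential time
    # into O(x * y).
    if x == 0 or y == 0:
        return 1
    return num_ways(x - 1, y) + num_ways(x, y - 1)
```

Note that memoization removes the redundant work but not the call stack, so very deep recursions can still overflow; it addresses the speed concern, not the space one.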

At the end of the day, many functions are easier to understand, read, and write in recursive form. Unless there is a compelling reason, you probably should not (manually) convert them to an explicitly iterative algorithm. Your computer will handle that job correctly.

I can see one compelling reason. Suppose you've a prototype system in a super-high level language like [donning asbestos underwear] Scheme, Lisp, Haskell, OCaml, Perl, or Pascal. Suppose conditions are such that you need an implementation in C or Java. (Perhaps it's politics.) Then you could certainly have some functions written recursively but which, translated literally, would explode your runtime system. For example, infinite tail recursion is possible in Scheme, but the same idiom causes a problem for existing C environments. Another example is the use of lexically nested functions and static scope, which Pascal supports but C doesn't.

In that case, you might try to overcome the political resistance to the original language. You might find yourself badly re-implementing Lisp, per Greenspun's (tongue-in-cheek) Tenth Rule. Or you might just find a completely different approach to the solution. But in any event, there surely is a way.