Is there a performance hit if we use a loop instead of recursion, or vice versa, in algorithms where both can serve the same purpose? For example: checking whether a given string is a palindrome. I have seen many programmers use recursion as a way of showing off when a simple iterative algorithm would fit the bill. Does the compiler play a vital role in deciding which to use?
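For concreteness, here is a minimal illustrative sketch of the palindrome check mentioned above, written both iteratively and recursively in Python (purely an example, not taken from any of the answers):

def is_palindrome_iterative(s):
    # Walk inwards from both ends of the string.
    left, right = 0, len(s) - 1
    while left < right:
        if s[left] != s[right]:
            return False
        left += 1
        right -= 1
    return True

def is_palindrome_recursive(s):
    # Base case: strings of length 0 or 1 are palindromes.
    if len(s) <= 1:
        return True
    return s[0] == s[-1] and is_palindrome_recursive(s[1:-1])

assert is_palindrome_iterative("racecar") and is_palindrome_recursive("racecar")
assert not is_palindrome_iterative("hello") and not is_palindrome_recursive("hello")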
Current answer
In C++, if the recursive function is a templated one, the compiler has more opportunity to optimize it, since all type deduction and function instantiation happen at compile time. Modern compilers can also inline the function where possible. So with optimization flags such as -O3 or -O2 in g++, recursion may have a chance to be faster than iteration. In iterative code, the compiler gets less opportunity to optimize, because the code is already in a more or less optimal state (if written well enough).
In my case, I tried to implement matrix exponentiation both recursively and iteratively using Armadillo matrix objects. The algorithm can be found here: https://en.wikipedia.org/wiki/Exponentiation_by_squaring. My functions were templated, and I computed the 10th power of 1,000,000 12x12 matrices. I got the following results:
iterative + optimisation flag -O3 -> 2.79.. sec
recursive + optimisation flag -O3 -> 1.32.. sec
iterative + No-optimisation flag -> 2.83.. sec
recursive + No-optimisation flag -> 4.15.. sec
These results were obtained using gcc-4.8 with the C++11 flag (-std=c++11) and Armadillo 6.1 with Intel MKL. The Intel compiler showed similar results.
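To show the shape of the two variants, here is a minimal illustrative sketch of exponentiation by squaring in Python with plain integers. It is not the templated C++/Armadillo code that was actually timed; it only demonstrates the recursive and iterative forms of the algorithm from the Wikipedia link above:

def power_recursive(x, n):
    # Exponentiation by squaring, recursive form.
    if n == 0:
        return 1
    if n % 2 == 0:
        half = power_recursive(x, n // 2)
        return half * half
    return x * power_recursive(x, n - 1)

def power_iterative(x, n):
    # Exponentiation by squaring, iterative form.
    result = 1
    while n > 0:
        if n % 2 == 1:
            result *= x
        x *= x
        n //= 2
    return result

assert power_recursive(3, 10) == power_iterative(3, 10) == 3 ** 10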
Other answers
"Is there a performance hit if we use a loop instead of recursion, or vice versa, in algorithms where both can serve the same purpose?"
Usually yes: if you are writing in an imperative language, iteration will run faster than recursion. The performance hit is minimized in problems where the iterative solution requires manipulating stacks and popping items off a stack due to the recursive nature of the problem. There are many cases where the recursive implementation is much easier to read because the code is much shorter, so you do want to consider maintainability, especially when the problem has a recursive nature. Take for example:
The recursive implementation of the Tower of Hanoi:
def TowerOfHanoi(n, source, destination, auxiliary):
    if n == 1:
        print("Move disk 1 from source", source, "to destination", destination)
        return
    TowerOfHanoi(n - 1, source, auxiliary, destination)
    print("Move disk", n, "from source", source, "to destination", destination)
    TowerOfHanoi(n - 1, auxiliary, destination, source)
Pretty short and very easy to read. Compare it with the corresponding iterative TowerOfHanoi:
# Python3 program for iterative Tower of Hanoi
import sys

# A structure to represent a stack
class Stack:
    # Constructor to set the data of
    # the newly created stack
    def __init__(self, capacity):
        self.capacity = capacity
        self.top = -1
        self.array = [0] * capacity

# Function to create a stack of given capacity
def createStack(capacity):
    stack = Stack(capacity)
    return stack

# Stack is full when top is equal to the last index
def isFull(stack):
    return (stack.top == (stack.capacity - 1))

# Stack is empty when top is equal to -1
def isEmpty(stack):
    return (stack.top == -1)

# Function to add an item to stack.
# It increases top by 1
def push(stack, item):
    if (isFull(stack)):
        return
    stack.top += 1
    stack.array[stack.top] = item

# Function to remove an item from stack.
# It decreases top by 1
def Pop(stack):
    if (isEmpty(stack)):
        return -sys.maxsize
    Top = stack.top
    stack.top -= 1
    return stack.array[Top]

# Function to implement legal
# movement between two poles
def moveDisksBetweenTwoPoles(src, dest, s, d):
    pole1TopDisk = Pop(src)
    pole2TopDisk = Pop(dest)

    # When pole 1 is empty
    if (pole1TopDisk == -sys.maxsize):
        push(src, pole2TopDisk)
        moveDisk(d, s, pole2TopDisk)

    # When pole 2 is empty
    elif (pole2TopDisk == -sys.maxsize):
        push(dest, pole1TopDisk)
        moveDisk(s, d, pole1TopDisk)

    # When top disk of pole1 > top disk of pole2
    elif (pole1TopDisk > pole2TopDisk):
        push(src, pole1TopDisk)
        push(src, pole2TopDisk)
        moveDisk(d, s, pole2TopDisk)

    # When top disk of pole1 < top disk of pole2
    else:
        push(dest, pole2TopDisk)
        push(dest, pole1TopDisk)
        moveDisk(s, d, pole1TopDisk)

# Function to show the movement of disks
def moveDisk(fromPeg, toPeg, disk):
    print("Move the disk", disk, "from '", fromPeg, "' to '", toPeg, "'")

# Function to implement TOH puzzle
def tohIterative(num_of_disks, src, aux, dest):
    s, d, a = 'S', 'D', 'A'

    # If number of disks is even, then interchange
    # destination pole and auxiliary pole
    if (num_of_disks % 2 == 0):
        temp = d
        d = a
        a = temp
    total_num_of_moves = int(pow(2, num_of_disks) - 1)

    # Larger disks will be pushed first
    for i in range(num_of_disks, 0, -1):
        push(src, i)

    for i in range(1, total_num_of_moves + 1):
        if (i % 3 == 1):
            moveDisksBetweenTwoPoles(src, dest, s, d)
        elif (i % 3 == 2):
            moveDisksBetweenTwoPoles(src, aux, s, a)
        elif (i % 3 == 0):
            moveDisksBetweenTwoPoles(aux, dest, a, d)

# Input: number of disks
num_of_disks = 3

# Create three stacks of size 'num_of_disks'
# to hold the disks
src = createStack(num_of_disks)
dest = createStack(num_of_disks)
aux = createStack(num_of_disks)

tohIterative(num_of_disks, src, aux, dest)
Now the first one is way easier to read because, surprise surprise, shorter code is usually easier to understand than code that is ten times longer. Sometimes you want to ask yourself: is the extra performance gain really worth it, compared to the hours wasted debugging the code? Is the iterative TowerOfHanoi faster than the recursive TowerOfHanoi? Probably, but not by a big margin. Would I like to program recursive problems like TowerOfHanoi using iteration? Hell no. Next we have another recursive function, the Ackermann function. Using recursion:
def ackermann(m, n):
    if m == 0:
        # BASE CASE
        return n + 1
    elif m > 0 and n == 0:
        # RECURSIVE CASE
        return ackermann(m - 1, 1)
    elif m > 0 and n > 0:
        # RECURSIVE CASE
        return ackermann(m - 1, ackermann(m, n - 1))
Using iteration:
callStack = [{'m': 2, 'n': 3, 'indentation': 0, 'instrPtr': 'start'}]
returnValue = None

while len(callStack) != 0:
    m = callStack[-1]['m']
    n = callStack[-1]['n']
    indentation = callStack[-1]['indentation']
    instrPtr = callStack[-1]['instrPtr']

    if instrPtr == 'start':
        print('%sackermann(%s, %s)' % (' ' * indentation, m, n))

        if m == 0:
            # BASE CASE
            returnValue = n + 1
            callStack.pop()
            continue
        elif m > 0 and n == 0:
            # RECURSIVE CASE
            callStack[-1]['instrPtr'] = 'after first recursive case'
            callStack.append({'m': m - 1, 'n': 1, 'indentation': indentation + 1, 'instrPtr': 'start'})
            continue
        elif m > 0 and n > 0:
            # RECURSIVE CASE
            callStack[-1]['instrPtr'] = 'after second recursive case, inner call'
            callStack.append({'m': m, 'n': n - 1, 'indentation': indentation + 1, 'instrPtr': 'start'})
            continue
    elif instrPtr == 'after first recursive case':
        returnValue = returnValue
        callStack.pop()
        continue
    elif instrPtr == 'after second recursive case, inner call':
        callStack[-1]['innerCallResult'] = returnValue
        callStack[-1]['instrPtr'] = 'after second recursive case, outer call'
        callStack.append({'m': m - 1, 'n': returnValue, 'indentation': indentation + 1, 'instrPtr': 'start'})
        continue
    elif instrPtr == 'after second recursive case, outer call':
        returnValue = returnValue
        callStack.pop()
        continue

print(returnValue)
Again, the recursive implementation is much easier to understand. So my conclusion is: use recursion if the problem is recursive in nature and would otherwise require manipulating items on an explicit stack.
A stack overflow will only occur if you are programming in a language that doesn't have built-in memory management... Otherwise, make sure you have something in your function (or a function call, the standard libraries, etc.). Without recursion it simply would not be possible to have things like... Google or SQL, or anywhere one must efficiently sort through large data structures (classes) or databases.
If you want to traverse files, recursion is one way to do it, and I'm fairly sure that is how find * | grep * works. It is something like a double recursion, especially with the pipe (but don't make a bunch of system calls, as so many people love to do, if it's something you are going to put out there for others to use).
Higher-level languages, and even clang/cpp, may implement the same thing behind the scenes.
Recursion has a higher memory overhead, because each recursive call typically needs to push a return address onto the call stack so that the program can come back to that point later.
That said, in many cases recursion is more natural and more readable than a loop, for example when working with trees. In those cases I would recommend sticking with recursion.
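As a small illustrative sketch (the Node class below is a hypothetical stand-in for whatever tree type you are using), compare summing a binary tree recursively with doing it iteratively via an explicit stack:

class Node:
    # A hypothetical binary-tree node with a value and two children.
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def sum_recursive(node):
    # The recursive version mirrors the shape of the data structure.
    if node is None:
        return 0
    return node.value + sum_recursive(node.left) + sum_recursive(node.right)

def sum_iterative(root):
    # The iterative version has to manage an explicit stack by hand.
    total, stack = 0, [root]
    while stack:
        node = stack.pop()
        if node is not None:
            total += node.value
            stack.append(node.left)
            stack.append(node.right)
    return total

tree = Node(1, Node(2), Node(3, Node(4)))
assert sum_recursive(tree) == sum_iterative(tree) == 10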
I'm going to answer your question by designing a Haskell data structure by "induction", which is a kind of "dual" to recursion. Then I will show how this duality leads to nice things.
We introduce a type for simple trees:
data Tree a = Branch (Tree a) (Tree a)
            | Leaf a
            deriving (Eq)
We can read this definition as saying "a tree is either a Branch (containing two trees) or a Leaf (containing a data value)". The Leaf is a sort of minimal case. If a tree is not a leaf, then it must be a compound tree containing two trees. These are the only cases.
Let's make a tree:
example :: Tree Int
example = Branch (Leaf 1)
                 (Branch (Leaf 2)
                         (Leaf 3))
Now, suppose we want to add 1 to each value in the tree. We can do this with:
addOne :: Tree Int -> Tree Int
addOne (Branch a b) = Branch (addOne a) (addOne b)
addOne (Leaf a) = Leaf (a + 1)
First, notice that this is in fact a recursive definition. It takes the data constructors Branch and Leaf as its cases (and since Leaf is minimal and these are the only possible cases), we can be sure that the function terminates.
What would it take to write addOne in an iterative style? What would looping over an arbitrary number of branches look like?
Also, this kind of recursion can often be factored out in terms of a "functor". We can make trees into a Functor by defining:
instance Functor Tree where
  fmap f (Leaf a)     = Leaf (f a)
  fmap f (Branch a b) = Branch (fmap f a) (fmap f b)
and then defining:
addOne' = fmap (+1)
We can factor out other recursion schemes as well, such as the catamorphism (or fold) for an algebraic data type. Using a catamorphism, we can write:
-- Assuming the recursion-schemes library, with the base-functor constructors
-- LeafF and BranchF generated by makeBaseFunctor ''Tree:
addOne'' = cata go where
  go (LeafF a)     = Leaf (a + 1)
  go (BranchF a b) = Branch a b
Comparing recursion to iteration is like comparing a Phillips-head screwdriver to a flat-head screwdriver. For the most part you could remove any Phillips-head screw with a flat-head driver, but it is just easier if you use the screwdriver designed for that screw, right?
Some algorithms just lend themselves to recursion because of the way they are designed (Fibonacci sequences, traversing tree-like structures, and so on). Recursion makes the algorithm more succinct and easier to understand (and therefore more shareable and reusable).
Also, some recursive algorithms use lazy evaluation (in languages that support it, such as Haskell), which can make them more efficient than their iterative counterparts: they only perform the expensive calculations at the moment the results are needed, rather than on every pass through a loop.
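Python does not have pervasive lazy evaluation the way Haskell does, but as a rough illustrative sketch of the idea, a generator only does its work when a consumer actually asks for the next value:

import itertools

def squares():
    # Lazily yields 0, 1, 4, 9, ...; nothing is computed until requested.
    n = 0
    while True:
        yield n * n
        n += 1

# Only the first five squares are ever computed,
# even though the stream is conceptually infinite.
print(list(itertools.islice(squares(), 5)))  # [0, 1, 4, 9, 16]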
This should be enough to get you started. I'll also dig up some articles and examples for you.
Link 1: Haskell vs PHP (recursion vs iteration)
Here is an example where a programmer had to process a large data set using PHP. He shows how easy it would have been to handle it with recursion in Haskell, but since PHP had no simple way to take the same approach, he was forced to use iteration to get the result.
http://blog.webspecies.co.uk/2011-05-31/lazy-evaluation-with-php.html
Link 2: Mastering recursion
Most of recursion's bad reputation comes from its high cost and inefficiency in imperative languages. The author of this article discusses how to optimize recursive algorithms to make them faster and more efficient. He also covers how to convert a traditional loop into a recursive function, and the benefits of using tail recursion. I think his closing words sum up some of my key points:
"Recursive programming gives the programmer a better way of organizing code in a way that is both maintainable and logically consistent." https://developer.ibm.com/articles/l-recurs/
Link 3: Is recursion faster than looping? (answer)
Here is a link to an answer to a Stack Overflow question similar to yours. The author points out that many of the benchmarks comparing recursion and looping are very language specific: imperative languages are typically faster with loops and slower with recursion, and the reverse tends to hold for functional languages. The main point to take from this link is that it is very difficult to answer the question in a language-agnostic, situation-blind sense.
Is recursion faster than looping?