This question may sound fairly elementary, but it's a debate I had with another developer I work with.

I was taking care to stack allocate things where I could, instead of heap allocating them. He was talking to me and watching over my shoulder, and commented that it wasn't necessary because they are the same performance-wise.

It was always my impression that growing the stack is constant time, and heap allocation's performance depends on the current complexity of the heap for both allocation (finding a hole of the proper size) and deallocation (collapsing holes to reduce fragmentation, as many standard library implementations take time to do this during deletes, if I'm not mistaken).

This strikes me as something that would probably be very compiler dependent. For this project in particular I am using a Metrowerks compiler for the PPC architecture. Insight on this combination would be most helpful, but in general, for GCC and MSVC++, what is the case? Is heap allocation not as high performing as stack allocation? Is there no difference? Or are the differences so minute that it becomes pointless micro-optimization?


You can write a special heap allocator for specific sizes of objects that is very performant. However, the general heap allocator is not particularly performant.

I also agree with Torbjörn Gyllebring about the expected lifetime of objects. Good point!


Stack allocation is much faster since all it really does is move the stack pointer. Using memory pools, you can get comparable performance out of heap allocation, but that comes with a slight added complexity and its own headaches.
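To illustrate the memory-pool idea, here is a minimal sketch (the `Pool` class is hypothetical, not a real library API): both allocation and deallocation just push or pop a free list, which is why a pool can approach stack-like speed.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal fixed-size block pool (illustrative sketch, not production code).
// Allocation pops a pointer off a free list; deallocation pushes it back.
// Both are O(1), much like bumping a stack pointer.
class Pool {
public:
    Pool(std::size_t block_size, std::size_t count)
        : storage_(block_size * count) {
        for (std::size_t i = 0; i < count; ++i)
            free_.push_back(storage_.data() + i * block_size);
    }
    void* allocate() {
        if (free_.empty()) return nullptr;  // pool exhausted
        std::byte* p = free_.back();
        free_.pop_back();
        return p;
    }
    void deallocate(void* p) {
        free_.push_back(static_cast<std::byte*>(p));
    }
private:
    std::vector<std::byte> storage_;   // one contiguous slab
    std::vector<std::byte*> free_;     // free list of fixed-size blocks
};
```

The headaches mentioned above are exactly what this sketch leaves out: alignment, variable sizes, growth, and thread safety.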

Also, stack vs. heap is not only a performance consideration; it also tells you a lot about the expected lifetime of objects.


I don't think stack allocation and heap allocation are generally interchangeable. I also hope that the performance of both of them is sufficient for general use.

For small items, I'd strongly recommend whichever allocation is better suited to the scope of the allocation. For larger items, the heap may be necessary.

On 32-bit operating systems that have multiple threads, the stack is often rather limited (though typically at least a few MB), because the address space needs to be carved up and sooner or later one thread stack will run into another. On single-threaded systems (Linux glibc single-threaded, anyway) the limitation is much less, because the stack can just grow and grow.

On 64-bit operating systems there is enough address space to make thread stacks quite large.


The stack is much faster. It literally only uses a single instruction on most architectures, in most cases, e.g. on x86:

sub esp, 0x10

(That moves the stack pointer down by 0x10 bytes, thereby "allocating" those bytes for use by a variable.)

Of course, the stack's size is very, very finite, as you will quickly find out if you overuse stack allocation or try to do recursion :-)

Also, there's little reason to optimize the performance of code that doesn't verifiably need it, such as demonstrated by profiling. "Premature optimization" often causes more problems than it's worth.

My rule of thumb: if I know I'm going to need some data at compile time, and it's under a few hundred bytes in size, I stack-allocate it. Otherwise I heap-allocate it.


Usually, stack allocation just consists of subtracting from the stack pointer register. That's tons faster than searching a heap.

Sometimes stack allocation requires adding a page(s) of virtual memory. Adding a new page of zeroed memory doesn't require reading a page from disk, so usually this is still going to be tons faster than searching a heap (especially if part of the heap was paged out too). In a rare situation, and you could construct such an example, enough space just happens to be available in part of the heap which is already in RAM, but allocating a new page for the stack has to wait for some other page to get written out to disk. In that rare situation, the heap is faster.


I think the lifetime is crucial, and whether the thing being allocated has to be constructed in a complex way. For example, in transaction-driven modeling, you usually have to fill in and pass a transaction structure with a bunch of fields to operation functions. Look at the OSCI SystemC TLM-2.0 standard for an example.

Allocating these on the stack, close to the call to the operation, tends to cause enormous overhead, as the construction is expensive. The good way is to allocate on the heap and reuse the transaction objects, either by pooling or with a simple policy like "this module only ever needs one transaction object".

This is many times faster than allocating the object on each operation call.

The reason is simply that the object has an expensive construction and a fairly long useful lifetime.
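A hedged sketch of that "one transaction object per module" policy (the `Transaction` and `Module` names here are invented for illustration; they are not from TLM-2.0):

```cpp
#include <cassert>
#include <string>

// Stand-in for a transaction struct whose construction is expensive.
struct Transaction {
    std::string payload;
    int id = 0;
    Transaction() : payload(4096, '\0') {}  // costly one-time construction
};

// The module constructs its transaction object once and only refills the
// fields on each operation call, instead of constructing a fresh object.
class Module {
public:
    int process(int id) {
        tx_.id = id;          // refill the fields...
        return do_call(tx_);  // ...and pass the same object to the operation
    }
private:
    static int do_call(const Transaction& t) { return t.id * 2; }
    Transaction tx_;  // constructed once, lives as long as the module
};
```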

I would say: try both and see what works best in your case, because it can really depend on the behavior of your code.


Probably the biggest problem of heap allocation versus stack allocation is that heap allocation in the general case is an unbounded operation, and thus you cannot use it where timing is an issue.

For other applications where timing isn't an issue, it may not matter as much, but if you heap-allocate a lot, this will affect execution speed. Always try to use the stack for short-lived and frequently allocated memory (for instance in loops), and, for memory that lives as long as possible, do the heap allocation during application startup.


The stack has a limited capacity, while the heap does not. The typical stack for a process or thread is around 8K, and you cannot change the size once it is allocated.

A stack variable follows scoping rules, while a heap variable doesn't. If your instruction pointer goes beyond a function, all the new variables associated with that function go away.

Most important of all, you can't predict the overall function call chain in advance, so a mere 200-byte allocation on your part may raise a stack overflow. This is especially important if you're writing a library rather than an application.


Honestly, it's trivial to write a program to compare the performance:

#include <ctime>
#include <iostream>

namespace {
    class empty { }; // even empty classes take up 1 byte of space, minimum
}

int main()
{
    std::clock_t start = std::clock();
    for (int i = 0; i < 100000; ++i)
        empty e;
    std::clock_t duration = std::clock() - start;
    std::cout << "stack allocation took " << duration << " clock ticks\n";
    start = std::clock();
    for (int i = 0; i < 100000; ++i) {
        empty* e = new empty;
        delete e;
    }
    duration = std::clock() - start;
    std::cout << "heap allocation took " << duration << " clock ticks\n";
}

It is said that a foolish consistency is the hobgoblin of little minds. Apparently optimizing compilers are the hobgoblins of many programmers' minds. This discussion used to be at the bottom of the answer, but people apparently can't be bothered to read that far, so I'm moving it up here to avoid getting questions that I have already answered.

An optimizing compiler may notice that this code does nothing, and may optimize it all away. It is the optimizer's job to do stuff like that, and fighting the optimizer is a fool's errand.

I would recommend compiling this code with optimization turned off, because there is no good way to fool every optimizer currently in use or that will be in use in the future.

Anybody who turns the optimizer on and then complains about fighting it should be subject to public ridicule.

If I cared about nanosecond precision I wouldn't use std::clock(). If I wanted to publish the results as a PhD thesis I would make a bigger deal out of this, and I would probably compare GCC, Tendra/Ten15, LLVM, Watcom, Borland, Visual C++, Digital Mars, ICC and other compilers. As it is, heap allocation takes hundreds of times longer than stack allocation, and I don't see anything useful about investigating the question any further.

The optimizer has a mission to get rid of the code I'm testing. I don't see any reason to tell the optimizer to run and then try to fool the optimizer into not actually optimizing. But if I saw value in doing that, I would do one or more of the following:

1. Add a data member to empty, and access that data member in the loop; but if I only ever read from the data member the optimizer can do constant folding and remove the loop; if I only ever write to the data member, the optimizer may skip all but the very last iteration of the loop. Additionally, the question wasn't "stack allocation and data access vs. heap allocation and data access."

2. Declare e volatile, but volatile is often compiled incorrectly (PDF).

3. Take the address of e inside the loop (and maybe assign it to a variable that is declared extern and defined in another file). But even in this case, the compiler may notice that -- on the stack at least -- e will always be allocated at the same memory address, and then do constant folding like in (1) above. I get all iterations of the loop, but the object is never actually allocated.

Beyond the obvious, this test is flawed in that it measures both allocation and deallocation, and the original question didn't ask about deallocation. Of course variables allocated on the stack are automatically deallocated at the end of their scope, so not calling delete would (1) skew the numbers (stack deallocation is included in the numbers about stack allocation, so it's only fair to measure heap deallocation) and (2) cause a pretty bad memory leak, unless we keep a reference to the new pointer and call delete after we've got our time measurement.

On my machine, using g++ 3.4.4 on Windows, I get "0 clock ticks" for both stack and heap allocation for anything less than 100,000 allocations, and even then I get "0 clock ticks" for stack allocation and "15 clock ticks" for heap allocation. When I measure 10,000,000 allocations, stack allocation takes 31 clock ticks and heap allocation takes 1562 clock ticks.


Yes, an optimizing compiler may elide creating the empty objects. If I understand correctly, it may even elide the whole first loop. When I bumped up the iterations to 10,000,000, stack allocation took 31 clock ticks and heap allocation took 1562 clock ticks. I think it's safe to say that, without telling g++ to optimize the executable, g++ did not elide the constructors.


In the years since I wrote this, the preference on Stack Overflow has been to post performance from optimized builds. In general, I think this is correct. However, I still think it's silly to ask the compiler to optimize code when you in fact do not want that code optimized. It strikes me as being very similar to paying extra for valet parking, but refusing to hand over the keys. In this particular case, I don't want the optimizer running.

Using a slightly modified version of the benchmark (to address the valid point that the original program didn't allocate something on the stack each time through the loop) and compiling without optimizations but linking to release libraries (to address the valid point that we don't want to include any slowdown caused by linking to debug libraries):

#include <cstdio>
#include <chrono>

namespace {
    void on_stack()
    {
        int i;
    }

    void on_heap()
    {
        int* i = new int;
        delete i;
    }
}

int main()
{
    auto begin = std::chrono::system_clock::now();
    for (int i = 0; i < 1000000000; ++i)
        on_stack();
    auto end = std::chrono::system_clock::now();

    std::printf("on_stack took %f seconds\n", std::chrono::duration<double>(end - begin).count());

    begin = std::chrono::system_clock::now();
    for (int i = 0; i < 1000000000; ++i)
        on_heap();
    end = std::chrono::system_clock::now();

    std::printf("on_heap took %f seconds\n", std::chrono::duration<double>(end - begin).count());
    return 0;
}

shows:

on_stack took 2.070003 seconds
on_heap took 57.980081 seconds

on my system, when compiled with the command line cl foo.cc /Od /MT /EHsc.

You may not agree with my approach to getting a non-optimized build. That's fine: feel free to modify the benchmark as much as you want. When I turn on optimization, I get:

on_stack took 0.000000 seconds
on_heap took 51.608723 seconds

That's not because stack allocation is actually instantaneous, but because any half-decent compiler can notice that on_stack doesn't do anything useful and it can be optimized away. GCC on my Linux laptop notices that on_heap doesn't do anything useful either, and optimizes it away as well:

on_stack took 0.000003 seconds
on_heap took 0.000002 seconds

It's not just that stack allocation is faster. You also win a lot by using stack variables: they have better locality of reference, and finally, deallocation is a lot cheaper as well.


There's a general point to be made about this kind of optimization.

The speedup you get is proportional to the amount of time the program counter actually spends in that code.

If you sample the program counter, you will find out where it spends its time, and that is usually in a tiny part of the code, and often in library routines you have no control over.

Only if you find it spending significant time in the heap allocation of your objects will it be noticeably faster to stack-allocate them.


Don't make premature assumptions, as other application code and usage can impact your function, so looking at a function in isolation is of no use.

If you are serious about the application, then VTune it or use any similar profiling tool and look at the hotspots.


As mentioned before, stack allocation is simply moving the stack pointer, i.e. a single instruction on most architectures. Compare that to what generally happens in the case of heap allocation.

The operating system maintains the parts of free memory as a linked list, with the payload data consisting of a pointer to the starting address of the free part and the size of the free part. To allocate X bytes of memory, the linked list is traversed and each node is visited in sequence, checking whether its size is at least X. When a part with size P >= X is found, P is split into two parts of sizes X and P - X. The linked list is updated and the pointer to the first part is returned.
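The traversal just described might be sketched like this (a simplified first-fit search; real allocators also coalesce neighboring holes, keep size classes, and so on):

```cpp
#include <cassert>
#include <cstddef>

// A free section: start address, size, and the next section in the list.
struct FreeNode {
    std::size_t start;
    std::size_t size;
    FreeNode* next;
};

// Walk the free list until a section of size P >= want is found, split it
// into `want` and P - want, and return the start of the carved-off part.
// Returns SIZE_MAX when no hole is big enough.
std::size_t first_fit(FreeNode* head, std::size_t want) {
    for (FreeNode* n = head; n != nullptr; n = n->next) {
        if (n->size >= want) {
            std::size_t addr = n->start;
            n->start += want;  // shrink the hole in place
            n->size -= want;
            return addr;
        }
    }
    return static_cast<std::size_t>(-1);
}
```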

As you can see, heap allocation depends on many factors, like how much memory you request, how fragmented the memory is, and so on.


In general, stack allocation is faster than heap allocation, as mentioned by almost every answer above. A stack push or pop is O(1), whereas allocating or freeing from a heap could require a walk of previous allocations. However, you usually shouldn't be allocating in tight, performance-intensive loops, so the choice will usually come down to other factors.

It might be good to make this distinction: you can use a "stack allocator" on the heap. Strictly speaking, I take stack allocation to mean the actual method of allocation rather than the location of the allocation. If you're allocating a lot of stuff on the actual program stack, that could be bad for a variety of reasons. On the other hand, using a stack method to allocate on the heap when possible is the best choice you can make for an allocation method.

Since you mentioned Metrowerks and PPC, I'm guessing you mean Wii. In that case memory is at a premium, and using a stack allocation method wherever possible guarantees that you don't waste memory on fragments. Of course, doing this requires a lot more care than "normal" heap allocation methods. It's wise to evaluate the tradeoffs for each situation.


An interesting thing I learned about stack vs. heap allocation on the Xbox 360 Xenon processor, which may also apply to other multicore systems, is that allocating on the heap causes a critical section to be entered to halt all other cores, so that the allocation doesn't conflict. Thus, in a tight loop, stack allocation was the way to go for fixed-size arrays, as it prevented stalls.

This can be another speedup to consider if you're coding for multicore/multiproc, in that your stack allocation will only be viewable by the core running your scoped function, and that will not affect any other cores/CPUs.


Aside from the order-of-magnitude performance advantage over heap allocation, stack allocation is preferable for long-running server applications. Even the best-managed heaps eventually get so fragmented that application performance degrades.


Stack allocation is almost always as fast as or faster than heap allocation, although it is certainly possible for a heap allocator to simply use a stack-based allocation technique.

However, there are larger issues when dealing with the overall performance of stack vs. heap based allocation (or in slightly better terms, local vs. external allocation). Usually, heap (external) allocation is slow because it is dealing with many different kinds of allocations and allocation patterns. Reducing the scope of the allocator you are using (making it local to the algorithm/code) will tend to increase performance without any major changes. Adding better structure to your allocation patterns, for example, forcing a LIFO ordering on allocation and deallocation pairs can also improve your allocator's performance by using the allocator in a simpler and more structured way. Or, you can use or write an allocator tuned for your particular allocation pattern; most programs allocate a few discrete sizes frequently, so a heap that is based on a lookaside buffer of a few fixed (preferably known) sizes will perform extremely well. Windows uses its low-fragmentation-heap for this very reason.

On the other hand, stack-based allocation in a 32-bit address range is fraught with peril if you have too many threads. Stacks need a contiguous memory range, so the more threads you have, the more virtual address space you need for them to run without a stack overflow. This (for now) isn't a problem with 64-bit programs, but it can certainly wreak havoc in long-running programs with lots of threads. Running out of virtual address space due to fragmentation is always a pain to deal with.


Stack allocation is a couple of instructions, whereas the fastest RTOS heap allocator known to me (TLSF) uses on average on the order of 150 instructions. Also, stack allocations don't require a lock because they use thread-local storage, which is another huge performance win. So stack allocations can be two to three orders of magnitude faster, depending on how heavily multithreaded your environment is.

In general, heap allocation is your last resort if you care about performance. A viable in-between option can be a fixed pool allocator, which is also only a couple of instructions and has very little per-allocation overhead, so it's great for small fixed-size objects. On the downside, it only works with fixed-size objects, is not inherently thread safe, and has block fragmentation problems.
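Such a fixed pool allocator can be sketched as an intrusive free list, where each free block stores the pointer to the next one inside itself (again just an illustration; as noted, it is not thread safe):

```cpp
#include <cassert>
#include <cstddef>

// Fixed-size pool: free blocks form an intrusive singly linked list, so
// alloc and free are each just a couple of pointer moves.
template <std::size_t BlockSize, std::size_t Count>
class FixedPool {
    static_assert(BlockSize >= sizeof(void*), "block must hold a pointer");
public:
    FixedPool() {
        for (std::size_t i = 0; i < Count; ++i)
            free_block(storage_ + i * BlockSize);
    }
    void* alloc() {
        if (head_ == nullptr) return nullptr;   // pool exhausted
        void* p = head_;
        head_ = *static_cast<void**>(head_);    // pop the free list
        return p;
    }
    void free_block(void* p) {
        *static_cast<void**>(p) = head_;        // push onto the free list
        head_ = p;
    }
private:
    alignas(void*) unsigned char storage_[BlockSize * Count];
    void* head_ = nullptr;
};
```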


Stack allocation will generally be much faster, as others have said.

However, if your objects are expensive to copy, allocating on the stack may lead to a huge performance hit later when you use the objects, if you aren't careful.

For example, if you allocate something on the stack and then later put it into a container, it would have been better to allocate on the heap and store the pointer in the container (e.g. with a std::shared_ptr<>). The same thing is true if you are passing or returning objects by value, and other similar scenarios.
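As a small sketch of that pattern (`Big` is a made-up stand-in for an expensive-to-copy type): pushing a shared_ptr copies a pointer and bumps a reference count, never the payload.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Illustrative expensive-to-copy type.
struct Big {
    std::string data;
    explicit Big(std::string d) : data(std::move(d)) {}
};

// Heap-allocate once, then let the container share ownership of the object.
std::shared_ptr<Big> make_and_store(std::vector<std::shared_ptr<Big>>& c) {
    auto b = std::make_shared<Big>(std::string(1 << 20, 'x'));
    c.push_back(b);  // cheap: copies the pointer, not the 1 MiB payload
    return b;
}
```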

The point is that, although stack allocation is usually better than heap allocation in many cases, sometimes if you go out of your way to stack-allocate when it doesn't best fit the model of computation, it can cause more problems than it solves.


Remark that the considerations are typically not about speed and performance when choosing stack versus heap allocation. The stack acts like a stack, which means it is well suited for pushing blocks and popping them again, last in, first out. Execution of procedures is also stack-like, last procedure entered is first to be exited. In most programming languages, all the variables needed in a procedure will only be visible during the procedure's execution, thus they are pushed upon entering a procedure and popped off the stack upon exit or return.

Now for an example where the stack cannot be used:

Proc P
{
  pointer x;
  Proc S
  {
    pointer y;
    y = allocate_some_data();
    x = y;
  }
}

If you allocate some memory in procedure S and put it on the stack and then exit S, the allocated data will be popped off the stack. But the variable x in P also pointed to that data, so x is now pointing to some place underneath the stack pointer (assume stack grows downwards) with an unknown content. The content might still be there if the stack pointer is just moved up without clearing the data beneath it, but if you start allocating new data on the stack, the pointer x might actually point to that new data instead.


I would like to point out that, in the code actually generated by GCC (and I remember VS as well), stack allocation doesn't need to have any overhead.

Take the following function:

  int f(int i)
  {
      if (i > 0)
      {   
          int array[1000];
      }   
  }

Here's the generated code:

  __Z1fi:
  Leh_func_begin1:
      pushq   %rbp
  Ltmp0:
      movq    %rsp, %rbp
  Ltmp1:
      subq    $3880, %rsp <--- here we have the array allocated, even if the branch isn't taken
  Ltmp2:
      movl    %edi, -4(%rbp)
      movl    -8(%rbp), %eax
      addq    $3880, %rsp
      popq    %rbp
      ret 
  Leh_func_end1:

So however many local variables you have (even inside an if or a switch), it's only the 3880 that changes to another value. Unless you have no local variables at all, this instruction has to be executed anyway. So allocating local variables has no extra overhead.


class Foo {
public:
    Foo(int a) {

    }
};
int func() {
    int a1, a2;
    std::cin >> a1;
    std::cin >> a2;

    Foo f1(a1);
    __asm push a1;
    __asm lea ecx, [this];
    __asm call Foo::Foo(int);

    Foo* f2 = new Foo(a2);
    __asm push sizeof(Foo);
    __asm call operator new;//there's a lot instruction here(depends on system)
    __asm push a2;
    __asm call Foo::Foo(int);

    delete f2;
}

It would look like this in asm. When you're inside func, f1 and the pointer f2 have both been allocated on the stack (automatic storage). And by the way, Foo f1(a1) has no instruction effect on the stack pointer (esp): it has already been allocated. If func wants to get at the member f1, the instructions are something like: lea ecx, [ebp+f1]; call Foo::SomeFunc(). Another thing: the stack allocation may make someone think the memory is something like FIFO; the FIFO-like behavior only happens when you enter some function. If you are already inside the function and allocate something like int i = 0, no push happens.


Concerns specific to the C++ language

First of all, there is no so-called "stack" or "heap" allocation mandated by C++. If you are talking about automatic objects in block scopes, they are not even "allocated". (By the way, automatic storage duration in C is definitely NOT the same as "allocated"; the latter is "dynamic" in C++ parlance.) Dynamically allocated memory is on the free store, not necessarily on "the heap", though the latter is often the (default) implementation.

Although as per the abstract machine semantic rules, automatic objects still occupy memory, a conforming C++ implementation is allowed to ignore this fact when it can prove this does not matter (when it does not change the observable behavior of the program). This permission is granted by the as-if rule in ISO C++, which is also the general clause enabling the usual optimizations (and there is also an almost same rule in ISO C). Besides the as-if rule, ISO C++ also has copy elision rules to allow omission of specific creations of objects. The constructor and destructor calls involved are thereby omitted. As a result, the automatic objects (if any) in these constructors and destructors are also eliminated, compared to naive abstract semantics implied by the source code.

On the other hand, free-store allocation is definitely "allocation" by design. Under ISO C++ rules, such an allocation can be achieved by a call to an allocation function. However, since ISO C++14 there is a new (non-as-if) rule allowing global allocation function (i.e. ::operator new) calls to be merged in specific cases. So parts of dynamic allocation operations can also be no-ops, as in the case of automatic objects.

Allocation functions allocate resources of memory. Objects can be further allocated based on the allocation, using allocators. For automatic objects, they are directly presented - although the underlying memory can be accessed and used to provide memory for other objects (by placement new), this does not make much sense as a free store, because there is no way to move the resources elsewhere.

All other concerns are out of the scope of C++. Nevertheless, they can still be significant.

About implementations of C++

Since C++ does not expose reified activation records or some sort of first-class continuations (e.g. by the famous call/cc), there is no way to directly manipulate the activation record frames - where the implementation needs to place the automatic objects. Once there are no (non-portable) interoperations with the underlying implementation ("native" non-portable code, such as inline assembly code), an omission of the underlying allocation of the frames can be quite trivial. For example, when the called function is inlined, the frames can be effectively merged into others, so there is no way to show what the "allocation" is.

However, once interops are respected, things get complex. A typical implementation of C++ will expose the ability of interop on an ISA (instruction-set architecture) with some calling conventions as the binary boundary shared with the native (ISA-level machine) code. This would be explicitly costly, notably when maintaining the stack pointer, which is often directly held by an ISA-level register (with probably specific machine instructions to access it). The stack pointer indicates the boundary of the top frame of the (currently active) function call. When a function call is entered, a new frame is needed and the stack pointer is added or subtracted (depending on the convention of the ISA) by a value not less than the required frame size. The frame is then said to be allocated after these operations on the stack pointer. Parameters of functions may be passed onto the stack frame as well, depending on the calling convention used for the call. The frame can hold the memory of automatic objects (probably including the parameters) specified by the C++ source code. In the sense of such implementations, these objects are "allocated". When the control exits the function call, the frame is no longer needed; it is usually released by restoring the stack pointer back to the state before the call (saved previously according to the calling convention). This can be viewed as "deallocation". These operations make the activation record effectively a LIFO data structure, so it is often called "the (call) stack". The stack pointer effectively indicates the top position of the stack.

Because most C++ implementations (particularly the ones targeting ISA-level native code and using the assembly language as its immediate output) use similar strategies like this, such a confusing "allocation" scheme is popular. Such allocations (as well as deallocations) do spend machine cycles, and it can be expensive when the (non-optimized) calls occur frequently, even though modern CPU microarchitectures can have complex optimizations implemented by hardware for the common code pattern (like using a stack engine in implementing PUSH/POP instructions).

But anyway, in general, it is true that the cost of stack frame allocation is significantly less than a call to an allocation function operating on the free store (unless it is totally optimized away), which itself can have hundreds of (if not millions of :-) operations to maintain the stack pointer and other states. Allocation functions are typically based on an API provided by the hosted environment (e.g. a runtime provided by the OS). Different from the purpose of holding automatic objects for function calls, such allocations are general-purpose, so they will not have a frame structure like a stack. Traditionally, they allocate space from the pool storage called the heap (or several heaps). Different from the "stack", the concept "heap" here does not indicate the data structure being used; it is derived from early language implementations decades ago. (BTW, the call stack is usually allocated with fixed or user-specified size from the heap by the environment at program/thread startup.) The nature of the use cases makes allocations and deallocations from a heap far more complicated (than pushing/popping of stack frames), and hardly possible to be directly optimized by hardware.

Effects on memory access

The usual stack allocation always puts the new frame on the top, so it has a quite good locality. This is friendly to the cache. OTOH, memory allocated randomly in the free store has no such property. Since ISO C++17, there are pool resource templates provided by <memory_resource>. The direct purpose of such an interface is to allow the results of consecutive allocations being close together in memory. This acknowledges the fact that this strategy is generally good for performance with contemporary implementations, e.g. being friendly to cache in modern architectures. This is about the performance of access rather than allocation, though.
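For illustration, a sketch using std::pmr::monotonic_buffer_resource from <memory_resource> (C++17): it hands out consecutive chunks from a local buffer, so successive allocations land next to each other.

```cpp
#include <cassert>
#include <cstddef>
#include <memory_resource>
#include <vector>

// Sum 1..10 using a vector whose allocations come from a stack-local
// buffer: the monotonic resource bumps a pointer through `buffer`,
// giving consecutive, cache-friendly allocations.
int sum_with_local_buffer() {
    std::byte buffer[1024];
    std::pmr::monotonic_buffer_resource arena(buffer, sizeof buffer);
    std::pmr::vector<int> v(&arena);  // allocates inside `buffer` first
    for (int i = 1; i <= 10; ++i)
        v.push_back(i);
    int sum = 0;
    for (int x : v)
        sum += x;
    return sum;
}
```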

Concurrency

Expectation of concurrent access to memory can have different effects between the stack and heaps. A call stack is usually exclusively owned by one thread of execution in a typical C++ implementation. OTOH, heaps are often shared among the threads in a process. For such heaps, the allocation and deallocation functions have to protect the shared internal administrative data structure from the data race. As a result, heap allocations and deallocations may have additional overhead due to internal synchronization operations.

Space efficiency

Heaps may suffer from internal memory fragmentation due to the nature of the use cases and the internal data structures, while the stack does not. This has no direct impact on the performance of memory allocation, but in a system with virtual memory, low space efficiency may degrade the overall performance of memory access. This is particularly awful when an HDD is used as swap for physical memory; it can cause quite long latency - sometimes billions of cycles.

Limitations of stack allocation

Although stack allocation often wins over heap allocation in performance in reality, that certainly does not mean stack allocation can always replace heap allocation.

First, there is no way to allocate space on the stack with a size specified at runtime in a portable way in ISO C++. There are extensions provided by implementations, like alloca and G++'s VLAs (variable-length arrays), but there are reasons to avoid them. (IIRC, the Linux source removed its uses of VLAs recently.) (Also note that ISO C99 does mandate VLAs, but ISO C11 makes the support optional.)

Second, there is no reliable and portable way to detect stack space exhaustion. This is often called stack overflow (hmm, the etymology of this site), but probably more accurately, stack overrun. In reality, this often causes invalid memory access, and the state of the program is then corrupted (... or maybe worse, a security hole). In fact, ISO C++ has no concept of "the stack" and makes it undefined behavior when the resource is exhausted. Be cautious about how much room should be left for automatic objects.

If you run out of stack space, you have too many objects allocated on the stack, which can be caused by too many active function calls or by improper use of automatic objects. Such cases may indicate the existence of a bug, e.g. a recursive function call without a correct exit condition.

Nevertheless, deep recursive calls are sometimes desired. In implementations of languages requiring support of unbound active calls (where the call depth is limited only by total memory), it is impossible to use the (contemporary) native call stack directly as the target language activation record like typical C++ implementations do. To work around the problem, alternative ways of constructing activation records are needed. For example, SML/NJ explicitly allocates frames on the heap and uses cactus stacks. The complicated allocation of such activation record frames is usually not as fast as call stack frames. However, if such languages are implemented further with the guarantee of proper tail recursion, direct stack allocation in the object language (that is, the "object" of the language is not stored as references, but as native primitive values which can be one-to-one mapped to unshared C++ objects) is even more complicated, with more performance penalty in general. When using C++ to implement such languages, it is difficult to estimate the performance impacts.


Naturally, stack allocation is faster. With heap allocation, the allocator has to find free memory somewhere. With stack allocation, the compiler does it for you by simply giving your function a bigger stack frame, which means the allocation costs no time at all. (I'm assuming you're not using alloca or anything to allocate a dynamic amount of stack space, but even then it's very fast.)

However, you have to be wary of hidden dynamic allocation. For example:

void some_func()
{
    std::vector<int> my_vector(0x1000);
    // Do stuff with the vector...
}
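For comparison, a sketch with std::array (the function name is made up for illustration): its elements live inside the object itself, so a stack-allocated instance really does keep its 4 KiB on the stack, with no hidden heap allocation.

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// std::array stores its 0x1000 ints inline, so when this local lives on
// the stack, the whole 4 KiB does too -- unlike std::vector, which puts
// its internal array on the heap.
int fill_and_sum_first(std::size_t n) {
    std::array<int, 0x1000> arr{};  // zero-initialized, on the stack here
    for (std::size_t i = 0; i < n; ++i)
        arr[i] = static_cast<int>(i);
    int sum = 0;
    for (std::size_t i = 0; i < n; ++i)
        sum += arr[i];
    return sum;
}
```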

您可能认为这会在堆栈上分配4 KiB,但您错了。它在堆栈上分配vector实例,但该vector实例又在堆上分配它的4 KiB,因为vector总是在堆上分配它的内部数组(至少除非您指定了一个自定义分配器,这里我不会深入讨论)。如果您希望使用类似stl的容器在堆栈上进行分配,则可能需要std::array或boost::static_vector(由外部boost库提供)。