This question may sound fairly simple, but it's a debate I've been having with another developer I work with.

I was being careful to allocate on the stack wherever I could instead of allocating on the heap. He was talking to me, watching over my shoulder, and commented that it wasn't necessary because the two perform the same.

I was always under the impression that growing the stack is constant time, while the performance of heap allocation depends on the current complexity of the heap, both for allocation (finding a hole of the proper size) and for deallocation (collapsing holes to reduce fragmentation, which, if I'm not mistaken, many standard library implementations take time to do during deletes).

It strikes me that this is probably very compiler dependent. For this project in particular I'm using a Metrowerks compiler for the PPC architecture. Insight on this combination would be most helpful, but in general, how do things stand for GCC and MSVC++? Is heap allocation not as efficient as stack allocation? Is there no difference? Or is the difference so small that it becomes pointless micro-optimization?


Current answer

Honestly, it's trivial to write a program to compare the performance:

#include <ctime>
#include <iostream>

namespace {
    class empty { }; // even empty classes take up 1 byte of space, minimum
}

int main()
{
    std::clock_t start = std::clock();
    for (int i = 0; i < 100000; ++i)
        empty e;
    std::clock_t duration = std::clock() - start;
    std::cout << "stack allocation took " << duration << " clock ticks\n";
    start = std::clock();
    for (int i = 0; i < 100000; ++i) {
        empty* e = new empty;
        delete e;
    };
    duration = std::clock() - start;
    std::cout << "heap allocation took " << duration << " clock ticks\n";
}

It is said that a foolish consistency is the hobgoblin of little minds. Apparently optimizing compilers are the hobgoblins of many programmers' minds. This discussion used to be at the bottom of the answer, but people apparently can't be bothered to read that far, so I'm moving it up here to avoid getting questions I've already answered.

An optimizing compiler may notice that this code does nothing and may optimize it all away. It is the optimizer's job to do that, and fighting the optimizer is a fool's errand.

I would recommend compiling this code with optimization turned off, because there is no good way to fool every optimizer currently in use or that will be in use in the future.

Anybody who turns the optimizer on and then complains about fighting it should be subject to public ridicule.

If I cared about nanosecond precision I wouldn't use std::clock(). If I wanted to publish these results as a doctoral thesis I would do a larger study, and I would probably compare GCC, Tendra/Ten15, LLVM, Watcom, Borland, Visual C++, Digital Mars, ICC and other compilers. As it is, heap allocation takes hundreds of times longer than stack allocation, and I don't see anything useful in investigating the question any further.

The optimizer has a mission to get rid of the code I'm testing. I don't see any reason to tell the optimizer to run and then try to trick it into not actually optimizing. But if I saw value in doing that, I would do one or more of the following:

1. Add a data member to empty, and access that data member in the loop; but if I only ever read from the data member the optimizer can do constant folding and remove the loop; if I only ever write to the data member, the optimizer may skip all but the very last iteration of the loop. Additionally, the question wasn't "stack allocation and data access vs. heap allocation and data access."
2. Declare e volatile, but volatile is often compiled incorrectly (PDF).
3. Take the address of e inside the loop (and maybe assign it to a variable that is declared extern and defined in another file, as in the sketch after this list). But even in this case, the compiler may notice that -- on the stack at least -- e will always be allocated at the same memory address, and then do constant folding like in (1) above. I get all iterations of the loop, but the object is never actually allocated.
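For what it's worth, a minimal sketch of option 3 might look like the following, assuming a hypothetical second source file (say, escape.cpp containing just `void* sink;`) that defines the extern variable so the optimizer cannot see that the stored address is never read:

// benchmark.cpp -- sketch of option 3; escape.cpp is assumed to define sink
namespace {
    class empty { };
}

extern void* sink;   // defined in another translation unit, opaque to the optimizer

int main()
{
    for (int i = 0; i < 100000; ++i) {
        empty e;
        sink = &e;   // publishing the address keeps e from being removed outright,
                     // though the constant-folding caveat from (3) above still applies
    }
}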

Beyond the obvious, this test is flawed in that it measures both allocation and deallocation, and the original question didn't ask about deallocation. Of course variables allocated on the stack are automatically deallocated at the end of their scope, so not calling delete would (1) skew the numbers (stack deallocation is included in the numbers about stack allocation, so it's only fair to measure heap deallocation) and (2) cause a pretty bad memory leak, unless we keep a reference to the new pointer and call delete after we've got our time measurement.

On my machine, using g++ 3.4.4 on Windows, I get "0 clock ticks" for both stack and heap allocation for anything less than 100000 allocations, and even then I get "0 clock ticks" for stack allocation and "15 clock ticks" for heap allocation. When I measure 10,000,000 allocations, stack allocation takes 31 clock ticks and heap allocation takes 1562 clock ticks.


Yes, an optimizing compiler may elide creating the empty objects. If I understand correctly, it may even elide the whole first loop. When I bumped up the iterations to 10,000,000, stack allocation took 31 clock ticks and heap allocation took 1562 clock ticks. I think it's safe to say that without telling g++ to optimize the executable, g++ did not elide the constructors.


In the years since I wrote this, the preference on Stack Overflow has been to post performance from optimized builds. In general, I think that's correct. However, I still think it's silly to ask the compiler to optimize code when you actually don't want that code optimized. It strikes me as being very similar to paying extra for valet parking, but refusing to hand over the keys. In this particular case, I don't want the optimizer running.

Using a slightly modified version of the benchmark (to address the valid point that the original program didn't allocate something on the stack each time through the loop), and compiling without optimizations but linking to release libraries (to address the valid point that we don't want to include any slowdown caused by linking to debug libraries):

#include <cstdio>
#include <chrono>

namespace {
    void on_stack()
    {
        int i;
    }

    void on_heap()
    {
        int* i = new int;
        delete i;
    }
}

int main()
{
    auto begin = std::chrono::system_clock::now();
    for (int i = 0; i < 1000000000; ++i)
        on_stack();
    auto end = std::chrono::system_clock::now();

    std::printf("on_stack took %f seconds\n", std::chrono::duration<double>(end - begin).count());

    begin = std::chrono::system_clock::now();
    for (int i = 0; i < 1000000000; ++i)
        on_heap();
    end = std::chrono::system_clock::now();

    std::printf("on_heap took %f seconds\n", std::chrono::duration<double>(end - begin).count());
    return 0;
}

shows:

on_stack took 2.070003 seconds
on_heap took 57.980081 seconds

on my system, when compiled with the command line cl foo.cc /Od /MT /EHsc.

You may disagree with my approach to getting a non-optimized build. That's fine: feel free to modify the benchmark as much as you want. When I turn on optimization, I get:

on_stack took 0.000000 seconds
on_heap took 51.608723 seconds

Not because stack allocation is actually instantaneous, but because any half-decent compiler can notice that on_stack doesn't do anything useful and can be optimized away. GCC on my Linux laptop notices that on_heap doesn't do anything useful either, and optimizes it away as well:

on_stack took 0.000003 seconds
on_heap took 0.000002 seconds

Other answers

You can write a special heap allocator for specific sizes of objects that performs very well. However, the general heap allocator does not perform particularly well.
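As an illustration of what such a special allocator can look like, here is a minimal sketch of a fixed-size free-list pool (the name FixedPool and its interface are mine, not a standard API; a production pool would also handle alignment, growth and thread safety):

#include <cstddef>
#include <vector>

// Pre-allocates Count blocks of Size bytes in one slab and hands them out
// from a free list, so allocate() and deallocate() are constant time.
template <std::size_t Size, std::size_t Count>
class FixedPool {
public:
    FixedPool() : storage_(Size * Count) {
        free_list_.reserve(Count);
        for (std::size_t i = 0; i < Count; ++i)
            free_list_.push_back(&storage_[i * Size]);
    }

    void* allocate() {
        if (free_list_.empty())
            return 0;                     // pool exhausted; a real allocator would grow or throw
        void* block = free_list_.back();
        free_list_.pop_back();
        return block;
    }

    void deallocate(void* block) {
        free_list_.push_back(static_cast<char*>(block));
    }

private:
    std::vector<char> storage_;           // one contiguous slab for all blocks
    std::vector<char*> free_list_;        // blocks not currently handed out
};

A FixedPool<sizeof(Foo), 1024> could then back placement-new of Foo objects, which is roughly what "a special heap allocator for specific sizes of objects" amounts to.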

I also agree with Torbjörn Gyllebring about the expected lifetime of objects. Good point!

#include <iostream>

class Foo {
public:
    Foo(int a) {
    }
};

int func() {
    int a1, a2;
    std::cin >> a1;
    std::cin >> a2;

    Foo f1(a1);
    // roughly what the compiler emits:
    //   push a1
    //   lea ecx, [f1]        ; address of f1 becomes the this pointer
    //   call Foo::Foo(int)

    Foo* f2 = new Foo(a2);
    // roughly what the compiler emits:
    //   push sizeof(Foo)
    //   call operator new    ; there are a lot of instructions here (depends on the system)
    //   push a2
    //   call Foo::Foo(int)

    delete f2;
    return 0;
}

It would be like this in asm. When you're in func, f1 and the pointer f2 have already been allocated on the stack (automatic storage). By the way, Foo f1(a1) has no instruction effect on the stack pointer (esp); the space is already reserved. If func wants to access the member f1, its instructions are something like: lea ecx, [ebp+f1], call Foo::SomeFunc(). Another thing: stack allocation may make someone think the memory behaves like a FIFO; the pushing only happens when you enter a function. If you are already inside the function and allocate something like int i = 0, no push happens.

Remark that the considerations are typically not about speed and performance when choosing stack versus heap allocation. The stack acts like a stack, which means it is well suited for pushing blocks and popping them again, last in, first out. Execution of procedures is also stack-like, last procedure entered is first to be exited. In most programming languages, all the variables needed in a procedure will only be visible during the procedure's execution, thus they are pushed upon entering a procedure and popped off the stack upon exit or return.

Now for an example of when you cannot use the stack:

Proc P
{
  pointer x;
  Proc S
  {
    pointer y;
    y = allocate_some_data();
    x = y;
  }
}

If you allocate some memory in procedure S and put it on the stack and then exit S, the allocated data will be popped off the stack. But the variable x in P also pointed to that data, so x is now pointing to some place underneath the stack pointer (assume stack grows downwards) with an unknown content. The content might still be there if the stack pointer is just moved up without clearing the data beneath it, but if you start allocating new data on the stack, the pointer x might actually point to that new data instead.
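A rough C++ rendering of the same hazard, assuming the "allocation" is simply a local variable whose address escapes (the function name mirrors the pseudocode above and is purely illustrative):

// S "allocates" on its own stack frame and hands the address upward; the
// storage is popped when S returns, so the pointer dangles.
int* S()
{
    int y = 42;    // y = allocate_some_data(), but on the stack
    return &y;     // most compilers warn: address of local variable returned
}

int main()
{
    int* x = S();  // x now points into S's popped stack frame
    // Dereferencing x is undefined behavior: the old value may still be
    // there, or later calls may have reused that stack space for new data.
    return 0;
}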


I think the lifetime is crucial, along with whether the thing being allocated has to be constructed in a complex way. For example, in transaction-driven modeling you usually have to fill in and pass a transaction structure with a bunch of fields to operation functions. Look at the OSCI SystemC TLM-2.0 standard for an example.

Allocating these on the stack close to the call to the operation tends to cause enormous overhead, because the construction is expensive. The good approach is to allocate on the heap and reuse the transaction objects, either by pooling or with a simple policy like "this module only needs one transaction object ever".

This is many times faster than allocating the object on each operation call.

The reason is simply that the object has an expensive construction and a fairly long useful lifetime.
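A minimal sketch of the "this module only needs one transaction object ever" policy, assuming a hypothetical Transaction type with an expensive constructor (the names are mine, loosely modeled on the TLM-2.0 style, not actual SystemC API):

#include <string>
#include <vector>

// Hypothetical transaction with costly one-time setup (large payload buffer).
struct Transaction {
    std::vector<unsigned char> payload;
    std::string command;
    Transaction() : payload(4096) { }   // expensive construction, done exactly once
};

class Module {
public:
    void issue_read(unsigned address)
    {
        // Reuse the single long-lived transaction instead of constructing a
        // fresh one on every call; only the fields that change are rewritten.
        tx_.command = "READ";
        tx_.payload[0] = static_cast<unsigned char>(address);
        send(tx_);
    }

private:
    void send(Transaction&) { /* forward to the target module */ }

    Transaction tx_;   // constructed once, lives as long as the module
};

Whether the object is pooled or simply kept as a long-lived member like this, the point is the same: the expensive construction happens once, not on every operation call.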

I would say: try both and see what works best for you, because it really depends on the behavior of your code.