Honestly, writing a program to compare the performance is simple:
#include <ctime>
#include <iostream>

namespace {
class empty { }; // even empty classes take up 1 byte of space, minimum
}

int main()
{
    std::clock_t start = std::clock();
    for (int i = 0; i < 100000; ++i)
        empty e;
    std::clock_t duration = std::clock() - start;
    std::cout << "stack allocation took " << duration << " clock ticks\n";

    start = std::clock();
    for (int i = 0; i < 100000; ++i) {
        empty* e = new empty;
        delete e;
    }
    duration = std::clock() - start;
    std::cout << "heap allocation took " << duration << " clock ticks\n";
}
It has been said that a foolish consistency is the hobgoblin of little minds. Apparently, optimizing compilers are the hobgoblin of many programmers. This discussion used to be at the bottom of the answer, but people apparently don't care to read that far, so I'm moving it up here to avoid getting questions I've already answered.
An optimizing compiler may notice that this code does nothing and may optimize it all away. It is the optimizer's job to do that kind of thing, and fighting the optimizer is a fool's errand.
I would recommend compiling this code with optimization turned off, because there is no good way to fool every optimizer currently in use or that will be in use in the future.
Anybody who turns the optimizer on and then complains about fighting it should be subject to public ridicule.
If I cared about nanosecond precision I wouldn't use std::clock(). If I wanted to publish the results as a doctoral dissertation I would make a bigger deal about this, and I would probably compare GCC, Tendra/Ten15, LLVM, Watcom, Borland, Visual C++, Digital Mars, ICC, and other compilers. As it is, heap allocation takes hundreds of times longer than stack allocation, and I don't see anything useful in investigating the question any further.
The optimizer has a mission to get rid of the code I'm testing. I don't see any reason to tell the optimizer to run and then try to fool it into not actually optimizing. But if I saw value in doing that, I would do one or more of the following:
1. Add a data member to empty, and access that data member in the loop; but if I only ever read from the data member the optimizer can do constant folding and remove the loop; if I only ever write to the data member, the optimizer may skip all but the very last iteration of the loop. Additionally, the question wasn't "stack allocation and data access vs. heap allocation and data access."
2. Declare e volatile, but volatile is often compiled incorrectly (PDF).
3. Take the address of e inside the loop (and maybe assign it to a variable that is declared extern and defined in another file). But even in this case, the compiler may notice that -- on the stack at least -- e will always be allocated at the same memory address, and then do constant folding like in (1) above. I get all iterations of the loop, but the object is never actually allocated.
4. Beyond the obvious, this test is flawed in that it measures both allocation and deallocation, and the original question didn't ask about deallocation. Of course variables allocated on the stack are automatically deallocated at the end of their scope, so not calling delete would (1) skew the numbers (stack deallocation is included in the numbers about stack allocation, so it's only fair to measure heap deallocation) and (2) cause a pretty bad memory leak, unless we keep a reference to the new pointer and call delete after we've got our time measurement.
On my machine, using g++ 3.4.4 on Windows, I get "0 clock ticks" for both stack and heap allocation for anything less than 100000 allocations, and even then I get "0 clock ticks" for stack allocation and "15 clock ticks" for heap allocation. When I measure 10,000,000 allocations, stack allocation takes 31 clock ticks and heap allocation takes 1562 clock ticks.
And yes, an optimizing compiler may elide creating the empty objects. If I understand correctly, it may even elide the whole first loop. When I bumped up the iterations to 10,000,000, stack allocation took 31 clock ticks and heap allocation took 1562 clock ticks. I think it's safe to say that without telling g++ to optimize the executable, g++ did not elide the constructors.
In the years since I wrote this, the preference on Stack Overflow has been to post performance from optimized builds. In general, I think that's correct. However, I still think it's silly to ask the compiler to optimize code when you actually do not want that code optimized. It strikes me as being very similar to paying extra for valet parking but refusing to hand over the keys. In this particular case, I don't want the optimizer running.
Using a slightly modified version of the benchmark (to address the valid point that the original program didn't allocate something on the stack each time through the loop) and compiling without optimizations but linking to release libraries (to address the valid point that we don't want to include any slowdown caused by linking to debug libraries):
#include <cstdio>
#include <chrono>

namespace {
void on_stack()
{
    int i;
}

void on_heap()
{
    int* i = new int;
    delete i;
}
}

int main()
{
    auto begin = std::chrono::system_clock::now();
    for (int i = 0; i < 1000000000; ++i)
        on_stack();
    auto end = std::chrono::system_clock::now();
    std::printf("on_stack took %f seconds\n", std::chrono::duration<double>(end - begin).count());

    begin = std::chrono::system_clock::now();
    for (int i = 0; i < 1000000000; ++i)
        on_heap();
    end = std::chrono::system_clock::now();
    std::printf("on_heap took %f seconds\n", std::chrono::duration<double>(end - begin).count());
    return 0;
}
displays:
on_stack took 2.070003 seconds
on_heap took 57.980081 seconds
when compiled on my system with the command line cl foo.cc /Od /MT /EHsc.
You may disagree with my approach to getting a non-optimized build. That's fine: feel free to modify the benchmark all you want. When I turn on optimization, I get:
on_stack took 0.000000 seconds
on_heap took 51.608723 seconds
That's not because stack allocation is actually instantaneous, but because any half-decent compiler can notice that on_stack doesn't do anything useful and can be optimized away. GCC on my Linux laptop notices that on_heap doesn't do anything useful either, and optimizes it away as well:
on_stack took 0.000003 seconds
on_heap took 0.000002 seconds