There are already a lot of performance questions on this site, but it strikes me that almost all of them are very problem-specific and fairly narrow. And almost all of them repeat the advice to avoid premature optimization.

Let's assume:

- The code is already working correctly.
- The algorithms chosen are already optimal for the circumstances of the problem.
- The code has been measured, and the offending routines have been isolated.
- All attempts at optimization will also be measured, to ensure they do not make matters worse.

What I am looking for here are strategies and tricks for squeezing out the last few percent from a critical algorithm, when there is nothing else left to do but whatever it takes.

Ideally, try to make the answers language-agnostic, and point out any downsides of the suggested strategies where applicable.

I'll add a reply with my own initial suggestions, and look forward to whatever else the Stack Overflow community can come up with.


Current answer

Once you have reached the point where you are using efficient algorithms, the question becomes what you need more of: speed or memory.

- Use caching to "pay" in memory for more speed, or use recomputation to reduce the memory footprint. (See the memoization sketch below.)
- If possible (and more cost-effective), throw hardware at the problem: a faster CPU, more memory, or a faster disk may solve the problem more cheaply than trying to code around it.
- Use parallelization where possible: run parts of the code on multiple threads.
- Use the right tool for the job. Some programming languages produce more efficient code; managed code (e.g. Java/.NET) speeds up development, but native languages generally produce faster-running code.
- Micro-optimize, but only where applicable: hand-optimized assembly can speed up small pieces of code, and using SSE/vector optimizations in the right places can greatly increase performance.
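
As a concrete illustration of the cache-for-speed trade, here is a minimal memoization sketch. The function names and the "expensive" computation are made up for illustration only:

```cpp
#include <cstdint>
#include <unordered_map>

// Stand-in for real work that is costly to recompute.
std::uint64_t expensive(std::uint64_t n) {
    std::uint64_t acc = 0;
    for (std::uint64_t i = 0; i < 1000000; ++i)
        acc += (n * i) % 1000003;
    return acc;
}

// Pay memory (one map entry per distinct input) to skip recomputation.
// Note: the static cache is not thread-safe; fine for a single-threaded hot path.
std::uint64_t cached_expensive(std::uint64_t n) {
    static std::unordered_map<std::uint64_t, std::uint64_t> cache;
    auto it = cache.find(n);
    if (it != cache.end())
        return it->second;
    const std::uint64_t result = expensive(n);
    cache.emplace(n, result);
    return result;
}
```

The obvious downside is unbounded memory growth if the input domain is large; in that case a bounded cache (e.g. LRU) is the usual compromise.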

Other answers

These days the most important limiting factor is limited memory bandwidth. Multicores only make this worse, since the bandwidth is shared between cores. Also, the limited chip area devoted to caches is divided among the cores and threads, worsening the problem even further. Finally, the inter-chip signalling needed to keep the different caches coherent also increases with the number of cores, which adds yet another penalty.

These are the effects you need to manage. Sometimes through micro-managing your code, but sometimes through careful consideration and refactoring.

A lot of the replies here already mention cache-friendly code. There are at least two distinct flavors of this:

- Avoid memory fetch latencies.
- Lower memory bus pressure (bandwidth).

The first issue is about making your data access patterns more regular, so that the hardware prefetcher can work effectively. Avoid dynamic memory allocation, which spreads your data objects around in memory. Use linear containers instead of linked lists, hashes, and trees.
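
For instance, a contiguous container gives the prefetcher a regular stride to follow, while a node-based container forces dependent, scattered loads. A minimal sketch (function names are illustrative; measure on your own workload before concluding anything):

```cpp
#include <list>
#include <vector>

// Contiguous data: the prefetcher sees a regular stride and can stream the
// next cache lines in ahead of the loop.
double sum_vector(const std::vector<double>& v) {
    double s = 0.0;
    for (double x : v) s += x;
    return s;
}

// Node-based data: each step is a dependent load to a (potentially) scattered
// address, so a cache miss stalls every iteration that hits one.
double sum_list(const std::list<double>& l) {
    double s = 0.0;
    for (double x : l) s += x;
    return s;
}
```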

The second issue is about improving data reuse. Modify your algorithms to work on subsets of the data that fit in the available cache, and reuse that data as much as possible while it is still in the cache.
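
The classic form of this is cache blocking (loop tiling). A minimal sketch, assuming row-major n x n matrices and a BLOCK size you would tune against the actual cache; the caller is assumed to have zero-initialized c:

```cpp
#include <cstddef>

constexpr std::size_t BLOCK = 64;   // tuning assumption, not a universal value

// Work on BLOCK x BLOCK tiles so each tile of a, b, and c is reused many
// times while it is still resident in cache.
void matmul_tiled(const float* a, const float* b, float* c, std::size_t n) {
    for (std::size_t i0 = 0; i0 < n; i0 += BLOCK)
        for (std::size_t k0 = 0; k0 < n; k0 += BLOCK)
            for (std::size_t j0 = 0; j0 < n; j0 += BLOCK)
                for (std::size_t i = i0; i < i0 + BLOCK && i < n; ++i)
                    for (std::size_t k = k0; k < k0 + BLOCK && k < n; ++k) {
                        const float aik = a[i * n + k];
                        for (std::size_t j = j0; j < j0 + BLOCK && j < n; ++j)
                            c[i * n + j] += aik * b[k * n + j];
                    }
}
```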

Packing data more tightly, and making sure you use all of the data in each cache line touched by your hot loops, will help avoid these other effects and lets more useful data fit in the cache.
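
A minimal sketch of what that packing and hot/cold splitting can look like; the field names are made up for illustration:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Before: cold fields ride along in every cache line the hot loop touches,
// even though the loop only reads x, y, and alive.
struct ParticleFat {
    double       x, y;
    std::string  name;         // cold data
    bool         alive;
    std::uint8_t debug_flags;  // cold data
};

// After: the hot loop's working set holds only what it actually reads, so
// each 64-byte cache line carries several useful particles.
struct ParticleHot {
    float x, y;     // narrowed from double, if the precision allows it
    bool  alive;
};

struct ParticleCold {
    std::string  name;
    std::uint8_t debug_flags;
};

// Parallel arrays keep the hot data contiguous; cold data shares the index.
struct Particles {
    std::vector<ParticleHot>  hot;
    std::vector<ParticleCold> cold;
};
```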

Divide and conquer

If the data set being processed is too large, loop over chunks of it. If you've written the code right, the implementation should be easy. If you have a monolithic program, now you know better.
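
A minimal sketch of the chunking idea: instead of making two full passes over a huge array (each of which evicts the other's data from cache), do both passes per cache-sized chunk. CHUNK and the two passes are illustrative assumptions:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

constexpr std::size_t CHUNK = 16 * 1024;   // tune to fit comfortably in cache

void normalize_and_clamp(std::vector<float>& data, float scale, float limit) {
    for (std::size_t base = 0; base < data.size(); base += CHUNK) {
        const std::size_t end = std::min(base + CHUNK, data.size());
        for (std::size_t i = base; i < end; ++i)   // pass 1 on this chunk
            data[i] *= scale;
        for (std::size_t i = base; i < end; ++i)   // pass 2 while still cached
            data[i] = std::min(data[i], limit);
    }
}
```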

I've spent most of my life in exactly this place. The broad strokes are to run your profiler and get it to record:

- Cache misses. The data cache is the #1 source of stalls in most programs. Improve the cache hit rate by reorganizing offending data structures to have better locality; pack structures and numerical types down to eliminate wasted bytes (and therefore wasted cache fetches); prefetch data wherever possible to reduce stalls.
- Load-hit-stores. Compiler assumptions about pointer aliasing, and cases where data is moved between disconnected register sets via memory, can cause a pathological behavior that clears the entire CPU pipeline on a load op. Find places where floats, vectors, and ints are being cast to one another and eliminate them. Use __restrict liberally to promise the compiler about aliasing.
- Microcoded operations. Most processors have some operations that cannot be pipelined, but instead run a tiny subroutine stored in ROM. Examples on the PowerPC are integer multiply, divide, and shift-by-variable-amount. The problem is that the entire pipeline stops dead while such an operation is executing. Try to eliminate these operations, or at least break them down into their constituent pipelined ops, so you get the benefit of superscalar dispatch on whatever the rest of your program is doing.
- Branch mispredicts. These too empty the pipeline. Find cases where the CPU is spending a lot of time refilling the pipe after a branch, and use branch hinting if available to get it to predict correctly more often. Or better yet, replace branches with conditional moves wherever possible, especially after floating-point operations, because their pipe is usually deeper and reading the condition flags after fcmp can cause a stall. (See the sketch after this list for __restrict and a branchless select.)
- Sequential floating-point ops. Make these SIMD.
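
A minimal sketch of two of the points above; the names are illustrative, and __restrict is a compiler extension (supported under that spelling by GCC, Clang, and MSVC). The ternary select is a form most compilers can turn into a conditional move rather than a mispredict-prone branch, but verify it in the generated assembly:

```cpp
#include <cstddef>

// __restrict promises the compiler that out and in never alias, removing the
// reload-after-store it would otherwise have to assume.
void clamp_scale(float* __restrict out,
                 const float* __restrict in,
                 std::size_t n, float limit, float scale) {
    for (std::size_t i = 0; i < n; ++i) {
        const float v = in[i] * scale;
        out[i] = (v > limit) ? limit : v;   // conditional-move candidate
    }
}
```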

One more thing I like to do:

Set your compiler to output assembly listings, and look at what it emits for the hot-spot functions in your code. All those clever optimizations that "a good compiler should be able to do for you automatically"? Chances are your actual compiler doesn't do them. I've seen GCC emit truly WTF code.
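
With GCC or Clang, one way to get such a listing is the -S flag; the exact invocation below and the function being inspected are just an illustrative sketch (-fverbose-asm and -masm=intel are optional conveniences on x86 targets):

```cpp
// Build with, e.g.:
//   g++ -O2 -S -fverbose-asm -masm=intel hot.cpp -o hot.s
// then read hot.s and check: did the loop vectorize? is there a surprising
// call, spill, or conversion in the middle of the hot loop?
#include <cstddef>

float dot(const float* a, const float* b, std::size_t n) {
    float s = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        s += a[i] * b[i];
    return s;
}
```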

If better hardware is an option, then definitely go for it. Otherwise:

- Check that you are using the best compiler and linker options.
- If the hotspot routine is in a different library from its frequent caller, consider moving or cloning it into the caller's module. This eliminates some of the call overhead and may improve cache hits (cf. how AIX links strcpy() statically into separately linked shared objects). It could of course also decrease cache hits, which is why you measure.
- See if there is any possibility of using a specialized version of the hotspot routine. The downside is having more than one version to maintain. (A sketch follows this list.)
- Look at the assembler. If you think it could be better, consider why the compiler did not figure that out, and how you could help the compiler.
- Consider: are you really using the best algorithm? Is it the best algorithm for your input size?
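
A minimal sketch of what "a specialized version of the hotspot routine" can mean: a general copy with a runtime length versus a fixed-size variant the compiler can fully inline and unroll at call sites that always pass the same size. The names are made up, and the cost is exactly the one stated above, another version to keep correct:

```cpp
#include <cstddef>
#include <cstring>

void copy_general(void* dst, const void* src, std::size_t n) {
    std::memcpy(dst, src, n);      // size known only at run time
}

template <std::size_t N>
void copy_fixed(void* dst, const void* src) {
    std::memcpy(dst, src, N);      // size known at compile time:
}                                  // typically inlined and unrolled

// Usage at a hot call site that always copies a 16-byte key:
//   copy_fixed<16>(dst, src);
```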

The last few percent are a very CPU- and application-dependent thing....

Cache architectures differ; some chips have on-chip RAM you can map directly; ARMs (sometimes) have a vector unit; the SH4 has a useful matrix opcode. Is there a GPU? Maybe a shader is the way to go. TMS320s are very sensitive to branches within loops (so split up loops and move the conditions outside the loop if at all possible).
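
Hoisting a condition out of a loop (loop unswitching) is the kind of rewrite such branch-sensitive cores reward. A minimal sketch with made-up names; many compilers will do this themselves, so check the assembly before hand-rolling it:

```cpp
#include <cstddef>

// Before: the branch is evaluated on every iteration.
void scale_or_copy_branchy(float* dst, const float* src, std::size_t n,
                           bool do_scale, float k) {
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = do_scale ? src[i] * k : src[i];
}

// After: one branch outside, two branch-free loops inside.
void scale_or_copy_unswitched(float* dst, const float* src, std::size_t n,
                              bool do_scale, float k) {
    if (do_scale) {
        for (std::size_t i = 0; i < n; ++i) dst[i] = src[i] * k;
    } else {
        for (std::size_t i = 0; i < n; ++i) dst[i] = src[i];
    }
}
```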

The list goes on.... but these sorts of things really are the last resort...

Build for x86 and run Valgrind/Cachegrind against the code for proper performance profiling. Or Texas Instruments' CCStudio has a sweet profiler. Then you'll know where to focus...