There are already a lot of performance questions on this site, but it strikes me that almost all of them are very specific and fairly narrow. And almost all of them repeat the advice to avoid premature optimization.

Let's assume:

- The code is already working correctly.
- The algorithm chosen is already optimal for the circumstances of the problem.
- The code has been measured, and the offending routines have been isolated.
- All attempts to optimize will also be measured, to ensure they do not make matters worse.

What I am looking for here are strategies and tricks for squeezing out the last few percent in a critical algorithm, when there is nothing else left to do but whatever it takes.

Ideally, try to keep answers language-agnostic, and indicate any downsides of the suggested strategies where applicable.

I'll add a reply with my own initial suggestions, and look forward to whatever else the Stack Overflow community can come up with.


Current answer

The last few % is a very CPU- and application-dependent thing....

Cache architectures differ: some chips have on-chip RAM you can map directly, ARM cores (sometimes) have a vector unit, and the SH4 has a useful matrix opcode. Is there a GPU? Maybe a shader is the way to go. TMS320 DSPs are very sensitive to branches inside loops, so separate your loops and move conditions outside them where possible (a minimal sketch of that follows).
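To illustrate the "move conditions outside the loop" point, here is a minimal, hedged sketch in C++ (function names and the scale/copy example are purely illustrative, not from any particular DSP codebase):

```cpp
#include <cstddef>

// Branch inside the hot loop: on branch-sensitive cores (e.g. TMS320-class DSPs)
// the per-iteration test can cost as much as the loop body itself.
void scale_or_copy_branchy(float* dst, const float* src, std::size_t n,
                           bool scale, float k) {
    for (std::size_t i = 0; i < n; ++i) {
        if (scale)                 // tested n times, though it never changes
            dst[i] = src[i] * k;
        else
            dst[i] = src[i];
    }
}

// Loop unswitched: the invariant condition is hoisted out, leaving two
// branch-free loops the compiler can pipeline or vectorize independently.
void scale_or_copy_unswitched(float* dst, const float* src, std::size_t n,
                              bool scale, float k) {
    if (scale) {
        for (std::size_t i = 0; i < n; ++i) dst[i] = src[i] * k;
    } else {
        for (std::size_t i = 0; i < n; ++i) dst[i] = src[i];
    }
}
```

Note that optimizing compilers often perform this transformation (loop unswitching) themselves at higher optimization levels, so check the generated code and measure before rewriting by hand.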

The list goes on.... But these sorts of things really are the last resort...

Build for x86 and run Valgrind/Cachegrind against the code for proper performance profiling. Or Texas Instruments' CCStudio has a sweet profiler. Then you'll know where to focus...

Other answers

If better hardware is an option, then definitely go for it. Otherwise:

- Check you are using the best compiler and linker options.
- If the hotspot routine is in a different library from its frequent caller, consider moving or cloning it into the caller's module. This eliminates some of the call overhead and may improve cache hits (cf. how AIX links strcpy() statically into separately linked shared objects). It could of course also decrease cache hits, which is why you measure.
- See if there is any possibility of using a specialized version of the hotspot routine (see the sketch after this list). The downside is more than one version to maintain.
- Look at the assembler. If you think it could be better, consider why the compiler did not figure this out, and how you could help the compiler.
- Consider: are you really using the best algorithm? Is it the best algorithm for your input size?
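A hedged sketch of the "specialized version of the hotspot routine" idea (the key-copy example and the 16-byte size are assumptions chosen for illustration):

```cpp
#include <cstddef>
#include <cstring>

// Generic hotspot routine: copies a key of arbitrary length.
void copy_key(void* dst, const void* src, std::size_t len) {
    std::memcpy(dst, src, len);
}

// Specialized variant for a caller that always passes 16-byte keys. The
// constant size lets the compiler inline the copy as a couple of wide moves,
// removing the call and the length dispatch. Downside: two versions to keep in sync.
inline void copy_key16(void* dst, const void* src) {
    std::memcpy(dst, src, 16);
}
```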

Throw more hardware at it!

You should probably consider the "Google perspective", i.e. determine how your application can become largely parallelized and concurrent, which inevitably also means, at some point, looking into distributing your application across different machines and networks, so that it can ideally scale almost linearly with the hardware you throw at it.

On the other hand, the Google folks are also known for throwing lots of manpower and resources at problems in the projects, tools and infrastructure they use - for example, whole-program optimization for gcc by having a dedicated team of engineers hacking gcc internals in order to prepare it for Google-typical use-case scenarios.

Similarly, profiling an application no longer means simply profiling the program code, but also all the systems and infrastructure around it (think networks, switches, servers, RAID arrays), in order to identify redundancies and optimization potential from a system point of view.

First of all, as mentioned in several earlier answers, learn what is actually hurting your performance - is it memory, processor, network, database, or something else? Depending on that...

- ...if it's memory: find one of the books Knuth wrote long ago, in "The Art of Computer Programming" series. Most likely it's the one about sorting and searching - if my memory is wrong, you'll have to find the one in which he talks about dealing with slow tape storage. Mentally translate his memory/tape pair into your cache/main-memory pair (or your L1/L2 cache pair). Study all the tricks he describes; if you don't find something that solves your problem, hire a professional computer scientist to conduct proper research. If your memory issue happens to be with an FFT (cache misses at bit-reversed indexes when doing radix-2 butterflies), then don't hire a scientist - instead, manually optimize the passes one by one until you either win or hit a dead end. You mentioned squeezing out up to the last few percent, right? If it really is just a few, you'll most likely win.
- ...if it's the processor: switch to assembly language. Study the processor specification - what takes ticks, VLIW, SIMD. Function calls are most likely replaceable tick-eaters. Learn loop transformations - pipelining, unrolling. Multiplies and divisions might be replaceable with, or interpolated via, bit shifts (multiplies by small integers might be replaceable with additions). Try tricks with shorter data - if you're lucky, one 64-bit instruction might turn out to be replaceable with two on 32 bits, or even four on 16 or eight on 8 bits, go figure. Also try longer data - e.g. your float calculations might turn out slower than double ones on a particular processor. If you have trigonometric code, fight it with precomputed tables; also keep in mind that the sine of a small value can be replaced by that value itself if the loss of precision is within allowed limits. (A sketch of two of these tricks follows this list.)
- ...if it's the network: think about compressing the data you pass over it. Replace XML transfers with binary. Study the protocols. Try UDP instead of TCP if you can somehow handle data loss.
- ...if it's the database: well, go to any database forum and ask for advice - in-memory data grids, optimizing the query plan, etc. etc. etc.
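A hedged sketch of two of the processor-side tricks above - shifting instead of multiplying/dividing by powers of two, and a precomputed sine table. All names, the table size, and the accuracy trade-off are illustrative assumptions:

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <cstdint>

// Strength reduction: multiply/divide by a power of two becomes a shift.
// (Optimizing compilers usually do this already; check the assembly first.)
inline std::uint32_t times8(std::uint32_t x) { return x << 3; }  // x * 8
inline std::uint32_t div16(std::uint32_t x)  { return x >> 4; }  // x / 16 (unsigned)

// Precomputed sine table: trade a little memory and accuracy for repeated
// libm calls. 1024 entries over one period; table resolution limits precision.
class SineTable {
    static constexpr double kTwoPi = 6.283185307179586;
    std::array<float, 1024> t_{};
public:
    SineTable() {
        for (std::size_t i = 0; i < t_.size(); ++i)
            t_[i] = static_cast<float>(std::sin(kTwoPi * i / t_.size()));
    }
    float operator()(double angle) const {             // angle in radians
        double turns = angle / kTwoPi;
        double frac  = turns - std::floor(turns);       // wrap into [0, 1)
        auto idx = static_cast<std::size_t>(frac * t_.size()) % t_.size();
        return t_[idx];
    }
};
```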

HTH:)

No blanket statement like that is possible; it depends on the problem domain. Some possibilities:

Since you don't say outright that your application is 100% calculation:

Search for calls that block (database, network, hard disk, display updates), and isolate them and/or put them on threads (a sketch follows).
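A minimal sketch of moving a blocking call onto another thread so the compute path keeps running; `fetch_from_database` is a hypothetical placeholder whose sleep only simulates latency:

```cpp
#include <chrono>
#include <future>
#include <string>
#include <thread>

// Stand-in for a real blocking call (database / network / disk).
std::string fetch_from_database(int id) {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return "row-" + std::to_string(id);
}

std::string process(int id) {
    // Launch the blocking call on another thread...
    auto pending = std::async(std::launch::async, fetch_from_database, id);

    // ...keep doing CPU work here that does not need the result yet...

    // ...and only block when the result is actually required.
    return pending.get();
}
```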

If you use a database, and it happens to be Microsoft SQL Server:

Investigate the nolock and rowlock directives. (There are some discussions about them on this forum.)

If your app is purely computational, you can look at my question about cache optimization for rotating large images. The increase in speed astonished me.

It is a long shot, but it might give you an idea, especially if your problem is in the imaging domain: Rotating bitmaps in code
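Not the code from that question, but a hedged sketch of the usual cache trick for this kind of problem: process the bitmap in small tiles so source reads and destination writes both stay cache-resident. The tile size is a tuning assumption; measure on your hardware:

```cpp
#include <cstddef>
#include <vector>

// Rotate a w x h 8-bit image 90 degrees clockwise, working tile by tile.
// 'tile' is a tuning knob; 32-64 is a common starting point.
void rotate90_tiled(const std::vector<unsigned char>& src,
                    std::vector<unsigned char>& dst,
                    std::size_t w, std::size_t h, std::size_t tile = 64) {
    dst.resize(w * h);                       // rotated image is h wide, w tall
    for (std::size_t by = 0; by < h; by += tile) {
        for (std::size_t bx = 0; bx < w; bx += tile) {
            const std::size_t ymax = (by + tile < h) ? by + tile : h;
            const std::size_t xmax = (bx + tile < w) ? bx + tile : w;
            for (std::size_t y = by; y < ymax; ++y)
                for (std::size_t x = bx; x < xmax; ++x)
                    // source (x, y) maps to (h - 1 - y, x) in the rotated
                    // image, whose row length is h.
                    dst[x * h + (h - 1 - y)] = src[y * w + x];
        }
    }
}
```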

Another one is to avoid dynamic memory allocation as much as possible. Allocate many structs at once and release them all at once (a sketch follows).
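A hedged sketch of the "allocate in bulk, free in bulk" idea as a tiny bump-pointer arena; the class and its limits (no per-object free, no destructors, trivially destructible types only) are illustrative assumptions, not a production allocator:

```cpp
#include <cstddef>
#include <new>
#include <vector>

// Minimal bump-pointer arena: one big allocation up front, one release at the end.
class Arena {
    std::vector<unsigned char> buf_;
    std::size_t used_ = 0;
public:
    explicit Arena(std::size_t bytes) : buf_(bytes) {}

    void* allocate(std::size_t size, std::size_t align) {
        std::size_t p = (used_ + align - 1) & ~(align - 1);  // align the offset
        if (p + size > buf_.size()) throw std::bad_alloc{};
        used_ = p + size;
        return buf_.data() + p;
    }

    template <typename T>
    T* make() { return new (allocate(sizeof(T), alignof(T))) T{}; }

    void reset() { used_ = 0; }   // release everything in O(1)
};

// Usage sketch:
//   struct Node { int key; Node* next; };
//   Arena arena(1 << 20);
//   Node* n = arena.make<Node>();   // no malloc per node
//   ...build and traverse the structure...
//   arena.reset();                  // all nodes released at once
```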

Otherwise, identify your tightest loops and post them here, pseudo-code or not, together with some of the data structures.