There are a lot of performance questions on this site already, but nearly all are very problem-specific and fairly narrow. And almost all repeat the advice to avoid premature optimization.

Let's assume:

- the code is already working correctly
- the algorithms chosen are already optimal for the circumstances of the problem
- the code has been measured, and the offending routines have been isolated
- all attempts to optimize will also be measured to ensure they do not make matters worse

What I am looking for here are strategies and tricks to squeeze out the last few percent in a critical algorithm when there is nothing else left to do but whatever it takes.

Ideally, try to make the answers language-agnostic, and indicate any downsides to the suggested strategies where applicable.

I'll add a reply with my own initial suggestions, and look forward to whatever else the Stack Overflow community can come up with.


Throw more hardware at it!


It's hard to give a general answer to this question. It really depends on your problem domain and technical implementation. One general technique that is fairly language-neutral: identify code hotspots that cannot be eliminated, and hand-optimize the assembler code.


Suggestions:

- Pre-compute rather than re-calculate: for any loops or repeated calls that contain calculations with a relatively limited range of inputs, consider making a lookup (array or dictionary) that contains the result of that calculation for all values in the valid range of inputs, then use a simple lookup inside the algorithm instead (see the sketch after this list). Down-sides: if few of the pre-computed values are actually used this may make matters worse; the lookup may also take significant memory.
- Don't use library methods: most libraries need to be written to operate correctly under a broad range of scenarios, and perform null checks on parameters, etc. By re-implementing a method you may be able to strip out a lot of logic that does not apply in the exact circumstance you are using it in. Down-sides: writing additional code means more surface area for bugs.
- Do use library methods: to contradict myself, language libraries get written by people that are a lot smarter than you or me; odds are they did it better and faster. Do not implement it yourself unless you can actually make it faster (i.e.: always measure!).
- Cheat: in some cases, although an exact calculation may exist for your problem, you may not need 'exact'; sometimes an approximation may be 'good enough' and a lot faster in the deal. Ask yourself, does it really matter if the answer is out by 1%? 5%? even 10%? Down-sides: well... the answer won't be exact.
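As a sketch of the pre-compute idea (the table and function names here are invented for illustration), a C++ version might look like this:

    #include <array>
    #include <cmath>

    // Hypothetical hot path that needs sin() for an 8-bit angle index.
    // Precompute all 256 results once; each later call is a single array read.
    static const std::array<double, 256> kSinTable = [] {
        std::array<double, 256> t{};
        const double kTwoPi = 6.283185307179586;
        for (int i = 0; i < 256; ++i)
            t[i] = std::sin(i * kTwoPi / 256.0);
        return t;
    }();

    inline double fastSin(unsigned char angle) {
        return kSinTable[angle];  // lookup instead of a libm call
    }

The down-sides listed above still apply: the 2 KB table competes for cache with everything else, so as always, measure.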


- What hardware are you running on? Can you use platform-specific optimizations (like vectorization)?
- Can you get a better compiler? E.g. switch from GCC to Intel?
- Can you make your algorithm run in parallel?
- Can you reduce cache misses by reorganizing data?
- Can you disable asserts?
- Micro-optimize for your compiler and platform. In an if/else statement, put the most common case first.


- Inline routines (eliminating call/return and parameter pushing)
- Try to eliminate tests/switches with table lookups (if they're faster)
- Unroll loops (Duff's device) to the point where they just fit in the CPU cache
- Localize memory accesses so as not to blow your cache
- Localize related calculations if the optimizer isn't already doing it
- Eliminate loop invariants if the optimizer isn't already doing it (a sketch of the last two points follows)
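A minimal sketch of hoisting a loop invariant and unrolling by four, for cases where the optimizer hasn't already done it (the function and names are made up for the example):

    // Scale an array by gain*bias: the product is invariant, so compute it once.
    void scale(float *dst, const float *src, int n, float gain, float bias) {
        const float k = gain * bias;        // invariant hoisted out of the loop
        int i = 0;
        for (; i + 4 <= n; i += 4) {        // unrolled body: fewer branch tests
            dst[i]     = src[i]     * k;
            dst[i + 1] = src[i + 1] * k;
            dst[i + 2] = src[i + 2] * k;
            dst[i + 3] = src[i + 3] * k;
        }
        for (; i < n; ++i)                  // remainder loop for leftover elements
            dst[i] = src[i] * k;
    }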


Hard to say. It depends on what the code looks like. If we can assume that the code already exists, then we can simply look at it and figure out from that how to optimize it.

Better cache locality, loop unrolling, trying to eliminate long dependency chains to get better instruction-level parallelism. Prefer conditional moves over branches when possible (sketched below). Exploit SIMD instructions where possible.
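As an illustration of the conditional-move point (a hedged sketch; whether the compiler actually emits a conditional move depends on the target and flags, so check the assembly):

    // Branchy vs. branchless clamp. With unpredictable data the second form
    // typically compiles to conditional moves and avoids mispredict stalls.
    // On highly predictable data the branchy form can win: measure.
    int clampBranch(int x, int lo, int hi) {
        if (x < lo) return lo;
        if (x > hi) return hi;
        return x;
    }

    int clampBranchless(int x, int lo, int hi) {
        x = (x < lo) ? lo : x;   // usually emitted as cmov on x86
        x = (x > hi) ? hi : x;
        return x;
    }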

Understand what your code is doing, and understand the hardware it runs on. Then deciding what needs to be done to improve the performance of your code becomes fairly simple. That's the only truly general piece of advice I can think of.

Well, that, and "show the code on SO and ask for optimization advice for that specific piece of code".


If better hardware is an option, then definitely go for it. Otherwise:

- Check you are using the best compiler and linker options.
- If the hotspot routine is in a different library to its frequent caller, consider moving or cloning it into the caller's module. This eliminates some of the call overhead and may improve cache hits (cf. how AIX links strcpy() statically into separately linked shared objects). It could of course decrease cache hits as well, which is why you measure.
- See if there is any possibility of using a specialized version of the hotspot routine. The downside is more than one version to maintain.
- Look at the assembler. If you think it could be better, consider why the compiler did not figure this out, and how you could help the compiler.
- Consider: are you really using the best algorithm? Is it the best algorithm for your input size?


OK, you're defining the problem to where it would seem there is not much room for improvement. That is fairly rare, in my experience. I tried to explain this in a Dr. Dobbs article in November 1993, by starting from a conventionally well-designed non-trivial program with no obvious waste and taking it through a series of optimizations until its wall-clock time was reduced from 48 seconds to 1.1 seconds, and the source code size was reduced by a factor of 4. My diagnostic tool was this. The sequence of changes was this:

- The first problem found was the use of list clusters (now called "iterators" and "container classes") accounting for over half the time. Those were replaced with fairly simple code, bringing the time down to 20 seconds.
- Now the largest time-taker is more list-building. As a percentage, it was not so big before, but now it is because the bigger problem was removed. I find a way to speed it up, and the time drops to 17 seconds.
- Now it is harder to find obvious culprits, but there are a few smaller ones that I can do something about, and the time drops to 13 sec.

Now I seem to have hit a wall. The samples are telling me exactly what it is doing, but I can't seem to find anything that I can improve. Then I reflect on the basic design of the program, on its transaction-driven structure, and ask whether all the list-searching it is doing is actually mandated by the requirements of the problem.

Then I hit upon a re-design, where the program code is actually generated (via preprocessor macros) from a smaller set of source, and in which the program is not constantly figuring out things that the programmer knows are fairly predictable. In other words, don't "interpret" the sequence of things to do, "compile" it.

That redesign is done, the source code shrinks by a factor of 4, and the time is reduced to 10 seconds.

Now, because it is getting so quick, it's hard to sample, so I give it 10 times as much work to do, but the following times are based on the original workload.

- More diagnosis reveals that it is spending time in queue-management. In-lining these routines reduces the time to 7 seconds.
- Now a big time-taker is the diagnostic printing I had been doing. Flush that - 4 seconds.
- Now the biggest time-takers are calls to malloc and free. Recycle objects - 2.6 seconds.
- Continuing to sample, I still find operations that are not strictly necessary - 1.1 seconds.

Total speedup factor: 43.6.

Now no two programs are alike, but in non-toy software I've always seen a progression like this. First you get the easy stuff, and then the more difficult, until you get to a point of diminishing returns. Then the insight you gain may well lead to a redesign, starting a new round of speedups, until you again hit diminishing returns. Now this is the point at which it might make sense to wonder whether ++i or i++ or for(;;) or while(1) are faster: the kinds of questions I see so often on Stack Overflow.

Afterword: some people may wonder why I didn't use a profiler. The answer is that almost every one of these "problems" was a function call site, which stack samples pinpoint. Profilers, even today, are just barely coming around to the idea that statements and call instructions are more important to locate, and easier to fix, than whole functions.

I actually built a profiler to do this, but for real intimacy with what the code is doing, there's no substitute for getting your fingers right in it. The small number of samples is not an issue, because none of the problems being found were so tiny that they could easily be missed.

ADDED: jerryjvl requested some examples. Here is the first problem. It consists of a small number of separate lines of code, together taking over half the time:

 /* IF ALL TASKS DONE, SEND ITC_ACKOP, AND DELETE OP */
if (ptop->current_task >= ILST_LENGTH(ptop->tasklist)){
. . .
/* FOR EACH OPERATION REQUEST */
for ( ptop = ILST_FIRST(oplist); ptop != NULL; ptop = ILST_NEXT(oplist, ptop)){
. . .
/* GET CURRENT TASK */
ptask = ILST_NTH(ptop->tasklist, ptop->current_task);

These were using the list cluster ILST (similar to a list class). They are implemented in the usual way, with "information hiding" meaning that the users of the class were not supposed to have to care how they were implemented. When these lines were written (out of roughly 800 lines of code) thought was not given to the idea that these could be a "bottleneck" (I hate that word). They are simply the recommended way to do things. It is easy to say in hindsight that these should have been avoided, but in my experience all performance problems are like that. In general, it is good to try to avoid creating performance problems. It is even better to find and fix the ones that are created, even though they "should have been avoided" (in hindsight). I hope that gives a bit of the flavor.

Here is the second problem, in two separate lines:

 /* ADD TASK TO TASK LIST */
ILST_APPEND(ptop->tasklist, ptask);
. . .
/* ADD TRANSACTION TO TRANSACTION QUEUE */
ILST_APPEND(trnque, ptrn);

These are building lists by appending items to their ends. (The fix was to collect the items into arrays and build the lists all at once; a sketch of that idea follows.) The interesting thing is that these statements only cost 3/48 of the original time (i.e. were on the call stack that often), so they were not, in fact, a big problem at the beginning. However, after removing the first problem, they cost 3/20 of the time, so they were now a "bigger fish". In general, that's how it goes.
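A rough modern-C++ sketch of that fix, i.e. gathering items into a contiguous array and building the final structure in one pass (the Task type and count here are invented for illustration):

    #include <vector>

    struct Task { int id; };  // stand-in for the real task record

    std::vector<Task> collectTasks(int expectedCount) {
        std::vector<Task> pending;
        pending.reserve(expectedCount);   // one allocation up front
        for (int i = 0; i < expectedCount; ++i)
            pending.push_back(Task{i});   // amortized O(1), no per-item list bookkeeping
        return pending;                   // hand the whole batch to the consumer at once
    }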

I might add that this project was distilled from a real project I helped on. In that project, the performance problems were far more dramatic (as were the speedups), such as calling a database-access routine within an inner loop to see if a task was finished.

REFERENCE ADDED: The source code, both original and redesigned, can be found at www.ddj.com, for 1993, in file 9311.zip, files slug.asc and slug.zip.

EDIT 2011/11/26: There is now a SourceForge project containing source code in Visual C++ and a detailed description of how it was tuned. It only goes through the first half of the scenario described above, and it doesn't follow exactly the same sequence, but still gets a 2-3 order-of-magnitude speedup.


Since many of the performance problems involve database issues, I'll give you some specific things to look at when tuning queries and stored procedures.

Avoid cursors in most databases. Avoid looping as well. Most of the time, data access should be set-based, not record-by-record processing. This includes not reusing a single-record stored procedure when you want to insert 1,000,000 records at once.

Never use select *, only return the fields you actually need. This is especially true if there are any joins, as the join fields will be repeated, creating unnecessary load on both the server and the network.

Avoid correlated subqueries. Use joins (including joins to derived tables where possible). (I know this is true for Microsoft SQL Server, but test the advice when using a different back-end.)

Index, index, index. And get those statistics updated, if applicable to your database.

Make the query sargable. This means avoiding things that make it impossible to use indexes, such as using a wildcard in the first character of a like clause, or a function in the join, or as the left-hand part of a where statement.

Use the correct data types. It is faster to do date math on a date field than to try to convert a string data type to a date data type and then do the calculation.

Never put a loop of any kind into a trigger!

Most databases have a way to check how the query execution will be done. In Microsoft SQL Server this is called an execution plan. Check that first to see where the problem areas lie.

Consider how often the query runs, as well as how long it takes to run, when determining what needs to be optimized. Sometimes you can gain more performance from a slight tweak to a query that runs millions of times a day than from wiping time off a long-running query that only runs once a month.

Use some sort of profiler tool to find out what is really being sent to and from the database. I can remember one time in the past where we couldn't figure out why the page was so slow to load when the stored procedure was fast, and found out through profiling that the web page was requesting the query many times instead of once.

The profiler will also help you find who is blocking whom. Some queries that execute quickly when running alone may become really slow due to locks from other queries.


You might want to consider the "Google perspective", i.e. determine how your application can become largely parallelized and concurrent, which inevitably also means at some point looking into distributing your application across different machines and networks, so that it can ideally scale almost linearly with the hardware you throw at it.

On the other hand, the Google folks are also known for throwing lots of manpower and resources at solving some of the issues in the projects, tools and infrastructure they are using, such as, for example, whole-program optimization for GCC by having a dedicated team of engineers hacking GCC internals in order to prepare it for Google's typical use-case scenarios.

Similarly, profiling an application no longer means simply profiling the program code, but also all its surrounding systems and infrastructure (think networks, switches, servers, RAID arrays) in order to identify redundancies and optimization potential from a system's point of view.


I think this has already been said in a different way. But when you're dealing with a processor-intensive algorithm, you should simplify everything inside the most inner loop at the expense of everything else.

That may seem obvious to some, but it's something I try to focus on regardless of the language I'm working with. If you're dealing with nested loops, for example, and you find an opportunity to take some code down a level, you can in some cases drastically speed up your code. As another example, there are the little things to think about like working with integers instead of floating point variables whenever you can, and using multiplication instead of division whenever you can. Again, these are things that should be considered for your most inner loop.

At times, you may find the benefit of performing your math on integers inside the inner loop, and then scaling it down to a floating point variable you can work with afterwards. That's an example of sacrificing speed in one section to improve the speed in another, but in some cases the pay-off can be well worth it.


When you can't improve the performance any more, see if you can improve the perceived performance instead.

You may not be able to make your fooCalc algorithm faster, but often there are ways to make your application seem more responsive to the user.

A few examples:

- anticipate what the user is going to request and start working on that before they ask for it
- display results as they come in, instead of all at once at the end
- accurate progress meter

These won't make your program faster, but they might make your users happier with the speed you have.


I've spent most of my life in just this place. The broad strokes are to run your profiler and get it to record:

- Cache misses. The data cache is the #1 source of stalls in most programs. Improve the cache hit rate by reorganizing offending data structures to have better locality; pack structures and numerical types down to eliminate wasted bytes (and therefore wasted cache fetches); prefetch data wherever possible to reduce stalls.
- Load-hit-stores. Compiler assumptions about pointer aliasing, and cases where data is moved between disconnected register sets via memory, can cause a certain pathological behavior that causes the entire CPU pipeline to clear on a load op. Find places where floats, vectors, and ints are being cast to one another and eliminate them. Use __restrict liberally to promise the compiler about aliasing (see the sketch after this list).
- Microcoded operations. Most processors have some operations that cannot be pipelined, but instead run a tiny subroutine stored in ROM. Examples on the PowerPC are integer multiply, divide, and shift-by-variable-amount. The problem is that the entire pipeline stops dead while this operation is executing. Try to eliminate use of these operations, or at least break them down into their constituent pipelined ops, so you can get the benefit of superscalar dispatch on whatever the rest of your program is doing.
- Branch mispredicts. These too empty the pipeline. Find cases where the CPU is spending a lot of time refilling the pipe after a branch, and use branch hinting if available to get it to predict correctly more often. Or better yet, replace branches with conditional-moves wherever possible, especially after floating point operations, because their pipe is usually deeper and reading the condition flags after fcmp can cause a stall.
- Sequential floating-point ops. Make these SIMD.
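As a sketch of the __restrict point (the spelling varies by compiler: __restrict in GCC/MSVC, restrict in C99; always verify the generated assembly):

    // Promising the compiler that the pointers never alias lets it keep
    // values in registers instead of conservatively reloading after every
    // store, avoiding the load-hit-store pattern described above.
    void mix(float* __restrict out,
             const float* __restrict a,
             const float* __restrict b,
             int n) {
        for (int i = 0; i < n; ++i)
            out[i] = a[i] + b[i];  // a[i]/b[i] need not be re-read after writing out[i]
    }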

One more thing I like to do:

Set your compiler to output assembly listings, and look at what it emits for the hotspot functions in your code. All those clever optimizations that "a good compiler should be able to do for you automatically"? Chances are your actual compiler doesn't do them. I've seen GCC emit truly WTF code.


Sometimes changing the layout of your data can help. In C, you might switch from an array of structures to a structure of arrays, or vice versa (sketched below).
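A minimal sketch of the two layouts in C++ (names and sizes are illustrative). If a hot loop touches only one field, the structure-of-arrays form keeps every fetched cache line full of useful data:

    // Array of structures: x, y, z interleaved in memory.
    struct PointAoS { float x, y, z; };
    PointAoS points_aos[1024];

    // Structure of arrays: all x values contiguous.
    struct PointsSoA {
        float x[1024];
        float y[1024];
        float z[1024];
    };
    PointsSoA points_soa;

    float sumX() {
        float s = 0.f;
        for (int i = 0; i < 1024; ++i)
            s += points_soa.x[i];   // sequential, cache-friendly reads of x only
        return s;
    }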


Impossible to say in general. It depends on the problem domain. Some possibilities:

Since you don't specify outright that your application is 100% calculation:

Search for calls that block (database, network, hard disk, display updates), and isolate them and/or put them into threads.

If you're using a database and it happens to be Microsoft SQL Server:

Look into nolock and rowlock directives. (There are discussions on this forum.)

If your app is pure calculation, you can look at this question of mine about cache optimization for rotating large images. The increase in speed astonished me.

It is a long shot, but maybe it gives an idea, especially if your problem is in the imaging domain: rotating-bitmaps-in-code.

Another one is avoiding dynamic memory allocation as much as possible. Allocate multiple structs at once, release them at once.

Otherwise, identify your tightest loops and post them here, either in pseudo-code or not, with some of the data structures.


- When you get to the point that you're using efficient algorithms, it's a question of what you need more: speed or memory. Use caching to "pay" in memory for more speed, or use calculations to reduce the memory footprint.
- If possible (and more cost effective), throw hardware at the problem - a faster CPU, more memory, or HD could solve the problem faster than trying to code it.
- Use parallelization if possible - run part of the code on multiple threads.
- Use the right tool for the job. Some programming languages create more efficient code; using managed code (i.e. Java/.NET) speeds up development, but native programming languages create faster-running code.
- Micro-optimize. Only where applicable: you can use optimized assembly to speed up small pieces of code, and using SSE/vector optimizations in the right places can greatly increase performance.


More suggestions:

- Avoid I/O: Any I/O (disk, network, ports, etc.) is always going to be far slower than any code that is performing calculations, so get rid of any I/O that you do not strictly need.
- Move I/O up-front: Load up all the data you are going to need for a calculation up-front, so that you do not have repeated I/O waits within the core of a critical algorithm (and maybe, as a result, repeated disk seeks, when loading all the data in one hit may avoid seeking).
- Delay I/O: Do not write out your results until the calculation is over; store them in a data structure and then dump that out in one go at the end when the hard work is done (a minimal sketch follows this list).
- Threaded I/O: For those daring enough, combine 'I/O up-front' or 'Delay I/O' with the actual calculation by moving the loading into a parallel thread, so that while you are loading more data you can work on a calculation on the data you already have, or while you calculate the next batch of data you can simultaneously write out the results from the last batch.
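A minimal sketch of the 'Delay I/O' item in C++ (the file name and record format are invented for the example): buffer the results in memory during the computation, then write them in one bulk operation at the end.

    #include <cstdio>
    #include <string>
    #include <vector>

    void run(const std::vector<double>& inputs) {
        std::string out;
        out.reserve(inputs.size() * 16);            // avoid reallocation churn
        for (double v : inputs)
            out += std::to_string(v * v) + "\n";    // compute and buffer; no I/O yet
        std::FILE* f = std::fopen("results.txt", "w");
        if (f) {
            std::fwrite(out.data(), 1, out.size(), f);  // single bulk write at the end
            std::fclose(f);
        }
    }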


In languages with templates (C++/D) you can try to propagate constant values via template args. You can even do this for small sets of not-really-constant values with a switch.

Foo(i, j); // i always in 0-4.

becomes

switch(i)
{
    case 0: Foo<0>(j); break;
    case 1: Foo<1>(j); break;
    case 2: Foo<2>(j); break;
    case 3: Foo<3>(j); break;
    case 4: Foo<4>(j); break;
}

The downside is cache pressure, so this would only be a gain in deep or long-running call trees where the value is constant for the duration.


The single most important limiting factor today is the limited memory bandwidth. Multicores are just making this worse, as the bandwidth is shared between cores. Also, the limited chip area devoted to implementing caches is divided among the cores and threads, worsening this problem even more. Finally, the inter-chip signalling needed to keep the different caches coherent also increases with the number of cores. This adds a penalty too.

These are the effects that you need to manage. Sometimes through micro-managing your code, but sometimes through careful consideration and refactoring.

A lot of comments already mention cache-friendly code. There are at least two distinct flavors of this:

- Avoid memory fetch latencies.
- Lower memory bus pressure (bandwidth).

The first problem specifically has to do with making your data access patterns more regular, allowing the hardware prefetcher to work efficiently. Avoid dynamic memory allocation, which spreads your data objects around in memory. Use linear containers instead of linked lists, hashes and trees.

The second problem has to do with improving data reuse. Alter your algorithms to work on subsets of your data that do fit in the available cache, and reuse that data as much as possible while it is still in the cache.

Packing data tighter and making sure you use all the data in the cache lines touched by the hot loops will help avoid these other effects, and allow fitting more useful data in the cache.
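A classic sketch of the data-reuse point is a blocked (tiled) matrix transpose; the tile size below is a placeholder to tune by measurement against the actual cache:

    const int N = 1024;  // matrix dimension (assumed divisible by B for brevity)
    const int B = 64;    // tile edge, chosen so a pair of tiles fits in cache

    void transposeBlocked(float (&dst)[N][N], const float (&src)[N][N]) {
        for (int ii = 0; ii < N; ii += B)
            for (int jj = 0; jj < N; jj += B)
                for (int i = ii; i < ii + B; ++i)      // work stays within one tile...
                    for (int j = jj; j < jj + B; ++j)
                        dst[j][i] = src[i][j];         // ...so both tiles stay cache-hot
    }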


The last few % is a very CPU- and application-dependent thing....

Cache architectures differ, some chips have on-chip RAM you can map directly, ARMs (sometimes) have a vector unit, the SH4 has a useful matrix opcode. Is there a GPU - maybe a shader is the way to go. TMS320s are very sensitive to branches within loops (so separate the loops and move conditions outside if possible).

The list goes on.... but these sorts of things really are the last resort...

Build for x86, and run Valgrind/Cachegrind against the code for proper performance profiling. Or Texas Instruments' CCStudio has a sweet profiler. Then you'll know where to focus...


Cache! A cheap way (in programmer effort) to make almost anything faster is to add a caching abstraction layer to any data-movement area of your program. Be it I/O or just the passing/creation of objects or structs. Often it's easy to add caches to factory classes and readers/writers.

Sometimes the cache will not gain you much, but it's an easy approach to just add caching all over and then disable it where it doesn't help. I've often found this to gain huge performance without having to micro-analyse the code.
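A minimal sketch of such a cache layer in C++ (the Record type and loader are hypothetical): the cache sits in front of the expensive producer, and callers don't change when it's enabled or disabled.

    #include <map>

    struct Record { int value; };

    // Stand-in for the real expensive work (disk, database, heavy parse...).
    Record loadExpensive(int key) { return Record{key * 2}; }

    Record loadCached(int key) {
        static std::map<int, Record> cache;
        auto it = cache.find(key);
        if (it != cache.end())
            return it->second;            // hit: skip the expensive work entirely
        Record r = loadExpensive(key);
        cache.emplace(key, r);
        return r;
    }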


Tweak the OS and the framework.

It may sound like overkill, but think about it like this: operating systems and frameworks are designed to do many things. Your application only does very specific things. If you could get the OS to do exactly what your application needs, and have your application understand how the framework (php, .net, java) works, you could get much better out of your hardware.

Facebook, for example, changed some kernel-level things in Linux, and changed how memcached works (for example, they wrote a memcached proxy, and used udp instead of tcp).

Another example of this is Windows 2008. Win2K8 has a version where you can install just the base OS needed to run X applications (e.g. web apps, server apps). This reduces much of the overhead that the OS has on running processes and gives you better performance.

Of course, you should always throw in more hardware as the first step...


The Google way is one option: "Cache it... whenever possible, don't touch the disk."


Divide and conquer

If the dataset being processed is too large, loop over chunks of it. If your code is done right, implementation should be easy. If you have a monolithic program, now you know better.


As much as I like Mike Dunlavey's answer (and in fact it is a great answer with supporting examples), I think it could be expressed very simply:

Find out what takes the largest amounts of time first, and understand why.

It is the identification process of the time hogs that helps you understand where you must refine your algorithm. This is the only all-encompassing, language-agnostic answer I can find to a problem that's already supposed to be fully optimized. It also assumes you want to be architecture-independent in your quest for speed.

So, while the algorithm may be optimized, the implementation of it may not be. The identification allows you to know which part is which: algorithm or implementation. So whichever hogs the time the most is your prime candidate for review. But since you say you want to squeeze the last few % out, you might want to also examine the lesser parts, the parts that you have not examined that closely at first.

Lastly, a bit of trial and error with performance figures on different ways to implement the same solution, or potentially different algorithms, can bring insights that help identify time-wasters and time-savers.

HPH, asoudmove。


Did you know that a CAT6 cable is capable of 10x better shielding against outside interference compared to a default Cat5e UTP cable?

For any non-offline projects, while having the best software and the best hardware, if your throughput is weak, then that thin line is going to squeeze the data and give you delays, albeit in milliseconds...

Also, the maximum throughput is higher on CAT6 cables because there is a higher chance that you will actually receive a cable whose strands are pure copper cores, instead of CCA, Copper Clad Aluminium, which is often found in standard CAT5e cables.

If you are facing lost packets, packet drops, then an increase in throughput reliability for 24/7 operation can make the difference that you may be looking for.

For those who seek the ultimate in home/office connection reliability (and are willing to say NO to this year's fast-food restaurants - at the end of the year you can be there), gift yourself the pinnacle of LAN connectivity in the form of a CAT7 cable from a reputable brand.


Pass by reference instead of by value


Reduce variable sizes (in embedded systems)

If your variable size is larger than the word size on a specific architecture, it can have a significant effect on both code size and speed. For example, if you have a 16-bit system and use a long int variable very often, and later realize that it can never get outside the range (−32,768 … 32,767), consider reducing it to short int.

From my personal experience, if a program is ready or almost ready, but we realize it takes up about 110% or 120% of the target hardware's program memory, a quick normalization of variables usually solves the problem more often than not.

By this time, optimizing the algorithms or parts of the code itself can become frustratingly futile:

- Reorganize the whole structure, and the program no longer works as intended, or at least you introduce a lot of bugs.
- Do some clever tricks: usually you spend a lot of time optimizing something, and discover no or a very small decrease in code size, because the compiler would have optimized it anyway.

Many people make the mistake of having variables which exactly store the numerical value of a unit they use the variable for: for example, their variable time stores the exact number of milliseconds, even if only time steps of say 50 ms are relevant. Maybe if your variable represented 50 ms for each increment of one, you would be able to fit into a variable smaller or equal to the word size. On an 8 bit system, for example, even a simple addition of two 32-bit variables generates a fair amount of code, especially if you are low on registers, while 8 bit additions are both small and fast.
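A small sketch of that scaling idea (tick size and types chosen for illustration): storing time in 50 ms ticks lets a 16-bit variable cover what raw milliseconds would need 32 bits for.

    #include <stdint.h>

    enum { TICK_MS = 50 };  // the resolution that actually matters to the program

    // 16-bit ticks cover 65535 * 50 ms, roughly 54 minutes; on an 8-bit MCU,
    // arithmetic on uint16_t is much cheaper than on 32-bit values.
    uint16_t toTicks(uint32_t ms) { return (uint16_t)(ms / TICK_MS); }
    uint32_t toMillis(uint16_t t) { return (uint32_t)t * TICK_MS; }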


Here are some quick and dirty optimization techniques I use. I consider this a "first pass" optimization.

Learn where the time is spent. Is it file IO? Is it CPU time? Is it the network? Is it the database? It's useless to optimize for IO if that's not the bottleneck.

Know your environment. Knowing where to optimize typically depends on the development environment. In VB6, for example, passing by reference is slower than passing by value, but in C and C++, by reference is vastly faster. In C, it is reasonable to try something and do something different if a return code indicates a failure, while in Dot Net, catching exceptions is much slower than checking for a valid condition before attempting.

Build indexes on frequently queried database fields. You can almost always trade space for speed.

Inside the loop to be optimized, I avoid having to do any lookups. Find the offset and/or index outside of the loop and reuse the data inside.

Minimize IO. Try to design in a manner that reduces the number of times you have to read or write, especially over a networked connection.

Reduce abstractions. The more layers of abstraction the code has to go through, the slower it is. Inside the critical loop, reduce abstractions (e.g. reveal lower-level methods that avoid the extra code).

For projects with a user interface, spawning a new thread to perform slower tasks makes the application feel more responsive, even though it isn't.

You can generally trade space for speed. If there are calculations or other intense operations, see if you can precompute some of the information before you're in the critical loop.


First of all, as mentioned in several prior answers, learn what bites your performance - is it memory, processor, network, database, or something else. Depending on that...

- ...if it's memory - find one of the books written a long time ago by Knuth, one of the "The Art of Computer Programming" series. Most likely it's the one about sorting and searching - if my memory is wrong, then you'll have to find the one in which he talks about how to deal with slow tape data storage. Mentally transform his memory/tape pair into your pair of cache/main memory (or into a pair of L1/L2 cache) respectively. Study all the tricks he describes - if you don't find something that solves your problem, then hire a professional computer scientist to conduct professional research. If your memory issue is by chance with FFT (cache misses at bit-reversed indexes when doing radix-2 butterflies) then don't hire a scientist - instead, manually optimize the passes one-by-one until you either win or hit a dead end. You mentioned squeezing out up to the last few percent, right? If it's few indeed, you'll most likely win.
- ...if it's the processor - switch to assembly language. Study the processor specification - what takes ticks, VLIW, SIMD. Function calls are most likely replaceable tick-eaters. Learn loop transformations - pipelining, unrolling. Multiplies and divisions might be replaceable/interpolated with bit shifts (multiplies by small integers might be replaceable with additions); a sketch of the shift trick follows this list. Try tricks with shorter data - if you're lucky, one instruction with 64 bits might turn out replaceable with two on 32, or even 4 on 16, or 8 on 8 bits, go figure. Try also longer data - e.g. your float calculations might turn out slower than double ones on a particular processor. If you have trigonometric stuff, fight it with pre-calculated tables; also keep in mind that the sine of a small value might be replaced with that value, if the loss of precision is within the allowed limits.
- ...if it's the network - think of compressing the data you pass over it. Replace XML transfer with binary. Study protocols. Try UDP instead of TCP if you can somehow handle data loss.
- ...if it's the database, well, go to any database forum and ask for advice. In-memory data-grids, optimizing the query plan, etc. etc. etc.
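A sketch of the bit-shift point from the processor item above (modern compilers usually do this themselves, so check the disassembly before keeping it):

    #include <stdint.h>

    // Multiplying by a small constant, decomposed into shifts and adds.
    uint32_t times10(uint32_t x) {
        return (x << 3) + (x << 1);  // 8x + 2x = 10x
    }

    // Dividing an unsigned value by a power of two is a single shift.
    uint32_t div8(uint32_t x) {
        return x >> 3;
    }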

HTH:)


I've spent some time optimizing client/server business systems operating over low-bandwidth and long-latency networks (e.g. satellite, remote, offshore), and have been able to achieve some dramatic performance improvements with a fairly repeatable process.

- Measure: Start by understanding the network's underlying capacity and topology. Talk to the relevant networking people in the business, and make use of basic tools such as ping and traceroute to establish (at a minimum) the network latency from each client location, during typical operational periods. Next, take accurate time measurements of specific end user functions that display the problematic symptoms. Record all of these measurements, along with their locations, dates and times. Consider building end-user "network performance testing" functionality into your client application, allowing your power users to participate in the process of improvement; empowering them like this can have a huge psychological impact when you're dealing with users frustrated by a poorly performing system.
- Analyze: Use any and all logging methods available to establish exactly what data is being transmitted and received during the execution of the affected operations. Ideally, your application can capture data transmitted and received by both the client and the server. If these include timestamps as well, even better. If sufficient logging isn't available (e.g. closed system, or inability to deploy modifications into a production environment), use a network sniffer and make sure you really understand what's going on at the network level.
- Cache: Look for cases where static or infrequently changed data is being transmitted repetitively and consider an appropriate caching strategy. Typical examples include "pick list" values or other "reference entities", which can be surprisingly large in some business applications. In many cases, users can accept that they must restart or refresh the application to update infrequently updated data, especially if it can shave significant time from the display of commonly used user interface elements. Make sure you understand the real behaviour of the caching elements already deployed - many common caching methods (e.g. HTTP ETag) still require a network round-trip to ensure consistency, and where network latency is expensive, you may be able to avoid it altogether with a different caching approach.
- Parallelise: Look for sequential transactions that don't logically need to be issued strictly sequentially, and rework the system to issue them in parallel. I dealt with one case where an end-to-end request had an inherent network delay of ~2s, which was not a problem for a single transaction, but when 6 sequential 2s round trips were required before the user regained control of the client application, it became a huge source of frustration. Discovering that these transactions were in fact independent allowed them to be executed in parallel, reducing the end-user delay to very close to the cost of a single round trip.
- Combine: Where sequential requests must be executed sequentially, look for opportunities to combine them into a single more comprehensive request. Typical examples include creation of new entities, followed by requests to relate those entities to other existing entities.
- Compress: Look for opportunities to leverage compression of the payload, either by replacing a textual form with a binary one, or using actual compression technology. Many modern (i.e. within a decade) technology stacks support this almost transparently, so make sure it's configured. I have often been surprised by the significant impact of compression where it seemed clear that the problem was fundamentally latency rather than bandwidth, discovering after the fact that it allowed the transaction to fit within a single packet or otherwise avoid packet loss and therefore have an outsize impact on performance.
- Repeat: Go back to the beginning and re-measure your operations (at the same locations and times) with the improvements in place, record and report your results. As with all optimisation, some problems may have been solved, exposing others that now dominate.

In the steps above, I focus on the application related optimisation process, but of course you must ensure the underlying network itself is configured in the most efficient manner to support your application too. Engage the networking specialists in the business and determine if they're able to apply capacity improvements, QoS, network compression, or other techniques to address the problem. Usually, they will not understand your application's needs, so it's important that you're equipped (after the Analyse step) to discuss it with them, and also to make the business case for any costs you're going to be asking them to incur. I've encountered cases where erroneous network configuration caused the applications data to be transmitted over a slow satellite link rather than an overland link, simply because it was using a TCP port that was not "well known" by the networking specialists; obviously rectifying a problem like this can have a dramatic impact on performance, with no software code or configuration changes necessary at all.


Not nearly as in-depth or complex as the previous answers, but here goes: (these are more beginner/intermediate level)

- obvious: DRY
- run loops backwards so you're always comparing to 0 rather than a variable (sketched below)
- use bitwise operators whenever you can
- break repetitive code out into modules/functions
- cache objects
- local variables have a slight performance advantage
- limit string manipulation as much as possible
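For instance, the "run loops backwards" item could look like this (gains are tiny and compiler-dependent; measure before keeping it):

    // Counting down lets the loop condition compare against zero, which many
    // CPUs test as a free side effect of the decrement.
    int sum(const int* a, int n) {
        int s = 0;
        for (int i = n - 1; i >= 0; --i)
            s += a[i];
        return s;
    }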


If you have a lot of highly parallel floating point math - especially single-precision - try offloading it to a graphics processor (if one is present) using OpenCL or (for NVidia chips) CUDA. GPUs have immense floating point computing power in their shaders, which is much greater than that of a CPU.


Adding this answer since I didn't see it included in all the others.

Minimize implicit conversion between types and signs:

This applies to C/C++ at least. Even if you already think you're free of conversions - sometimes it's good to test adding compiler warnings around functions that require performance; especially watch out for conversions within loops.

GCC specific: you can test this by adding some verbose pragmas around your code,

#ifdef __GNUC__
#  pragma GCC diagnostic push
#  pragma GCC diagnostic error "-Wsign-conversion"
#  pragma GCC diagnostic error "-Wdouble-promotion"
#  pragma GCC diagnostic error "-Wsign-compare"
#  pragma GCC diagnostic error "-Wconversion"
#endif

/* your code */

#ifdef __GNUC__
#  pragma GCC diagnostic pop
#endif

I've seen cases where you can get a few percent speedup by reducing the conversions raised by warnings like these.

In some cases I keep a header with strict warnings that I leave included to prevent accidental conversions; however, this is a trade-off, since you may end up adding a lot of casts to quiet intentional conversions, which may just make the code more cluttered for minimal gains.