There are already a lot of performance questions on this site, but it strikes me that almost all of them are very problem-specific and fairly narrow. And almost all repeat the advice to avoid premature optimization.
Let's assume:
the code is already working correctly
the algorithms chosen are already optimal for the circumstances of the problem
the code has been measured and the offending routines have been isolated
all attempts at optimization will also be measured to ensure they do not make matters worse
What I am looking for here are strategies and tricks to squeeze out the last few percent from a critical algorithm when there is nothing else left to do but whatever it takes.
Ideally, keep the answers language-agnostic, and point out any downsides of the suggested strategies where applicable.
I'll add a reply with my own initial suggestions, and I look forward to whatever else the Stack Overflow community can come up with.
I've spent most of my life in just this place. The broad strokes are to run your profiler and record:
Cache misses. Data cache is the #1 source of stalls in most programs. Improve cache hit rate by reorganizing offending data structures to have better locality; pack structures and numerical types down to eliminate wasted bytes (and therefore wasted cache fetches); prefetch data wherever possible to reduce stalls. (A small data-layout sketch follows this list.)
Load-hit-stores. Compiler assumptions about pointer aliasing, and cases where data is moved between disconnected register sets via memory, can cause a certain pathological behavior that causes the entire CPU pipeline to clear on a load op. Find places where floats, vectors, and ints are being cast to one another and eliminate them. Use __restrict liberally to make promises to the compiler about aliasing. (An aliasing sketch also follows the list.)
Microcoded operations. Most processors have some operations that cannot be pipelined, but instead run a tiny subroutine stored in ROM. Examples on the PowerPC are integer multiply, divide, and shift-by-variable-amount. The problem is that the entire pipeline stops dead while this operation is executing. Try to eliminate use of these operations or at least break them down into their constituent pipelined ops so you can get the benefit of superscalar dispatch on whatever the rest of your program is doing.
Branch mispredicts. These too empty the pipeline. Find cases where the CPU is spending a lot of time refilling the pipe after a branch, and use branch hinting if available to get it to predict correctly more often. Or better yet, replace branches with conditional-moves wherever possible, especially after floating point operations because their pipe is usually deeper and reading the condition flags after fcmp can cause a stall. (A branchless sketch follows the list.)
Sequential floating-point ops. Make these SIMD. (An intrinsics sketch follows the list.)
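To make the cache-miss point concrete, here is a minimal C++ sketch of the usual data-layout fix: splitting an array-of-structs into a struct-of-arrays so a loop that only touches positions no longer drags cold bytes (colors, etc.) through the cache. The Particle/ParticlesSoA names and fields are invented for illustration, not taken from any particular codebase.

```cpp
#include <cstddef>
#include <vector>

// Array-of-structs: updating positions drags velocity and color data
// through the cache even though this loop never reads the colors.
struct Particle {
    float x, y, z;
    float vx, vy, vz;
    unsigned char r, g, b, a;   // "cold" data for this loop
};

void advance_aos(std::vector<Particle>& ps, float dt) {
    for (Particle& p : ps) {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;
    }
}

// Struct-of-arrays: the same update now touches only the hot fields,
// so every cache line fetched is fully used.
struct ParticlesSoA {
    std::vector<float> x, y, z;
    std::vector<float> vx, vy, vz;
    std::vector<unsigned char> rgba;  // cold data lives elsewhere
};

void advance_soa(ParticlesSoA& ps, float dt) {
    const std::size_t n = ps.x.size();
    for (std::size_t i = 0; i < n; ++i) {
        ps.x[i] += ps.vx[i] * dt;
        ps.y[i] += ps.vy[i] * dt;
        ps.z[i] += ps.vz[i] * dt;
    }
}
```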
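And a tiny sketch of the aliasing promise. __restrict is a common compiler extension (GCC, Clang, MSVC; C99 spells it restrict); without it the compiler must assume dst and src might overlap and be conservative about reordering and vectorizing.

```cpp
#include <cstddef>

// With the __restrict promise the compiler may assume dst and src do not
// alias, so it can keep src values in registers and vectorize the loop.
void scale_copy(float* __restrict dst,
                const float* __restrict src,
                std::size_t n, float scale) {
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = src[i] * scale;
}
```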
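For the branch-mispredict item, a sketch of replacing an unpredictable branch with a select that compilers typically lower to a conditional move. Whether a cmov/fsel is actually emitted depends on compiler and target, so check the assembly listing as suggested further down.

```cpp
#include <cstddef>

// Branchy version: an unpredictable comparison per element can cost a
// pipeline refill on every mispredict.
float sum_if_branchy(const float* v, std::size_t n, float threshold) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        if (v[i] > threshold)
            sum += v[i];
    }
    return sum;
}

// Branchless version: compute both outcomes and select, which compilers
// usually turn into a conditional move/select rather than a jump.
float sum_if_branchless(const float* v, std::size_t n, float threshold) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        float add = (v[i] > threshold) ? v[i] : 0.0f;  // select, no jump
        sum += add;
    }
    return sum;
}
```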
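And for the sequential floating-point item, a sketch of a simple loop rewritten with SSE intrinsics. This is x86-specific; on PowerPC or ARM the equivalent would be AltiVec/VMX or NEON. The saxpy kernel is just an illustrative example.

```cpp
#include <immintrin.h>  // SSE intrinsics (x86/x86-64)
#include <cstddef>

// Scalar loop: one multiply-add per iteration.
void saxpy_scalar(float* y, const float* x, float a, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] += a * x[i];
}

// SSE version: four floats per iteration; leftover elements done scalar.
void saxpy_sse(float* y, const float* x, float a, std::size_t n) {
    const __m128 va = _mm_set1_ps(a);
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        vy = _mm_add_ps(vy, _mm_mul_ps(va, vx));
        _mm_storeu_ps(y + i, vy);
    }
    for (; i < n; ++i)            // remainder
        y[i] += a * x[i];
}
```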
One other thing I like to do:
Set your compiler to output assembly listings, and look at what it emits for the hot functions in your code. All those clever optimizations that "a good compiler should be able to do for you automatically"? Chances are your actual compiler doesn't do them. I've seen GCC emit truly WTF code.
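For reference, one way to get such a listing with the GNU toolchain (the file name here is a placeholder):

```
# Emit an annotated assembly listing for one translation unit:
g++ -O2 -S -fverbose-asm hot_stuff.cpp -o hot_stuff.s

# Or disassemble what actually ended up in the optimized object:
g++ -O2 -g -c hot_stuff.cpp && objdump -d -S hot_stuff.o
```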
Since many of the performance problems involve database issues, I'll give some specific things to look at when tuning queries and stored procedures.
Avoid cursors in most databases. Avoid looping as well. Most of the time, data access should be set-based, not record-by-record processing. This includes not reusing a single-record stored procedure when you want to insert 1,000,000 records at once.
Never use select *; return only the fields you actually need. This is especially true if there are any joins, as the join fields will be repeated, creating needless load on both the server and the network.
Avoid correlated subqueries. Use joins instead (including joins to derived tables where possible). (I know this holds for Microsoft SQL Server, but test the advice when using a different back end.)
Index, index, index. And get those statistics updated, if that applies to your database.
Make the query sargable. That means avoiding things that make it impossible to use indexes, such as a wildcard in the first character of a like clause, or a function in the join or on the left-hand side of a where comparison.
Use the correct data types. It is much faster to do date math on a date field than to try to convert a string data type to a date data type and then do the calculation.
Never put a loop of any kind into a trigger!
Most databases have some way to check how a query will be executed. In Microsoft SQL Server this is called an execution plan. Check that first to see where the problem areas lie.
Consider how often the query runs as well as how long it takes to run when determining what needs to be optimized. Sometimes you can gain more performance from a slight tweak to a query that runs millions of times a day than from shaving time off a long-running query that only runs once a month.
Use some sort of profiler tool to find out what is really being sent to and from the database. I can remember one time in the past when we couldn't figure out why a page was so slow to load while the stored procedure was fast, and found out through profiling that the web page was requesting the query many times instead of once.
The profiler will also help you find out who is blocking whom. Some queries that execute quickly while running alone can become really slow due to locks from other queries.
First of all, as mentioned in several earlier answers, learn what is actually hurting your performance - is it memory, processor, network, database, or something else? Depending on that...
...if it's memory - find one of the books Knuth wrote long ago, one of the "The Art of Computer Programming" series. Most likely it's the one about sorting and searching - if my memory is wrong, you'll have to find out in which one he talks about dealing with slow tape storage. Mentally translate his memory/tape pair into your cache/main-memory pair (or your L1/L2 cache pair). Study all the tricks he describes - if you don't find something that solves your problem, hire a professional computer scientist to conduct proper research. If your memory issue happens to be with FFT (cache misses at bit-reversed indexes when doing radix-2 butterflies), then don't hire a scientist - instead, manually optimize the passes one by one until you either win or hit a dead end. You mentioned squeezing out up to the last few percent, right? If it really is only a few, you'll most likely win.
...if it's the processor - switch to assembly language. Study the processor specification - what takes ticks, VLIW, SIMD. Function calls are most likely replaceable tick-eaters. Learn loop transformations - pipelining, unrolling. Multiplies and divisions might be replaceable with, or interpolated from, bit shifts (multiplies by small integers might be replaceable with additions). Try tricks with shorter data - if you're lucky, one instruction on 64 bits might turn out to be replaceable with two on 32, or even four on 16 or eight on 8 bits. Also try longer data - e.g. your float calculations might turn out slower than double ones on a particular processor. If you have trigonometric stuff, fight it with pre-calculated tables (a small sketch follows these points); also keep in mind that the sine of a small value can be replaced with that value itself if the loss of precision is within allowed limits.
...if it's the network - think about compressing the data you pass over it. Replace XML transfers with binary. Study protocols. Try UDP instead of TCP if you can somehow handle the data loss.
...if it's the database, well, go to any database forum and ask for advice: in-memory data grids, optimizing the query plan, and so on.
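As a small sketch of the pre-calculated-table idea mentioned under the processor case. Table size, precision, and whether this actually beats the library sinf are all processor- and problem-dependent; the names here are made up for illustration.

```cpp
#include <cmath>
#include <cstddef>

// 1024-entry sine table covering one full period. The power-of-two size
// lets the phase wrap be a bitwise AND instead of a modulo/divide (the
// same strength-reduction idea as replacing multiplies with shifts).
constexpr std::size_t TABLE_SIZE = 1024;
const double TWO_PI = 6.283185307179586;

struct SineTable {
    float values[TABLE_SIZE];
    SineTable() {
        for (std::size_t i = 0; i < TABLE_SIZE; ++i)
            values[i] = static_cast<float>(std::sin(TWO_PI * i / TABLE_SIZE));
    }
    // phase is in "table units": one full period == TABLE_SIZE steps.
    float operator()(std::size_t phase) const {
        return values[phase & (TABLE_SIZE - 1)];
    }
};

// Usage in a hot loop: build the table once, then index it instead of
// calling sinf() per sample.
//   static const SineTable table;
//   float s = table(phase);
//   phase += phase_increment;
```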
HTH:)