There are already plenty of performance questions on this site, but it strikes me that almost all of them are very problem-specific and fairly narrow. And almost all of them repeat the advice to avoid premature optimization.

Let's assume that:

- the code is already working correctly
- the algorithm chosen is already optimal for the circumstances of the problem
- the code has been measured, and the offending routines have been isolated
- all attempts at optimization will also be measured to ensure they do not make matters worse

What I am looking for here are strategies and tricks for squeezing out the last few percent from a critical algorithm when there is nothing else left to do but whatever it takes.

Ideally, try to keep answers language-agnostic, and point out any downsides of the suggested strategies where applicable.

I'll add a reply with my own initial suggestions, and look forward to whatever else the Stack Overflow community can come up with.


Current answer

I've spent some time optimizing client/server business systems operating over low-bandwidth, high-latency networks (e.g. satellite, remote, offshore), and have been able to achieve some significant performance improvements with a fairly repeatable process.

Measure: Start by understanding the network's underlying capacity and topology. Talk to the relevant networking people in the business, and make use of basic tools such as ping and traceroute to establish (at a minimum) the network latency from each client location during typical operational periods. Next, take accurate time measurements of specific end-user functions that display the problematic symptoms. Record all of these measurements, along with their locations, dates and times. Consider building end-user "network performance testing" functionality into your client application, allowing your power users to participate in the process of improvement; empowering them like this can have a huge psychological impact when you're dealing with users frustrated by a poorly performing system.

Analyze: Use any and all logging methods available to establish exactly what data is being transmitted and received during the execution of the affected operations. Ideally, your application can capture data transmitted and received by both the client and the server. If these include timestamps as well, even better. If sufficient logging isn't available (e.g. closed system, or inability to deploy modifications into a production environment), use a network sniffer and make sure you really understand what's going on at the network level.

Cache: Look for cases where static or infrequently changed data is being transmitted repetitively, and consider an appropriate caching strategy. Typical examples include "pick list" values or other "reference entities", which can be surprisingly large in some business applications. In many cases, users can accept that they must restart or refresh the application to update infrequently changed data, especially if it shaves significant time from the display of commonly used user interface elements. Make sure you understand the real behaviour of any caching elements already deployed: many common caching methods (e.g. HTTP ETag) still require a network round trip to ensure consistency, and where network latency is expensive, you may be able to avoid it altogether with a different caching approach.

Parallelise: Look for sequential transactions that don't logically need to be issued strictly sequentially, and rework the system to issue them in parallel (see the sketch after this list). I dealt with one case where an end-to-end request had an inherent network delay of ~2 s, which was not a problem for a single transaction, but when 6 sequential 2 s round trips were required before the user regained control of the client application, it became a huge source of frustration. Discovering that these transactions were in fact independent allowed them to be executed in parallel, reducing the end-user delay to very close to the cost of a single round trip.

Combine: Where sequential requests must be executed sequentially, look for opportunities to combine them into a single, more comprehensive request. Typical examples include creation of new entities, followed by requests to relate those entities to other existing entities.

Compress: Look for opportunities to leverage compression of the payload, either by replacing a textual form with a binary one, or by using actual compression technology. Many modern (i.e. within a decade) technology stacks support this almost transparently, so make sure it's configured. I have often been surprised by the significant impact of compression where it seemed clear that the problem was fundamentally latency rather than bandwidth, discovering after the fact that it allowed the transaction to fit within a single packet or otherwise avoid packet loss, and therefore have an outsized impact on performance.

Repeat: Go back to the beginning and re-measure your operations (at the same locations and times) with the improvements in place, then record and report your results. As with all optimisation, some problems may have been solved, exposing others that now dominate.
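To make the Parallelise step concrete, here is a minimal C++ sketch (my addition, not part of the answer above) that issues independent requests concurrently with std::async. The fetch function is a hypothetical stand-in for one blocking client/server round trip:

```cpp
#include <chrono>
#include <future>
#include <string>
#include <thread>
#include <vector>

// Stand-in for one blocking client/server round trip (~2 s of simulated network latency).
std::string fetch(const std::string& request) {
    std::this_thread::sleep_for(std::chrono::seconds(2));
    return "response to " + request;
}

// Issue independent requests concurrently instead of strictly one after another,
// so the total wait approaches the cost of a single round trip rather than their sum.
std::vector<std::string> fetch_all(const std::vector<std::string>& requests) {
    std::vector<std::future<std::string>> pending;
    for (const auto& r : requests)
        pending.push_back(std::async(std::launch::async, fetch, r));

    std::vector<std::string> results;
    for (auto& f : pending)
        results.push_back(f.get());   // the waits overlap instead of adding up
    return results;
}
```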

In the steps above, I focus on the application-related optimisation process, but of course you must ensure the underlying network itself is configured in the most efficient manner to support your application too. Engage the networking specialists in the business and determine if they're able to apply capacity improvements, QoS, network compression, or other techniques to address the problem. Usually, they will not understand your application's needs, so it's important that you're equipped (after the Analyze step) to discuss it with them, and also to make the business case for any costs you're going to be asking them to incur. I've encountered cases where erroneous network configuration caused the application's data to be transmitted over a slow satellite link rather than an overland link, simply because it was using a TCP port that was not "well known" by the networking specialists; obviously rectifying a problem like this can have a dramatic impact on performance, with no software code or configuration changes necessary at all.

Other answers

Did you know that a CAT6 cable is capable of shielding off external interference about 10x better than a default Cat5e UTP cable?

For any non-offline project, even with the best software and hardware, if your throughput is weak, that thin line will squeeze your data and give you delays, even if only a few milliseconds...

In addition, CAT6 cables have a higher maximum throughput, because you are more likely to actually receive a solid copper core rather than CCA (copper-clad aluminium), which is what usually shows up in standard CAT5e cables.

If you are facing packet loss, then improving throughput reliability for 24/7 operation can make the difference you are looking for.

For those after home/office connection reliability (and willing to say no to this year's fast-food runs so they can afford it by the end of the year), treat yourself to the top of the line for LAN connectivity in the form of a brand-name CAT7 cable.

Tune the OS and the framework.

It may sound like overkill, but think of it like this: operating systems and frameworks are designed to do many things. Your application only does very specific things. If you can get the OS to do exactly what your application needs, and make your application understand how the framework (PHP, .NET, Java) works, you can get much more out of your hardware.

Facebook, for example, changed some kernel-level things in Linux and changed how memcached works (for instance, they wrote a memcached proxy and used UDP instead of TCP).

Another example of this is Windows Server 2008. Win2K8 has an edition where you can install just the base OS needed to run X applications (e.g. web apps, server apps). This reduces much of the overhead that the OS adds to running processes and gives you better performance.

Of course, you should always throw more hardware at it as the first step...

While I like Mike Dunlavey's answer (it really is a great answer, with supporting examples), I think it could be put simply thus:

First find out what takes the most time, and understand why.

It is the process of identifying where the time is consumed that helps you understand where you must refine your algorithm. This is the only all-encompassing, language-agnostic answer I can find to a problem that is already assumed to be fully optimized. It also assumes you want to be architecture-independent in your quest for speed.

So while the algorithm may be optimized, its implementation may not be. Identification lets you know which part is which: the algorithm or the implementation. So whichever hogs the most time is your prime candidate for review. But since you say you want to squeeze the last few % out, you may also want to examine the smaller parts, the ones you didn't scrutinize as closely at first.

Lastly, a bit of trial and error with performance figures on different ways of implementing the same solution, or on potentially different algorithms, can bring insights that help identify time wasters and time savers.
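As a rough illustration of that kind of trial and error (my addition, not part of the answer), here is a small, self-contained C++ harness that times two interchangeable implementations of the same operation; the sum_* functions are hypothetical placeholders for whatever variants you are comparing:

```cpp
#include <chrono>
#include <cstdio>
#include <numeric>
#include <vector>

// Two candidate implementations of the same operation (placeholders for real variants).
long long sum_loop(const std::vector<int>& v) {
    long long total = 0;
    for (int x : v) total += x;
    return total;
}
long long sum_accumulate(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0LL);
}

// Time a candidate over many repetitions so the measurement dwarfs timer noise.
template <typename F>
double time_ms(F&& candidate, int repetitions, long long& sink) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < repetitions; ++i)
        sink += candidate();          // accumulate into a live variable so the work isn't optimized away
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main() {
    std::vector<int> v(1'000'000, 1);
    long long sink = 0;
    std::printf("loop:       %.2f ms\n", time_ms([&] { return sum_loop(v); }, 100, sink));
    std::printf("accumulate: %.2f ms\n", time_ms([&] { return sum_accumulate(v); }, 100, sink));
    std::printf("(checksum %lld)\n", sink);   // use the results so the compiler keeps them
}
```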

HPH, asoudmove。

I've spent most of my life in exactly this place. The broad strokes are to run your profiler and get it to record:

Cache misses. Data cache is the #1 source of stalls in most programs. Improve cache hit rate by reorganizing offending data structures to have better locality; pack structures and numerical types down to eliminate wasted bytes (and therefore wasted cache fetches); prefetch data wherever possible to reduce stalls. (A data-layout sketch follows this list.)

Load-hit-stores. Compiler assumptions about pointer aliasing, and cases where data is moved between disconnected register sets via memory, can cause a certain pathological behavior that causes the entire CPU pipeline to clear on a load op. Find places where floats, vectors, and ints are being cast to one another and eliminate them. Use __restrict liberally to promise the compiler about aliasing.

Microcoded operations. Most processors have some operations that cannot be pipelined, but instead run a tiny subroutine stored in ROM. Examples on the PowerPC are integer multiply, divide, and shift-by-variable-amount. The problem is that the entire pipeline stops dead while this operation is executing. Try to eliminate use of these operations or at least break them down into their constituent pipelined ops so you can get the benefit of superscalar dispatch on whatever the rest of your program is doing.

Branch mispredicts. These too empty the pipeline. Find cases where the CPU is spending a lot of time refilling the pipe after a branch, and use branch hinting if available to get it to predict correctly more often. Or better yet, replace branches with conditional-moves wherever possible, especially after floating point operations, because their pipe is usually deeper and reading the condition flags after fcmp can cause a stall.

Sequential floating-point ops. Make these SIMD.
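As a sketch of the data-layout advice in the first item (my addition), the following C++ fragment contrasts an array-of-structures with a structure-of-arrays; the particle example is hypothetical and only meant to show hot data packed contiguously:

```cpp
#include <cstddef>
#include <vector>

// Before: each element drags rarely-used fields into the cache on every update.
struct ParticleAoS {
    float x, y, z;
    float vx, vy, vz;
    char  name[64];      // cold data interleaved with hot data wastes cache lines
    bool  alive;
};

// After: hot fields packed contiguously ("structure of arrays"); cold data kept elsewhere.
struct ParticlesSoA {
    std::vector<float> x, y, z;
    std::vector<float> vx, vy, vz;
};

void integrate(ParticlesSoA& p, float dt) {
    // Sequential passes over tightly packed arrays: nearly every byte fetched is used,
    // and the regular access pattern is easy for the hardware prefetcher to follow.
    for (std::size_t i = 0; i < p.x.size(); ++i) {
        p.x[i] += p.vx[i] * dt;
        p.y[i] += p.vy[i] * dt;
        p.z[i] += p.vz[i] * dt;
    }
}
```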

One more thing I like to do:

Set your compiler to output assembly listings, and look at what it emits for the hotspot functions in your code. All those clever optimizations that "a good compiler should be able to do for you automatically"? Chances are your actual compiler doesn't do them. I've seen GCC emit truly WTF code.
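One possible way to do this with a GCC-style toolchain (the exact flags depend on your compiler; `dot` is just a made-up example function):

```cpp
// A deliberately simple hot function, compiled in isolation so its listing is easy to read.
// With GCC, something like the following produces an annotated assembly listing to inspect:
//   g++ -S -O2 -fverbose-asm hot.cpp -o hot.s
//   objdump -d -C hot.o
float dot(const float* a, const float* b, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += a[i] * b[i];   // check the listing: was this loop unrolled or vectorized at all?
    return sum;
}
```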

Here are some quick and dirty optimization techniques that I use. I consider these to be "first pass" optimizations.

Know where the time is being spent. Is it file IO? Is it CPU time? Is it the network? Is it the database? It's useless to optimize for IO if that isn't the bottleneck.

Know your environment. Knowing where to optimize typically depends on the development environment. In VB6, for example, passing by reference is slower than passing by value, but in C and C++ passing by reference is vastly faster. In C, it is reasonable to try something and do something different if a return code indicates a failure, while in .NET catching exceptions is much slower than checking for a valid condition before attempting the operation.
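To illustrate the C/C++ half of that point, here is a small sketch (my addition); `Record` is a hypothetical large object, and the only difference between the two functions is the parameter-passing convention:

```cpp
#include <cstddef>
#include <string>
#include <vector>

using Record = std::vector<std::string>;   // stand-in for a large, expensive-to-copy object

// Pass by value: every call copies the whole vector and every string inside it.
std::size_t total_length_by_value(Record r) {
    std::size_t total = 0;
    for (const auto& s : r) total += s.size();
    return total;
}

// Pass by const reference: no copy; the callee just reads the caller's object.
std::size_t total_length_by_ref(const Record& r) {
    std::size_t total = 0;
    for (const auto& s : r) total += s.size();
    return total;
}
```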

Build indexes on frequently queried database fields. You can almost always trade space for speed.

Inside a loop I'm optimizing, I avoid having to do any lookups. Find the offset and/or index outside the loop and reuse that data inside the loop.
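A minimal C++ sketch of that hoisting (my addition, using a hypothetical rates table):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Before: the same map lookup is repeated on every iteration.
double apply_rate_slow(const std::unordered_map<std::string, double>& rates,
                       const std::string& currency, std::vector<double>& amounts) {
    double total = 0.0;
    for (double& a : amounts) {
        a *= rates.at(currency);      // lookup inside the hot loop
        total += a;
    }
    return total;
}

// After: do the lookup once, then reuse the value inside the loop.
double apply_rate_fast(const std::unordered_map<std::string, double>& rates,
                       const std::string& currency, std::vector<double>& amounts) {
    const double rate = rates.at(currency);   // hoisted out of the loop
    double total = 0.0;
    for (double& a : amounts) {
        a *= rate;
        total += a;
    }
    return total;
}
```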

Minimize IO. Try to design in a way that reduces the number of times you have to read or write, especially over a network connection.
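A hedged sketch of the batching idea in C++ (my addition); `send` is a stand-in for whatever expensive write or network call you are trying to minimize:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Stand-in transport: pretend every call is one expensive round trip (network write, disk seek, ...).
void send(const std::string& payload) {
    std::fputs(payload.c_str(), stdout);
}

// Before: N small messages cost N round trips.
void send_each(const std::vector<std::string>& messages) {
    for (const auto& m : messages)
        send(m + '\n');
}

// After: concatenate into one payload so the whole batch costs a single round trip.
void send_batched(const std::vector<std::string>& messages) {
    std::string batch;
    for (const auto& m : messages)
        batch += m + '\n';            // naive framing; a real protocol would delimit records properly
    send(batch);
}
```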

Reduce abstractions. The more layers of abstraction the code has to work through, the slower it is. Inside a critical loop, reduce abstractions (e.g. expose lower-level methods that avoid the extra code).

For projects with a user interface, spawning a new thread to perform slower tasks makes the application feel more responsive, even though it isn't.
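A bare-bones C++ sketch of the idea (my addition), with a console loop standing in for the UI's event loop:

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<bool> report_ready{false};

// Simulated slow task (e.g. generating a report) that would otherwise freeze the UI.
void build_report() {
    std::this_thread::sleep_for(std::chrono::seconds(3));
    report_ready = true;
}

int main() {
    std::thread worker(build_report);          // slow work moved off the UI/main thread

    // The main thread stays responsive: here it just keeps "repainting" a status line.
    while (!report_ready) {
        std::puts("UI still responsive...");
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
    worker.join();
    std::puts("Report ready.");
}
```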

You can usually trade space for speed. If there are calculations or other intensive operations, see whether you can precompute some of the information before entering the critical loop.
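For example, a precomputed lookup table built once outside the hot loop (a generic C++ sketch of my own, not tied to any particular application):

```cpp
#include <array>
#include <cmath>
#include <cstddef>

constexpr std::size_t kSteps = 360;

// Precompute an expensive function once, trading a little memory for many calls avoided at run time.
std::array<double, kSteps> make_sine_table() {
    std::array<double, kSteps> table{};
    for (std::size_t i = 0; i < kSteps; ++i)
        table[i] = std::sin(2.0 * 3.14159265358979323846 * i / kSteps);
    return table;
}

// Inside the critical loop: a table lookup replaces a std::sin call per sample.
double render(const std::array<double, kSteps>& sine, int samples) {
    double total = 0.0;
    for (int i = 0; i < samples; ++i)
        total += sine[i % kSteps];    // lookup instead of recomputing std::sin every iteration
    return total;
}
```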