One of the reasons for learning assembler is that it can sometimes be used to write code that performs better than code written in a high-level language, C in particular. However, I have also heard it said many times that although this is not entirely false, the cases where assembler can actually be used to generate better-performing code are extremely rare, and require expert knowledge of and experience with assembly.

This question does not even touch on the fact that assembler instructions are machine-specific and non-portable, or any of the other aspects of assembler. There are plenty of good reasons for knowing assembly besides this one, of course, but this is meant to be a specific question soliciting examples and data, not an extended discourse on assembler versus high-level languages.

Can anyone provide specific examples of cases where assembly will be faster than well-written C code using a modern compiler, and can you support that claim with profiling evidence? I am fairly confident these cases exist, but I really want to know exactly how esoteric they are, since it seems to be a point of some contention.


Current answer

Without giving any specific example or profiler evidence: you can write assembler that is better than the compiler's output when you know more than the compiler does.

In the general case, a modern C compiler knows much more about how to optimize the code in question: it knows how the processor pipeline works, it can try to reorder instructions quicker than a human can, and so on - it's basically the same as a computer being as good as or better than the best human player at board games and the like, simply because it can search the problem space faster than most humans can. Although you theoretically can perform as well as the computer in a specific case, you certainly can't do it at the same speed, making it infeasible for more than a few cases (i.e. the compiler will most certainly outperform you if you try to write more than a few routines in assembler).

On the other hand, there are cases where the compiler does not have as much information - primarily when working with various forms of external hardware, of which the compiler has no knowledge. The main example is probably device drivers, where assembler combined with a human's intimate knowledge of the hardware in question can yield better results than a C compiler can.

Others have mentioned special-purpose instructions, which is what I am talking about in the paragraph above - instructions of which the compiler may have limited or no knowledge at all, making it possible for a human to write faster code.

Other answers

Even though C is "close" to low-level manipulation of 8-, 16-, 32-, and 64-bit data, there are a few mathematical operations not supported by C that can often be performed elegantly in certain assembly instruction sets:

1. Fixed-point multiplication: The product of two 16-bit numbers is a 32-bit number. But the rules in C say that the product of two 16-bit numbers is a 16-bit number, and the product of two 32-bit numbers is a 32-bit number - the bottom half in both cases. If you want the top half of a 16x16 multiply or a 32x32 multiply, you have to play games with the compiler. The general method is to cast to a larger-than-necessary bit width, multiply, shift down, and cast back:

```c
int16_t x, y;   // int16_t is a typedef for "short"
// set x and y to something
int16_t prod = (int16_t)(((int32_t)x * y) >> 16);
```

In this case the compiler may be smart enough to know that you're really just trying to get the top half of a 16x16 multiply and do the right thing with the machine's native 16x16 multiply. Or it may be stupid and require a library call to do the 32x32 multiply that's way overkill because you only need 16 bits of the product - but the C standard doesn't give you any way to express yourself.

2. Certain bitshifting operations (rotation/carries):

```c
// 256-bit array shifted right in its entirety:
uint8_t x[32];
for (int i = 32; --i > 0; )
{
    x[i] = (x[i] >> 1) | (x[i-1] << 7);
}
x[0] >>= 1;
```

This is not too inelegant in C, but again, unless the compiler is smart enough to realize what you are doing, it's going to do a lot of "unnecessary" work. Many assembly instruction sets allow you to rotate or shift left/right with the result in the carry register, so you could accomplish the above in 34 instructions: load a pointer to the beginning of the array, clear the carry, and perform 32 8-bit right-shifts, using auto-increment on the pointer.

For another example, there are linear feedback shift registers (LFSR) that are elegantly performed in assembly: take a chunk of N bits (8, 16, 32, 64, 128, etc.), shift the whole thing right by 1 (see the above algorithm), then if the resulting carry is 1, XOR in a bit pattern that represents the polynomial (see the sketch below).
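As a rough illustration of that last point, here is a minimal C sketch of one step of a Galois-style LFSR; the 16-bit width, the polynomial mask, and the function name `lfsr_step` are illustrative choices, not something taken from the answer above:

```c
#include <stdint.h>

/* One step of a 16-bit Galois LFSR: shift right by 1 and, if the bit
 * that was shifted out is 1, XOR in the polynomial mask. In assembly
 * the shifted-out bit lands in the carry flag for free; in C it has to
 * be extracted explicitly before the shift. */
uint16_t lfsr_step(uint16_t state)
{
    uint16_t carry = state & 1u;   /* the bit about to be shifted out */
    state >>= 1;
    if (carry)
        state ^= 0xB400u;          /* illustrative polynomial mask */
    return state;
}
```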

That said, I wouldn't resort to these techniques unless there were serious performance constraints. As others have said, assembly code is much harder to document/debug/test/maintain than C code: the performance gain comes at some serious cost.

Edit: 3. Overflow detection is possible in assembly (it can't really be done in C), which makes some algorithms much easier.
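To make that overflow point concrete: GCC and Clang nowadays expose the CPU's overflow detection through the `__builtin_add_overflow` builtin, which is about as close as C gets to the assembly version. The sketch below is only an illustration of the contrast (the function name is mine); the portable fallback shows the extra comparisons plain C needs:

```c
#include <stdint.h>
#include <stdbool.h>

/* Signed addition with overflow detection. Portable C has to infer the
 * overflow from the operands; the GCC/Clang builtin maps more or less
 * directly onto the CPU's overflow flag. */
bool add_checked(int32_t a, int32_t b, int32_t *sum)
{
#if defined(__GNUC__)
    return __builtin_add_overflow(a, b, sum);  /* true on overflow */
#else
    /* Fallback: signed overflow occurred iff the operands share a sign
     * that the result does not. (Adds in unsigned arithmetic to avoid
     * undefined behavior; the cast back is implementation-defined.) */
    uint32_t us = (uint32_t)a + (uint32_t)b;
    *sum = (int32_t)us;
    return ((a >= 0) == (b >= 0)) && ((*sum >= 0) != (a >= 0));
#endif
}
```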

The question is a bit misleading, and the answer is there in your post itself. It is always possible to write an assembly solution for a particular problem that executes faster than anything generated by a compiler; the catch is that you need to be an expert in assembly to overcome the limitations of a compiler. An experienced assembly programmer can write programs in any HLL that perform faster than those written by an inexperienced one, but the truth remains that you can always write assembly programs that execute faster than what a compiler generates.

Here are a few examples from my personal experience:

- Access to instructions that are not accessible from C. For instance, many architectures (like x86-64, IA-64, DEC Alpha, and 64-bit MIPS or PowerPC) support a 64-bit by 64-bit multiplication producing a 128-bit result. GCC recently added an extension providing access to such instructions, but before that assembly was required. And access to this instruction can make a huge difference on 64-bit CPUs when implementing something like RSA - sometimes as much as a factor of 4 improvement in performance. (See the sketch after this list.)

- Access to CPU-specific flags. The one that has bitten me a lot is the carry flag; when doing a multiple-precision addition, if you don't have access to the CPU's carry bit you must instead compare the result to see if it overflowed, which takes 3-5 more instructions per limb; worse, these comparisons are quite serial in terms of data accesses, which kills performance on modern superscalar processors. When processing thousands of such integers in a row, being able to use addc is a huge win (there are superscalar issues with contention on the carry bit as well, but modern CPUs deal pretty well with it).

- SIMD. Even autovectorizing compilers can only handle relatively simple cases, so if you want good SIMD performance it's unfortunately often necessary to write the code directly. Of course you can use intrinsics instead of assembly, but once you're at the intrinsics level you're basically writing assembly anyway, just using the compiler as a register allocator and (nominally) instruction scheduler. (I tend to use intrinsics for SIMD simply because the compiler can generate the function prologues and whatnot for me, so I can use the same code on Linux, OS X, and Windows without having to deal with ABI issues like function calling conventions; but other than that the SSE intrinsics really aren't very nice - the Altivec ones seem better, though I don't have much experience with them.) As examples of things a (current-day) vectorizing compiler can't figure out, read about bitslicing AES or SIMD error correction - one could imagine a compiler that could analyze algorithms and generate such code, but it feels to me like such a smart compiler is at least 30 years away from existing (at best).
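For reference, the GCC extension alluded to in the first item is presumably the 128-bit integer type; here is a minimal sketch of pulling out the high half of a 64x64-bit multiply on a 64-bit target (the function name is mine):

```c
#include <stdint.h>

/* High 64 bits of a 64x64-bit multiply. With GCC/Clang on a 64-bit
 * target this typically compiles down to the single widening-multiply
 * instruction (e.g. x86-64 MUL, ARM64 UMULH) that standard C has no
 * way to request directly. */
uint64_t mulhi64(uint64_t a, uint64_t b)
{
    unsigned __int128 product = (unsigned __int128)a * b;
    return (uint64_t)(product >> 64);
}
```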

On the other hand, multicore machines and distributed systems have shifted many of the biggest performance wins in the other direction - get an extra 20% speedup writing your inner loops in assembly, or 300% by running them across multiple cores, or 10000% by running them across a cluster of machines. And of course high level optimizations (things like futures, memoization, etc.) are often much easier to do in a higher level language like ML or Scala than C or asm, and can often provide a much bigger performance win. So, as always, there are tradeoffs to be made.

I can't give specific examples because it was too many years ago, but there were plenty of cases where hand-written assembler could outperform any compiler. Reasons why:

- You could deviate from calling conventions, passing arguments in registers.
- You could carefully consider how to use registers, and avoid storing variables in memory.
- For things like jump tables, you could avoid having to bounds-check the index (see the sketch below).
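As a small sketch of the jump-table point, GNU C's labels-as-values extension can express an unchecked indirect jump, where a plain `switch` may guard its table with a bounds check; the opcodes here are made up, and the caller is assumed to guarantee the index is in range, exactly as hand-written assembly would:

```c
/* GNU C computed goto (GCC/Clang extension): an unchecked jump table.
 * The caller must guarantee 0 <= op <= 2; no bounds check is emitted. */
int dispatch(int op, int x)
{
    static void *table[] = { &&op_neg, &&op_double, &&op_square };
    goto *table[op];            /* direct indirect jump, no range test */
op_neg:    return -x;
op_double: return 2 * x;
op_square: return x * x;
}
```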

Basically, compilers do a pretty good job of optimizing, and that is nearly always "good enough", but in some situations (like graphics rendering) where you're paying dearly for every single cycle, you can take shortcuts because you know the code, where a compiler couldn't because it has to stay on the safe side.

In fact, I have heard of some graphics rendering code where a routine, such as a line-drawing or polygon-fill routine, actually generated a small block of machine code on the stack and executed it there, in order to avoid continual decision-making about line style, width, pattern, and so on.

That said, I want compilers to generate good assembly code for me without being too clever, and they mostly do. In fact, one of the reasons I hate Fortran is its habit of scrambling the code in order to "optimize" it, usually to no significant purpose.

Usually, when applications have performance problems, it is due to wasteful design. These days, I would never recommend assembler for performance unless the overall application had already been through thorough tuning, still was not fast enough, and was spending all its time in tight inner loops.

Added: I've seen plenty of applications written in assembly language, and the main speed advantage over a language like C, Pascal, Fortran, etc. was because the programmer was far more careful when coding in assembler. He or she is going to write roughly 100 lines of code a day, regardless of language, and in a compiled language that is going to equal 300 or 400 instructions.

The answer is very simple... someone who knows assembly really well (that is, who has the reference manual at hand and takes advantage of every little processor cache and pipeline feature, etc.) is guaranteed to be capable of producing much faster code than any compiler.

However, these days the difference just doesn't matter in the typical application.