One of the stated reasons for knowing assembler is that it can occasionally be used to write code that performs better than code written in a high-level language, C in particular. However, I have also heard it said many times that, although this is not entirely false, the cases where assembler can actually be used to produce better-performing code are extremely rare and require expert knowledge of, and experience with, assembly.

This question doesn't even touch on the fact that assembler instructions are machine-specific and non-portable, or on any other aspect of assembler. There are, of course, plenty of good reasons to know assembly besides this one, but this is meant to be a specific question soliciting examples and data, not an extended discourse on assembler versus higher-level languages.

Can anyone provide specific examples of cases where assembly is faster than well-written C code using a modern compiler, and can you support that claim with profiling evidence? I am fairly confident these cases exist, but I really want to know just how esoteric they are, since this seems to be a point of some contention.


Current answer

I'm surprised no one has said this. The strlen() function is much faster if written in assembly! In C, the best you can do is

int c;
for(c = 0; str[c] != '\0'; c++) {}

In assembly, you can speed it up considerably:

mov esi, offset string       ; esi walks the string
mov edi, esi                 ; edi keeps the start address
xor ecx, ecx                 ; cl = 0, the terminator we compare against

lp:
mov ax, word ptr [esi]       ; load two bytes at once (word, not byte)
cmp al, cl
je  end_1                    ; zero byte at [esi]
cmp ah, cl
je  end_2                    ; zero byte at [esi + 1]
mov bx, word ptr [esi + 2]   ; load the next two bytes
cmp bl, cl
je  end_3                    ; zero byte at [esi + 2]
cmp bh, cl
je  end_4                    ; zero byte at [esi + 3]
add esi, 4
jmp lp

end_4:
inc esi

end_3:
inc esi

end_2:
inc esi

end_1:                       ; esi now points at the terminating zero byte

mov ecx, esi
sub ecx, edi                 ; length = end - start

The length ends up in ecx. This compares 4 characters at a time, so it's 4 times faster. And consider: using the high-order words of eax and ebx as well, it would be 8 times faster than the previous C routine!

Other answers

I would say: when you are better than the compiler for a given set of instructions. So I don't think there is a generic answer.

I once worked with a guy who said, "If the compiler is too dumb to figure out what you're trying to do and can't optimize it, your compiler is broken and it's time to get a new one." I'm sure there are edge cases where assembly will beat your C code, but if you frequently find yourself reaching for assembler to "win" against your compiler, then your compiler is hosed.

The same goes for writing "optimized" SQL that tries to coerce the query planner into doing something. If you find yourself rearranging queries to get the planner to do what you want, your query planner is hosed; get a new one.

The short answer? Sometimes.

Technically every abstraction has a cost, and a programming language is an abstraction over how the CPU works. C, however, is very close to the metal. Years ago, I remember laughing out loud when I logged into my UNIX account and got the following fortune message (back when such things were popular):

The C Programming Language -- A language which combines the flexibility of assembly language with the power of assembly language.

It's funny because it's true: C is like portable assembly language.

It's worth noting that assembly language runs exactly as you write it. Between C and the assembly it generates, however, sits a compiler, and that matters a great deal, because how fast your C code is has an awful lot to do with how good your compiler is.

When gcc came onto the scene, one of the things that made it so popular was that it was often far better than the C compilers that shipped with many commercial UNIX flavours. Not only was it ANSI C (none of that K&R C rubbish), it was more robust and typically produced better (faster) code. Not always, but often.

I tell you all this because there is no blanket rule about the speed of C versus assembler, since there is no objective standard for C.

Likewise, assembler varies a lot depending on the processor you're running, your system spec, the instruction set you're using, and so on. Historically there have been two CPU architecture families: CISC and RISC. The biggest player in CISC was, and still is, the Intel x86 architecture (and instruction set). RISC dominated the UNIX world (MIPS6000, Alpha, Sparc and so on). CISC won the battle for hearts and minds.

Anyway, the popular wisdom when I was a younger developer was that hand-written x86 could often be much faster than C because of the way the architecture worked: its complexity benefited from a human driving it. RISC, on the other hand, seemed designed for compilers, so nobody (that I knew of) wrote Sparc assembler. I'm sure such people existed, but no doubt they've all since gone insane and been institutionalized.

Instruction sets are an important point even between processors in the same family. Certain Intel processors have extensions such as SSE through SSE4. AMD had their own SIMD instructions. The benefit of a programming language like C is that someone can write their library so that it's optimized for whichever processor you happen to be running on. That's hard work in assembler.
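
As a rough illustration of that point (not part of the original answer), here is a minimal run-time dispatch sketch in C. It assumes GCC or Clang on x86, where the __builtin_cpu_supports() helper reports CPU features; sum_sse4() and sum_generic() are hypothetical stand-ins for a tuned and a portable implementation.

#include <stddef.h>

/* Portable fallback. */
static float sum_generic(const float *v, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += v[i];
    return s;
}

/* In a real library this would use SSE4 intrinsics or hand-written
   assembly; here it is only a placeholder. */
static float sum_sse4(const float *v, size_t n) {
    return sum_generic(v, n);
}

float sum(const float *v, size_t n) {
    __builtin_cpu_init();                  /* populate the CPU feature flags */
    if (__builtin_cpu_supports("sse4.1"))  /* pick the best routine at run time */
        return sum_sse4(v, n);
    return sum_generic(v, n);
}

In assembler you would have to write and maintain each of those code paths, plus the dispatch itself, by hand.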

You can still make optimizations in assembler that a compiler can't, and a well-written assembly algorithm will be as fast as or faster than its C equivalent. The bigger question is: is it worth it?

Ultimately, though, assembler was a product of its time and was more popular back when CPU cycles were expensive. Nowadays a CPU that costs $5-10 to manufacture (an Intel Atom) can do pretty much anything anyone could want. The only real reason to write assembler these days is for low-level things like some parts of an operating system (even so, the vast majority of the Linux kernel is written in C), device drivers, possibly embedded devices (although C tends to dominate there too) and so on. Or just for kicks (which is somewhat masochistic).

A few examples from my personal experience:

Access to instructions that are not accessible from C. For instance, many architectures (like x86-64, IA-64, DEC Alpha, and 64-bit MIPS or PowerPC) support a 64 bit by 64 bit multiplication producing a 128 bit result. GCC recently added an extension providing access to such instructions, but before that assembly was required. And access to this instruction can make a huge difference on 64-bit CPUs when implementing something like RSA - sometimes as much as a factor of 4 improvement in performance.

Access to CPU-specific flags. The one that has bitten me a lot is the carry flag; when doing a multiple-precision addition, if you don't have access to the CPU carry bit you must instead compare the result to see if it overflowed, which takes 3-5 more instructions per limb; and worse, those comparisons are quite serial in terms of data accesses, which kills performance on modern superscalar processors. When processing thousands of such integers in a row, being able to use addc is a huge win (there are superscalar issues with contention on the carry bit as well, but modern CPUs deal pretty well with it).

SIMD. Even autovectorizing compilers can only handle relatively simple cases, so if you want good SIMD performance it's unfortunately often necessary to write the code directly. Of course you can use intrinsics instead of assembly, but once you're at the intrinsics level you're basically writing assembly anyway, just using the compiler as a register allocator and (nominally) instruction scheduler. (I tend to use intrinsics for SIMD simply because the compiler can generate the function prologues and whatnot for me, so I can use the same code on Linux, OS X, and Windows without having to deal with ABI issues like function calling conventions; other than that, the SSE intrinsics really aren't very nice - the Altivec ones seem better, though I don't have much experience with them.) As examples of things a (current-day) vectorizing compiler can't figure out, read about bitslicing AES or SIMD error correction - one could imagine a compiler that could analyze algorithms and generate such code, but it feels to me like such a smart compiler is at least 30 years away from existing (at best).
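
To make the first two examples concrete, here is a minimal C sketch (not from the original answer). It assumes GCC or Clang on a 64-bit x86 target, where the unsigned __int128 extension and the _addcarry_u64 intrinsic from <immintrin.h> are available; mul_64x64() and add_limbs() are illustrative names only.

#include <stdint.h>
#include <immintrin.h>   /* _addcarry_u64 (x86-64, GCC/Clang assumed) */

/* 64 x 64 -> 128 bit multiply through the unsigned __int128 extension.
   On x86-64 this typically compiles down to a single mul plus register
   moves; before such extensions existed, reaching that instruction from C
   meant inline assembly. */
typedef struct { uint64_t lo, hi; } u128;

u128 mul_64x64(uint64_t a, uint64_t b) {
    unsigned __int128 p = (unsigned __int128)a * b;  /* full 128-bit product */
    u128 r = { (uint64_t)p, (uint64_t)(p >> 64) };
    return r;
}

/* Multiple-precision addition that uses the hardware carry flag via the
   _addcarry_u64 intrinsic instead of comparing each limb for overflow. */
void add_limbs(unsigned long long *out, const unsigned long long *a,
               const unsigned long long *b, int n) {
    unsigned char carry = 0;
    for (int i = 0; i < n; i++)
        carry = _addcarry_u64(carry, a[i], b[i], &out[i]);
}

Whether the compiler actually keeps that carry chain in the flags register varies by compiler and version, which is exactly why this kind of code was (and sometimes still is) written in assembly.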

On the other hand, multicore machines and distributed systems have shifted many of the biggest performance wins in the other direction - get an extra 20% speedup writing your inner loops in assembly, or 300% by running them across multiple cores, or 10000% by running them across a cluster of machines. And of course high level optimizations (things like futures, memoization, etc) are often much easier to do in a higher level language like ML or Scala than C or asm, and often can provide a much bigger performance win. So, as always, there are tradeoffs to be made.

I think the general case where assembler is faster is when a smart assembly programmer looks at the compiler's output and says, "this is a critical path for performance and I can write it more efficiently," and then tweaks that assembly or rewrites it from scratch.