One of the stated reasons for knowing assembler is that, on occasion, it can be used to write code that performs better than code written in a higher-level language, C in particular. However, I've also heard it said many times that although that is not entirely false, the cases where assembler can actually be used to generate more performant code are extremely rare, and require expert knowledge of and experience with assembly.

This question doesn't even get into the fact that assembler instructions will be machine-specific and non-portable, or any of the other aspects of assembler. There are plenty of good reasons for knowing assembly besides this one, of course, but this is meant to be a specific question soliciting examples and data, not an extended discourse on assembler versus higher-level languages.

Can anyone provide some specific examples of cases where assembly will be faster than well-written C code using a modern compiler, and can you support that claim with profiling evidence? I am fairly confident these cases exist, but I really want to know exactly how esoteric they are, since this seems to be a point of some contention.


Current Answer

Only when using special-purpose instruction sets that the compiler doesn't support.

To maximize the computing power of a modern CPU with multiple pipelines and predictive branching, you would need to structure the assembly program in a way that makes it a) almost impossible for a human to write and b) even more impossible to maintain.

Also, better algorithms, data structures and memory management will give you at least an order of magnitude more performance than the micro-optimizations you can do in assembly.

Other Answers

I needed a bit-shifting operation on 192 or 256 bits on every interrupt, occurring every 50 microseconds.

It happens through a fixed map (a hardware constraint). Using C, it took around 10 microseconds. When I translated it to assembler, taking into account the specific features of this map, specific register caching, and using bit-oriented operations, it took less than 3.5 microseconds.
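The answer doesn't include code, but below is a minimal sketch of what the C side of such a routine might look like, assuming a plain 256-bit left shift across four 64-bit limbs; the poster's real routine also applied a fixed, hardware-dictated bit mapping, which is not reproduced here.

```c
#include <stdint.h>

/* Hypothetical sketch only: shift a 256-bit value, stored as four 64-bit
 * limbs in little-endian limb order, left by one bit. The answer's actual
 * routine also applied a fixed bit map imposed by the hardware, not shown. */
static void shift256_left1(uint64_t limb[4])
{
    uint64_t carry = 0;
    for (int i = 0; i < 4; i++) {
        uint64_t next_carry = limb[i] >> 63;  /* bit that moves into the next limb */
        limb[i] = (limb[i] << 1) | carry;
        carry = next_carry;
    }
}
```

A hand-written assembler version gains by keeping all four limbs in registers and operating on bits directly, which is the "register caching" and "bit-oriented operations" the answer refers to.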

This is very hard to answer specifically, because the question itself is unspecific: what exactly is a "modern compiler"?

In theory, virtually any manual assembler optimization could be done by a compiler as well; whether it actually is done cannot be said in general, only about a specific version of a specific compiler. Many probably require so much effort to determine whether they can be applied in a particular context without side effects that compiler writers simply don't bother with them.

The short answer? Sometimes.

Technically speaking, every abstraction has a cost, and a programming language is an abstraction of how the CPU works. C, however, is very close. Years ago I remember laughing out loud when I logged into my UNIX account and got the following fortune message (back when such things were popular):

The C Programming Language -- A language which combines the flexibility of assembly language with the power of assembly language.

It's funny because it's true: C is like portable assembly language.

It's worth noting that assembly language just runs however you write it. There is, however, a compiler between C and the assembly language it generates, and that is extremely important, because how fast your C code is has an awful lot to do with how good your compiler is.

When gcc came on the scene, one of the things that made it so popular was that it was often so much better than the C compilers that shipped with many commercial UNIX flavours. Not only was it ANSI C (none of that K&R C rubbish), it was more robust and typically produced better (faster) code. Not always, but often.

I tell you all this because there is no blanket rule about the speed of C versus assembler, since there is no objective standard for C.

Likewise, assembler varies a lot depending on the processor you're running, your system spec, the instruction set you're using and so on. Historically there have been two CPU architecture families: CISC and RISC. The biggest player in CISC was, and still is, the Intel x86 architecture (and instruction set). RISC dominated the UNIX world (MIPS6000, Alpha, Sparc and so on). CISC won the battle for hearts and minds.

Anyway, the popular wisdom when I was a younger developer was that hand-written x86 could often be much faster than C because of the way the architecture worked: it had a complexity that benefited from a human doing the work. RISC, on the other hand, seemed designed for compilers, so nobody (that I knew of) wrote, say, Sparc assembler. I'm sure such people existed, but no doubt they've long since gone insane and been institutionalized.

Instruction sets are an important point, even within the same family of processors. Certain Intel processors have extensions like SSE through SSE4. AMD had their own SIMD instructions. The benefit of a programming language like C is that someone can write their library so it is optimized for whichever processor you happen to be running on. That was hard work in assembler.
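As a rough illustration of the kind of per-processor dispatch such a C library can do at run time (a sketch only: the function names transform_sse4 and transform_generic are hypothetical, and __builtin_cpu_supports is a GCC/Clang extension on x86):

```c
#include <stddef.h>

/* Hypothetical implementations: one hand-tuned SIMD path, one portable
 * fallback. Only the declarations are shown in this sketch. */
void transform_sse4(float *data, size_t n);
void transform_generic(float *data, size_t n);

/* Pick the fastest available path based on the CPU we are actually
 * running on, using the GCC/Clang x86 built-ins. */
void transform(float *data, size_t n)
{
    __builtin_cpu_init();
    if (__builtin_cpu_supports("sse4.2"))
        transform_sse4(data, n);
    else
        transform_generic(data, n);
}
```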

There are still optimizations you can make in assembler that no compiler could make, and a well-written assembler algorithm will be as fast as or faster than its C equivalent. The bigger question is: is it worth it?

Ultimately though assembler was a product of its time and was more popular at a time when CPU cycles were expensive. Nowadays a CPU that costs $5-10 to manufacture (Intel Atom) can do pretty much anything anyone could want. The only real reason to write assembler these days is for low level things like some parts of an operating system (even so the vast majority of the Linux kernel is written in C), device drivers, possibly embedded devices (although C tends to dominate there too) and so on. Or just for kicks (which is somewhat masochistic).

Here are a few examples from my own experience:

- Access to instructions that are not accessible from C. For instance, many architectures (like x86-64, IA-64, DEC Alpha, and 64-bit MIPS or PowerPC) support a 64 bit by 64 bit multiplication producing a 128 bit result. GCC recently added an extension providing access to such instructions, but before that assembly was required. And access to this instruction can make a huge difference on 64-bit CPUs when implementing something like RSA - sometimes as much as a factor of 4 improvement in performance.
- Access to CPU-specific flags. The one that has bitten me a lot is the carry flag; when doing a multiple-precision addition, if you don't have access to the CPU carry bit you must instead compare the result to see if it overflowed, which takes 3-5 more instructions per limb; worse, those extra instructions are quite serial in terms of data accesses, which kills performance on modern superscalar processors. When processing thousands of such integers in a row, being able to use addc is a huge win (there are superscalar issues with contention on the carry bit as well, but modern CPUs deal pretty well with it). (See the sketch after this list for both of these.)
- SIMD. Even autovectorizing compilers can only do relatively simple cases, so if you want good SIMD performance it's unfortunately often necessary to write the code directly. Of course you can use intrinsics instead of assembly, but once you're at the intrinsics level you're basically writing assembly anyway, just using the compiler as a register allocator and (nominally) instruction scheduler. (I tend to use intrinsics for SIMD simply because the compiler can generate the function prologues and whatnot for me, so I can use the same code on Linux, OS X, and Windows without having to deal with ABI issues like function calling conventions, but other than that the SSE intrinsics really aren't very nice - the Altivec ones seem better, though I don't have much experience with them.) As examples of things a (current day) vectorizing compiler can't figure out, read about bitslicing AES or SIMD error correction - one could imagine a compiler that could analyze algorithms and generate such code, but it feels to me like such a smart compiler is at least 30 years away from existing (at best).
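To make the first two items concrete, here is a hedged sketch of how both can now be reached from C on x86-64 with GCC or Clang: the 64x64 -> 128-bit multiply via the unsigned __int128 extension the answer mentions, and a carry-chain addition via the _addcarry_u64 intrinsic, which compiles down to add/adc. The four-limb width is arbitrary and chosen only for illustration.

```c
#include <stdint.h>
#include <immintrin.h>   /* _addcarry_u64 (x86-64, GCC/Clang; MSVC uses <intrin.h>) */

/* 64 x 64 -> 128 bit multiply using the compiler extension the answer
 * mentions; before it existed, getting both halves required assembly. */
static void mul64x64(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    unsigned __int128 p = (unsigned __int128)a * b;
    *lo = (uint64_t)p;
    *hi = (uint64_t)(p >> 64);
}

/* Multiple-precision addition that keeps the carry in the CPU carry flag
 * (add/adc) instead of re-deriving it with extra compares per limb. */
static unsigned char add256(unsigned long long r[4],
                            const unsigned long long a[4],
                            const unsigned long long b[4])
{
    unsigned char c = 0;
    for (int i = 0; i < 4; i++)
        c = _addcarry_u64(c, a[i], b[i], &r[i]);
    return c;   /* carry out of the most significant limb */
}
```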

On the other hand, multicore machines and distributed systems have shifted many of the biggest performance wins in the other direction - get an extra 20% speedup writing your inner loops in assembly, or 300% by running them across multiple cores, or 10000% by running them across a cluster of machines. And of course high level optimizations (things like futures, memoization, etc) are often much easier to do in a higher level language like ML or Scala than C or asm, and often can provide a much bigger performance win. So, as always, there are tradeoffs to be made.

Without giving any specific example or profiler evidence: you can write better assembler than the compiler when you know more than the compiler does.

In the general case, a modern C compiler knows much more about how to optimize the code in question: it knows how the processor pipeline works, it can try to reorder instructions quicker than a human can, and so on - it's basically the same as a computer being as good as or better than the best human player for boardgames, etc. simply because it can make searches within the problem space faster than most humans. Although you theoretically can perform as well as the computer in a specific case, you certainly can't do it at the same speed, making it infeasible for more than a few cases (i.e. the compiler will most certainly outperform you if you try to write more than a few routines in assembler).

On the other hand, there are cases where the compiler does not have as much information, primarily when working with various forms of external hardware of which the compiler has no knowledge. The main example is probably device drivers, where assembler combined with a human's intimate knowledge of the hardware in question can yield better results than a C compiler could.

Others have mentioned special-purpose instructions, which is what I'm talking about in the paragraph above: instructions of which the compiler might have limited or no knowledge at all, making it possible for a human to write faster code.
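As a minimal illustration (not from the answer itself): one common way to reach such an instruction without leaving C is GCC-style inline assembly, here wrapping the x86 rdtsc time-stamp-counter read, which a compiler will not emit on its own. Driver code wraps hardware-specific instructions in much the same way.

```c
#include <stdint.h>

/* Minimal sketch: wrap an instruction the compiler will never generate by
 * itself (x86 rdtsc) in GCC-style inline assembly so it can be called from C.
 * rdtsc returns the low 32 bits in EAX and the high 32 bits in EDX. */
static inline uint64_t read_tsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}
```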