One of the reasons for knowing assembler is that, on occasion, it can be used to write code that performs better than what a high-level language (C in particular) would produce. However, I have also heard it said many times that although this is not entirely false, the cases where assembler can actually be used to generate better-performing code are extremely rare, and they require expertise and experience with assembly.

This question does not even touch on the fact that assembler instructions are machine-specific and non-portable, or on any other aspect of assembler. There are plenty of good reasons for knowing assembly besides this one, of course, but this is meant to be a specific question asking for examples and data, not an extended discussion of assembler versus high-level languages.

Can anyone provide specific examples of cases where assembly will be faster than well-written C code using a modern compiler, and can you support that claim with profiling evidence? I am fairly confident these cases exist, but I really want to know exactly how esoteric they are, since this seems to be a point of some contention.


Current answer

Here are a few examples from my personal experience:

Access to instructions that are not accessible from C. For instance, many architectures (like x86-64, IA-64, DEC Alpha, and 64-bit MIPS or PowerPC) support a 64 bit by 64 bit multiplication producing a 128 bit result. GCC recently added an extension providing access to such instructions, but before that assembly was required. And access to this instruction can make a huge difference on 64-bit CPUs when implementing something like RSA - sometimes as much as a factor of 4 improvement in performance.

Access to CPU-specific flags. The one that has bitten me a lot is the carry flag; when doing a multiple-precision addition, if you don't have access to the CPU carry bit you must instead compare the result to see if it overflowed, which takes 3-5 more instructions per limb; worse, those comparisons are quite serial in terms of data accesses, which kills performance on modern superscalar processors. When processing thousands of such integers in a row, being able to use addc is a huge win (there are superscalar issues with contention on the carry bit as well, but modern CPUs deal pretty well with it). A small C sketch of these first two points follows below.

SIMD. Even autovectorizing compilers can only handle relatively simple cases, so if you want good SIMD performance it is unfortunately often necessary to write the code directly. Of course you can use intrinsics instead of assembly, but once you are at the intrinsics level you are basically writing assembly anyway, just using the compiler as a register allocator and (nominally) an instruction scheduler. (I tend to use intrinsics for SIMD simply because the compiler can generate the function prologues and whatnot for me, so I can use the same code on Linux, OS X, and Windows without having to deal with ABI issues like function calling conventions; other than that, the SSE intrinsics really aren't very nice - the AltiVec ones seem better, though I don't have much experience with them.) As examples of things a (current-day) vectorizing compiler can't figure out, read about bitslicing AES or SIMD error correction - one could imagine a compiler that could analyze algorithms and generate such code, but it feels to me like such a smart compiler is at least 30 years away from existing (at best).
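To make the first two points more concrete, here is a minimal C sketch of my own (not the answerer's code), assuming a 64-bit GCC or Clang target; the function names are illustrative only.

#include <stdint.h>

typedef struct { uint64_t lo, hi; } u128;

/* Portable C: a 64x64 -> 128-bit multiply rebuilt from 32-bit halves,
   which costs four multiplies plus several adds. */
static u128 mul64x64_portable(uint64_t a, uint64_t b)
{
    uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
    uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;
    uint64_t p0 = a_lo * b_lo, p1 = a_lo * b_hi;
    uint64_t p2 = a_hi * b_lo, p3 = a_hi * b_hi;
    uint64_t mid = p1 + (p0 >> 32) + (uint32_t)p2;
    return (u128){ (mid << 32) | (uint32_t)p0,
                   p3 + (mid >> 32) + (p2 >> 32) };
}

/* With the GCC __int128 extension mentioned above, the same product maps
   to the single widening-multiply instruction the hardware already has. */
static u128 mul64x64_ext(uint64_t a, uint64_t b)
{
    unsigned __int128 p = (unsigned __int128)a * b;
    return (u128){ (uint64_t)p, (uint64_t)(p >> 64) };
}

/* Carry flag: without access to it, a multiple-precision add has to detect
   each overflow with extra compares per limb - the work that addc avoids. */
static void add_n(uint64_t *r, const uint64_t *a, const uint64_t *b, int n)
{
    uint64_t carry = 0;
    for (int i = 0; i < n; i++) {
        uint64_t s = a[i] + carry;
        uint64_t c1 = (s < carry);      /* did adding the carry wrap around? */
        r[i] = s + b[i];
        carry = c1 | (r[i] < s);        /* did adding b[i] wrap around? */
    }
}

The comparisons that stand in for the carry bit in add_n are exactly the serial data dependencies the answer complains about.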

On the other hand, multicore machines and distributed systems have shifted many of the biggest performance wins in the other direction - get an extra 20% speedup writing your inner loops in assembly, or 300% by running them across multiple cores, or 10000% by running them across a cluster of machines. And of course high level optimizations (things like futures, memoization, etc) are often much easier to do in a higher level language like ML or Scala than C or asm, and often can provide a much bigger performance win. So, as always, there are tradeoffs to be made.

Other answers

Many years ago I was teaching someone to program in C. The exercise was to rotate a graphic through 90 degrees. He came up with a solution that took several minutes to complete, mainly because he was using multiplies and divides and the like.

I showed him how to recast the problem using bit shifts, and on the non-optimizing compiler he had, the processing time came down to about 30 seconds.
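As a hedged reconstruction (not the original exercise), the kind of rewrite involved might look like this: when the image width is a power of two, the address arithmetic row * WIDTH + col can be done with a shift instead of a multiply, which is exactly the sort of substitution a non-optimizing compiler of that era would not do for you. The 512-pixel width and the function name are assumptions.

#define WIDTH_LOG2 9                        /* assumed 512-pixel-wide image */

unsigned pixel_index(unsigned row, unsigned col)
{
    return (row << WIDTH_LOG2) + col;       /* instead of row * 512 + col */
}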

I had just gotten an optimizing compiler, and the same code rotated the graphic in under 5 seconds. I looked at the assembly code the compiler was generating, and from what I saw I decided right there that my days of writing assembler were over.

I needed to perform a shift operation on 192 or 256 bits on every interrupt, which occurred every 50 microseconds.

It worked through a fixed mapping (a hardware constraint). Using C, it took around 10 microseconds. When I translated it to assembler, taking into account the specific characteristics of this mapping, caching values in specific registers, and using bit-oriented operations, it took less than 3.5 microseconds.
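For scale, the plain-C core of such an operation might look like the sketch below (mine, not the poster's code): a 256-bit value kept in four 64-bit limbs, shifted left by one bit. The fixed hardware mapping the poster mentions is not reproduced here.

#include <stdint.h>

/* Sketch: shift a 256-bit value, stored as four 64-bit limbs, left by one.
   limb[0] is the least-significant word. */
void shl1_256(uint64_t limb[4])
{
    for (int i = 3; i > 0; i--)
        limb[i] = (limb[i] << 1) | (limb[i - 1] >> 63);
    limb[0] <<= 1;
}

On many CPUs the same operation in assembler is just a chain of shift and rotate-through-carry instructions, one per word, with no explicit bit recombination.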

C often has to do things that look unnecessary from an assembly coder's point of view, simply because the C standard says so.

Integer promotion, for example. If you want to shift a char variable in C, one would usually expect the resulting code to be nothing more than a single-bit shift.

The standard, however, forces the compiler to do a sign extension to int before the shift and to truncate the result back to char afterwards, which can complicate the code depending on the target processor's architecture.
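A minimal example of that promotion rule (my sketch, not taken from the answer):

#include <stdint.h>

/* 'c' is promoted to int before the shift (a sign extension in the case of
   a plain signed char), and the assignment truncates the result back to
   8 bits. On an 8-bit or 16-bit target the compiler may have to emit extra
   widening and masking work to preserve these semantics, even though
   conceptually this is just a one-byte shift. */
uint8_t half(uint8_t c)
{
    return (uint8_t)(c >> 1);    /* computed as (int)c >> 1, then truncated */
}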

Actually you can build large-scale programs in large model mode; segments may be restricted to 64 KB of code, but you can write many segments. People argue against ASM because it is an old language and we don't need to conserve memory anymore - but if that were the case, why are we packing our PCs with memory? The only flaw I can find with ASM is that it is more or less processor-specific, so most programs written for the Intel architecture most likely would not run on an AMD architecture.

As for C being faster than ASM: there is no language faster than ASM, and ASM can do many things at the processor level that C and other HLLs cannot. ASM is a difficult language to learn, but once you learn it, no HLL can translate your intent better than you can. If you could only see some of the things HLLs do to your code, and understand what they are doing, you would wonder why more people don't use ASM and why assemblers are no longer being updated (for general public use, anyway). So no, C is not faster than ASM. Even experienced C++ programmers still use and write chunks of code in ASM added to their C++ code for speed.

Other languages that some people consider obsolete or no good are sometimes underrated too. For instance, Photoshop was written in Pascal/ASM (the first release of the source has been submitted to the technical history museum), and Paint Shop Pro is still written in Python, TCL and ASM - a common denominator of these fast and capable image processors is ASM. Although Photoshop may have upgraded to Delphi by now, it is still Pascal, and any speed problems are coming from Pascal; but that is because these days we care about the way programs look rather than what they do. I would like to make a Photoshop clone in pure ASM, which I have been working on, and it is coming along rather well. No code, interpret, arrange, rewrite cycle - just code and go, process complete.

Chiming in with a bit of history.

When I was younger (the 1970s), in my experience assembly mattered more for the size of the code than for its speed.

If a module in a high-level language came to 1300 bytes of code but an assembly version of that module was 300 bytes, that extra 1K bytes mattered a great deal when you were trying to fit an application into 16K or 32K of memory.

Compilers weren't very good back then.

In old-school Fortran,

X = (Y - Z)
IF (X .LT. 0) THEN
 ... do something
ENDIF

the compilers of that era would execute a SUBTRACT instruction for X followed by a TEST instruction. In assembler, you would simply check the condition code (LT zero, zero, GT zero) right after the subtract.

None of this is an issue for modern systems and compilers.

I do think it is still important to understand what the compiler is doing. When you write code in a high-level language, you should understand what allows or prevents the compiler from unrolling a loop.

And to understand that when the compiler does "branchy" things, it uses pipelining and look-ahead computation that includes the conditional.
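One common example of what allows or prevents the compiler from doing this (my sketch, assuming a C99 compiler where restrict is available): possible pointer aliasing forces the compiler to be conservative, while restrict gives it permission to unroll and vectorize freely.

void scale_may_alias(float *dst, const float *src, float k, int n)
{
    /* dst and src might overlap, so each store could change a later load;
       the compiler has to be cautious about unrolling or vectorizing this. */
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

void scale_no_alias(float * restrict dst, const float * restrict src,
                    float k, int n)
{
    /* 'restrict' promises the buffers do not overlap, which removes the
       main obstacle to unrolling and vectorization here. */
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}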

Assembler is still needed when doing something a high-level language does not allow, such as reading or writing processor-specific registers.
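A typical small example of that last point (my sketch, assuming GCC/Clang inline assembly on x86-64): reading the time-stamp counter, a processor-specific register that standard C has no expression for.

#include <stdint.h>

static inline uint64_t read_tsc(void)
{
    uint32_t lo, hi;
    /* RDTSC returns the 64-bit counter split across EAX (low) and EDX (high). */
    __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}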

But for the most part, the ordinary programmer no longer needs it, beyond a basic understanding of how the code compiles and executes.