One of the stated reasons for knowing assembler is that, on occasion, it can be used to write code that performs better than what you would get from a high-level language, C in particular. However, I have also heard it said many times that, although this is not entirely false, the cases where assembler can actually be used to generate better-performing code are extremely rare and require expert knowledge of, and experience with, assembly.

This question does not even get into the fact that assembler instructions are machine-specific and non-portable, or any other aspect of assembler. There are plenty of good reasons for knowing assembly besides this one, of course, but this is meant to be a specific question soliciting examples and data, not an extended discourse on assembler versus higher-level languages.

Can anyone provide specific examples of cases where assembly will be faster than well-written C code using a modern compiler, and can you support that claim with profiling evidence? I am fairly confident these cases exist, but I really want to know how esoteric they are, since it seems to be a point of some contention.


Current answer

I think the general case when assembler is faster is when a smart assembly programmer looks at the compiler's output and says "this is a critical path for performance and I can write this to be more efficient", and then that person tweaks that assembler or rewrites it from scratch.

Other answers

I have read all the answers (more than 30) and didn't find a simple reason: assembler is faster than C if you have read and practiced the Intel® 64 and IA-32 Architectures Optimization Reference Manual, so the reason assembly may be slower is that people who write such slow assembly didn't read the Optimization Manual.

In the good old days of the Intel 80286, each instruction executed in a fixed number of CPU cycles. However, since the Pentium Pro, released in 1995, Intel processors have been superscalar, using complex pipelining: Out-of-Order Execution and Register Renaming. Before that, on the Pentium, produced in 1993, there were the U and V pipelines: dual pipelines that could execute two simple instructions in one clock cycle if they didn't depend on one another. Still, this was nothing compared with the Out-of-Order Execution and Register Renaming that appeared in the Pentium Pro, and the approach introduced there is practically the same on the most recent Intel processors today.

Let me explain Out-of-Order Execution in a few words. The fastest code is code whose instructions do not depend on previous results; e.g., you should always clear whole registers (for example with movzx) to remove the dependency on the previous values of the registers you are working with, so they can be renamed internally by the CPU and instructions can execute in parallel or in a different order. On some processors, false dependencies may also slow things down, like the false dependency of inc/dec on the flags on the Pentium 4, so you may wish to use add eax, 1 instead of inc eax to remove the dependency on the previous state of the flags.
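To make the dependency idea concrete, here is a rough C sketch (the function names are mine, just for illustration): summing an array with four independent accumulators shortens the dependency chain, so an out-of-order core can overlap the additions, while with a single accumulator every addition has to wait for the previous one.

#include <stddef.h>

/* Sketch: one long dependency chain vs. four independent ones. */
long sum_single(const int *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];                /* each add depends on the previous add */
    return s;
}

long sum_split(const int *a, size_t n) {
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {  /* four chains the CPU can run in parallel */
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)            /* leftover elements */
        s0 += a[i];
    return s0 + s1 + s2 + s3;
}

Whether sum_split actually wins depends on the compiler and target; the point is only to show what "removing dependencies on previous results" looks like.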

If time permits, you can read more about Out-of-Order Execution and Register Renaming; there is plenty of information about it on the Internet.

There are also many other essential issues, like branch prediction, the number of load and store units, the number of execution units that execute micro-ops, memory cache coherence protocols, etc., but the crucial thing to consider is Out-of-Order Execution. Most people are simply not aware of it, so they write their assembly programs as if for the 80286, expecting their instructions to take a fixed time to execute regardless of context. Meanwhile, C compilers are aware of Out-of-Order Execution and generate the code accordingly. That's why the code of such uninformed people is slower; but if you become knowledgeable, your code will be faster.

Besides Out-of-Order Execution, there are many other optimization tricks and techniques. Please read the Optimization Manual mentioned above :-)

However, assembly language has its own drawbacks when it comes to optimization. According to Peter Cordes (see the comment below), some of the optimizations compilers do would be unmaintainable for large code-bases in hand-written assembly. For example, if you write in assembly, you need to completely change an inline function (an assembly macro) when it is inlined into a function that calls it with some arguments being constants. A C compiler, on the other hand, does this far more easily, inlining the same code in different ways into different call sites. There is a limit to what you can do with assembly macros, so to get the same benefit you would have to manually optimize the same logic in each place to match the constants and the available registers you have.
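As a small C sketch of the per-call-site specialization described above (scale_and_clamp, caller_a and caller_b are made-up names): when the compiler inlines the helper with factor being a constant, it can fold the multiply differently at each call site, whereas an assembly macro would have to be re-tuned by hand in each place.

/* Illustrative only: a tiny helper the compiler can specialize per call site. */
static inline int scale_and_clamp(int x, int factor, int limit) {
    int y = x * factor;
    return (y > limit) ? limit : y;
}

int caller_a(int x) {
    return scale_and_clamp(x, 1, 255);    /* factor == 1: the multiply can disappear */
}

int caller_b(int x) {
    return scale_and_clamp(x, 8, 1023);   /* factor == 8: the multiply can become a shift */
}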

Without giving any specific example or profiler evidence: you can write better assembler than the compiler when you know more than the compiler does.

In the general case, a modern C compiler knows much more about how to optimize the code in question: it knows how the processor pipeline works, it can try to reorder instructions faster than a human can, and so on. It's basically the same as a computer being as good as or better than the best human player at board games, simply because it can search the problem space faster than most humans. Although you could theoretically perform as well as the computer in a specific case, you certainly can't do it at the same speed, making it infeasible for more than a few cases (i.e. the compiler will most certainly outperform you if you try to write more than a few routines in assembler).

On the other hand, there are cases where the compiler doesn't have that much information, mostly when working with different forms of external hardware of which the compiler has no knowledge. The primary example is probably device drivers, where assembler combined with a human's intimate knowledge of the hardware in question can yield better results than a C compiler can.

Others have mentioned special-purpose instructions, which is what I am talking about in the paragraph above: instructions of which the compiler may have limited or no knowledge at all, making it possible for a human to write faster code.
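A common concrete case (my example, not the answer's) is population count: a portable C loop versus asking for the dedicated instruction through a compiler builtin. __builtin_popcount is a GCC/Clang builtin; whether it becomes a single popcnt instruction depends on the target options (e.g. -mpopcnt).

#include <stdint.h>

/* Portable bit-counting loop. */
int popcount_loop(uint32_t v) {
    int n = 0;
    while (v) {
        v &= v - 1;   /* clear the lowest set bit */
        n++;
    }
    return n;
}

/* GCC/Clang builtin; with the right target options this can compile
   to a single popcnt instruction. */
int popcount_builtin(uint32_t v) {
    return __builtin_popcount(v);
}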

I'm surprised no one has said this. The strlen() function is much faster if written in assembly! In C, the best you can do is

int c;
for (c = 0; str[c] != '\0'; c++) {}   // scans one byte per iteration; c ends up holding the length

In assembly, you can speed it up considerably:

mov esi, offset string
mov edi, esi
xor ecx, ecx                ; cl = 0, the byte value we are looking for

lp:
mov ax, word ptr [esi]      ; load bytes 0 and 1 of this 4-byte block
cmp al, cl
je  end_1
cmp ah, cl
je  end_2
mov bx, word ptr [esi + 2]  ; load bytes 2 and 3
cmp bl, cl
je  end_3
cmp bh, cl
je  end_4
add esi, 4
jmp lp

end_4:
inc esi

end_3:
inc esi

end_2:
inc esi

end_1:
mov ecx, esi
sub ecx, edi                ; length = address of the NUL byte - start of the string

The length is in ecx. It compares 4 characters at a time, so it's roughly 4 times faster. And think that, using the high-order words of eax and ebx, it could be 8 times faster than the previous C routine!
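For comparison, the same four-bytes-at-a-time idea can be sketched in C with the well-known zero-byte bit trick; this sketch is mine, not part of the answer, it assumes the string pointer is 4-byte aligned and, like the assembly above, it reads the rest of the final word past the terminator (strict-aliasing niceties ignored).

#include <stddef.h>
#include <stdint.h>

/* Sketch of a word-at-a-time strlen (alignment assumed, aliasing ignored). */
size_t strlen4(const char *s) {
    const uint32_t *w = (const uint32_t *)s;
    for (;;) {
        uint32_t v = *w++;
        /* nonzero exactly when one of the 4 bytes in v is zero */
        if ((v - 0x01010101u) & ~v & 0x80808080u) {
            const char *p = (const char *)(w - 1);
            while (*p) p++;               /* locate the exact NUL byte */
            return (size_t)(p - s);
        }
    }
}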

I would say: when you are better than the compiler for a given set of instructions. So I don't think there is a generic answer.

Longpoke, there is just one limitation: time. When you don't have the resources to optimize every single change to the code and to spend your time allocating registers, optimizing away a few spills and whatnot, the compiler will win every single time. You make your modification to the code, recompile and measure. Repeat if necessary.

Also, you can do a lot on the high-level side. And inspecting the resulting assembly may give the impression that the code is rubbish while in practice it runs faster than you would think. Example:

int y = data[i];
// do some stuff here..
call_function(y, ...);

The compiler will read the data, push it onto the stack (spill) and later read it back from the stack and pass it as an argument. Sounds crappy? It might actually be very effective latency compensation and result in a faster runtime.

// the "optimized" version
call_function(data[i], ...);  // not so optimized after all..

The idea of the "optimized" version is that we reduce register pressure and avoid spilling. But in truth, the "crappy" version was faster!

Looking at the generated assembly, just counting the instructions and concluding "more instructions, therefore slower" would be a misjudgment.

The thing to note here is that many assembly experts think they know a lot but actually know very little. The rules also change from architecture to architecture; there is no silver-bullet x86 code, for example, that is always the fastest. These days it is better to go by rules of thumb:

- Memory is slow
- Cache is fast
- Try to use the cache better (a small sketch follows below)
- How often are you going to miss the cache? Do you have a latency compensation plan?
- You can execute 10-100 ALU/FPU/SSE instructions for the cost of a single cache miss
- Application architecture is important..
- .. but it doesn't help when the problem isn't in the architecture
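A tiny C illustration of the cache rule (my example, not the answer's): both functions below add up the same matrix, but the row-order walk touches memory sequentially while the column-order walk jumps a whole row between accesses, which typically misses the cache far more often.

#define N 1024

/* Same arithmetic, very different cache behaviour. */
long sum_row_major(int m[N][N]) {
    long s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += m[i][j];        /* sequential accesses, cache friendly */
    return s;
}

long sum_col_major(int m[N][N]) {
    long s = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += m[i][j];        /* stride of N ints per access, cache hostile */
    return s;
}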

Also, trusting too much that the compiler will magically transform badly-thought-out C/C++ code into "theoretically optimal" code is wishful thinking. You have to know the compiler and the toolchain you use if you care about "performance" at this low level.

Compilers in C/C++ are generally not very good at reordering sub-expressions because, for starters, functions have side effects. Functional languages don't suffer from this caveat but don't fit the current ecosystem that well. There are compiler options that allow relaxed precision rules, letting the compiler/linker/code generator change the order of operations.
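For example (a sketch assuming GCC or Clang): a plain float reduction like the one below has to keep the additions in source order by default, because floating-point addition is not associative; with relaxed-precision options such as -ffast-math or -fassociative-math the compiler is allowed to reassociate the sum into partial sums and vectorize it, possibly changing the rounded result.

#include <stddef.h>

/* Compiled normally, the adds happen strictly in source order.
   With e.g. gcc -O2 -ffast-math the compiler may split this into
   several partial sums and use SIMD. */
float sum_floats(const float *a, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}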

This topic is a bit of a dead end; for most people it is irrelevant, and the rest already know what they are doing anyway.

It all boils down to this: "understand what you are doing", which is a bit different from knowing what you are doing.