Once upon a time, to write an x86 assembly program, for example, you would have instructions such as "load the EDX register with the value 5", "increment the EDX register", and so on.
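
For concreteness, the kind of single-threaded sequence meant here might look like this in NASM-style syntax (purely illustrative):

        mov     edx, 5          ; "load the EDX register with the value 5"
        inc     edx             ; "increment the EDX register"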

With modern CPUs that have 4 cores (or even more), at the machine-code level does it just look like there are 4 separate CPUs (i.e. are there just 4 distinct "EDX" registers)? If so, when you say "increment the EDX register", what determines which CPU's EDX register gets incremented? Is there a "CPU context" or "thread" concept in x86 assembler now?

How does communication/synchronization between the cores work?

If you were writing an operating system, what mechanism exposed by the hardware allows you to schedule execution on different cores? Is it some special privileged instruction(s)?

If you were writing an optimizing compiler or bytecode VM for a multicore CPU, what would you need to know specifically about, say, x86 to make it generate code that runs efficiently across all the cores?

What changes have been made to x86 machine code to support multi-core functionality?


Current answer

This isn't done in machine instructions at all; the cores pretend to be distinct CPUs and don't have any special capabilities for talking to one another. There are two ways they communicate:

- They share the physical address space. The hardware handles cache coherency, so one CPU writes to a memory address that another one reads.
- They share an APIC (programmable interrupt controller). This is memory mapped into the physical address space, and one processor can use it to control the others, turn them on or off, send them interrupts, and so on.
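
A minimal sketch of the first mechanism (NASM syntax; the label shared_counter is just an illustrative name): two cores communicate through an agreed-upon location in the shared address space. Cache coherency makes the update visible to every core, and the LOCK prefix makes the read-modify-write atomic:

            section .data
        shared_counter: dd 0                    ; a location every core agrees on

            section .text
        bump:
            lock inc dword [shared_counter]     ; atomic across cores thanks to LOCK
            ret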

http://www.cheesecake.org/sac/smp.html is a good reference, even if the URL is a bit silly.

Other answers

Assembly code translates into machine code that will be executed on one core. If you want it to be multithreaded, you will have to use operating-system primitives to start this code several times on different processors, or to start different pieces of code on different cores; each core will then execute a separate thread. Each thread only sees the single core it is currently executing on.
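
As a rough sketch of what "operating-system primitives" means in practice — this assumes x86-64 Linux, the System V ABI, NASM syntax, and linking with gcc -no-pie -pthread; the names main and worker are just illustrative — the same routine is handed to the OS twice, and the scheduler is then free to place each resulting thread on a different core:

            global  main
            extern  pthread_create, pthread_join

            section .text
        worker:                          ; runs once per thread, each with its own stack and registers
            mov     rax, 5               ; each core/thread has its own RAX
            inc     rax
            ret

        main:
            sub     rsp, 24              ; room for two pthread_t handles (keeps rsp 16-byte aligned)

            lea     rdi, [rsp]           ; &tid0
            xor     esi, esi             ; attr = NULL
            lea     rdx, [rel worker]    ; start routine
            xor     ecx, ecx             ; arg = NULL
            call    pthread_create

            lea     rdi, [rsp+8]         ; &tid1
            xor     esi, esi
            lea     rdx, [rel worker]
            xor     ecx, ecx
            call    pthread_create

            mov     rdi, [rsp]           ; wait for both threads
            xor     esi, esi
            call    pthread_join
            mov     rdi, [rsp+8]
            xor     esi, esi
            call    pthread_join

            add     rsp, 24
            xor     eax, eax
            ret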

As I understand it, each "core" is a complete processor with its own register set. Basically, the BIOS starts you off with one core running, and then the operating system can "start" the other cores by initializing them and pointing them at the code to run, and so on.
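
For reference, here is a rough sketch of the classic INIT/STARTUP-IPI sequence an OS can use to wake an additional core (an "application processor"). It assumes the local APIC sits at its default MMIO base of 0xFEE00000 and that real-mode startup code has already been copied to physical address 0x8000; the required delays and error handling are omitted, and real kernels follow the full procedure in Intel's manuals:

        APIC_ICR_LOW   equ 0xFEE00300              ; local APIC interrupt command register
        APIC_ICR_HIGH  equ 0xFEE00310

        wake_ap:                                   ; edi = APIC ID of the core to wake
            mov     eax, edi
            shl     eax, 24
            mov     [APIC_ICR_HIGH], eax           ; select the destination core
            mov     dword [APIC_ICR_LOW], 0x00004500   ; INIT IPI
            ; ... wait ~10 ms here ...
            mov     [APIC_ICR_HIGH], eax
            mov     dword [APIC_ICR_LOW], 0x00004608   ; STARTUP IPI, vector 0x08
            ; the woken core starts executing in real mode at physical address 0x8000
            ret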

Synchronization is done by the operating system. Generally, each processor is running a different process for the OS, so the operating system's multithreading functionality is in charge of deciding which process gets to touch which memory, and what to do in the case of a memory collision.

Each core executes from a different memory area. Your operating system will point a core at your program and that core will execute it. Your program will not be aware that there is more than one core, or which core it is executing on.

There are also no additional instructions available only to the operating system. These cores are identical to single-core chips. Each core runs a part of the operating system that handles communication through common memory areas used for information interchange, in order to find the next memory area to execute.

This is a simplification, but it gives you the basic idea of how it's done. Embedded.com has a lot of information about multicores and multiprocessors... this topic gets complicated very quickly!

What has been added on every multiprocessing-capable architecture compared to the single-processor variants that came before them are instructions to synchronize between cores. Also, you have instructions to deal with cache coherency, flushing buffers, and similar low-level operations an OS has to deal with. In the case of simultaneous multithreaded architectures like IBM POWER6, IBM Cell, Sun Niagara, and Intel "Hyperthreading", you also tend to see new instructions to prioritize between threads (like setting priorities and explicitly yielding the processor when there is nothing to do).

But the basic single-threaded semantics are the same; you just add extra facilities to handle synchronization and communication with other cores.
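
As a small sketch of what those extra facilities look like on x86 (NASM syntax; names are illustrative): a spinlock built on an atomic exchange, using the PAUSE hint that was added specifically for spin-wait loops on hyperthreaded and multi-core parts:

            section .data
        lock_var:   dd 0                 ; 0 = free, 1 = held

            section .text
        acquire_lock:
            mov     eax, 1
        .spin:
            xchg    eax, [lock_var]      ; XCHG with memory is implicitly LOCKed
            test    eax, eax
            jz      .done                ; old value was 0: we now own the lock
            pause                        ; spin-wait hint for the sibling hardware thread
            jmp     .spin
        .done:
            ret

        release_lock:
            mov     dword [lock_var], 0  ; a plain store suffices on x86 (TSO store ordering)
            ret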

The main difference between a single- and a multi-threaded application is that the former has one stack and the latter has one for each thread. Code is generated somewhat differently since the compiler will assume that the data and stack segment registers (ds and ss) are not equal. This means that indirection through the ebp and esp registers that default to the ss register won't also default to ds (because ds!=ss). Conversely, indirection through the other registers which default to ds won't default to ss.

The threads share everything else, including data and code areas. They also share library routines, so make sure those are thread-safe. A procedure that sorts an area in RAM can be multi-threaded to speed things up: the threads will then be accessing, comparing, and ordering data in the same physical memory area and executing the same code, but using different local variables to control their respective parts of the sort. This, of course, is because the threads have different stacks, where the local variables live.

This type of programming requires careful tuning of the code so that inter-core data collisions (in caches and RAM) are reduced, which in turn results in code that is faster with two or more threads than with just one. Of course, untuned code will often be faster with one processor than with two or more.

Debugging is more challenging, because the standard "int 3" breakpoint will not be applicable when you want to interrupt a specific thread and not all of them. Debug-register breakpoints do not solve this problem either, unless you can set them on the specific processor executing the specific thread you want to interrupt.
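
A tiny sketch of the data-layout side of that tuning (NASM syntax; the 64-byte cache-line size and the names are assumptions): give each thread's hot counter its own cache line so the cores don't bounce the same line between their caches:

            section .data
            align   64
        thread0_counter: dd 0            ; each counter gets a 64-byte line to itself
            times 60 db 0                ; pad out the rest of the line
            align   64
        thread1_counter: dd 0
            times 60 db 0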

Other multithreaded code may involve different threads running in different parts of the program. That type of programming does not require the same kind of tuning and is therefore much easier to learn.