Once upon a time, to write an x86 assembly program, for example, you would have instructions saying "load the EDX register with the value 5", "increment the EDX register", and so on.

With modern CPUs that have 4 cores (or even more), at the machine-code level does it just look like there are 4 separate CPUs (i.e. are there just 4 distinct "EDX" registers)? If so, when you say "increment the EDX register", what determines which CPU's EDX register is incremented? Is there a "CPU context" or "thread" concept in x86 assembler now?

How does communication/synchronization between the cores work?

If you were writing an operating system, what mechanism exposed via the hardware allows you to schedule execution on different cores? Is it some special privileged instruction(s)?

If you were writing an optimizing compiler/bytecode VM for a multicore CPU, what would you need to know specifically about, say, x86 to make it generate code that runs efficiently across all the cores?

What changes have been made to x86 machine code to support multi-core functionality?


Current answer

Assembly code will translate into machine code that is executed on one core. If you want it to be multithreaded, you will have to use operating system primitives to start this code several times on different processors, or to start different pieces of code on different cores; each core will then execute a separate thread. Each thread will only see the one core it is currently executing on.
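
As a minimal sketch of what "use operating system primitives" can look like in practice, assuming a POSIX system with Pthreads (the worker function and the thread count are illustrative choices, not part of the answer):

```c
/* Build with: gcc -pthread demo.c */
#include <pthread.h>
#include <stdio.h>

/* Every thread runs this same machine code; the OS is free to place each
 * thread on a different core, where it uses that core's own register set
 * (its own EDX, its own instruction pointer, its own stack pointer, ...). */
static void *worker(void *arg)
{
    long id = (long)arg;
    long local = 0;              /* private to this thread/core */
    for (int i = 0; i < 1000000; i++)
        local++;
    printf("thread %ld counted to %ld\n", id, local);
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);  /* OS primitive */
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```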

Other answers

The Unofficial SMP FAQ

Once upon a time, to write an x86 assembly program, for example, you would have instructions saying "load the EDX register with the value 5", "increment the EDX register", and so on. With modern CPUs that have 4 cores (or even more), at the machine-code level does it just look like there are 4 separate CPUs (i.e. are there just 4 distinct "EDX" registers)?

Exactly right. There are 4 sets of registers, including 4 separate instruction pointers.

If so, when you say "increment the EDX register", what determines which CPU's EDX register is incremented?

The CPU that executes the instruction, naturally. Think of it as 4 entirely different microprocessors that simply share the same memory.

Is there a "CPU context" or "thread" concept in x86 assembler now?

No. The assembler just translates the instructions like it always did. No changes there.

How does the communication/synchronization between the cores work?

Since they share the same memory, it is mostly a matter of program logic. Although there is now an inter-processor interrupt mechanism, it is not strictly necessary and did not exist in the first dual-CPU x86 systems.
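
As a concrete illustration of "program logic over shared memory", here is a small sketch of a spinlock built from C11 atomics; on x86 the atomic test-and-set compiles down to something like a LOCK-prefixed exchange, so the cores coordinate purely through ordinary shared-memory traffic, no interrupts involved (the names are mine, not from the answer):

```c
#include <stdatomic.h>

/* One flag in memory that all cores can see. */
static atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock(void)
{
    /* The atomic test-and-set is an ordinary memory access as far as the
     * instruction set is concerned; cache coherency makes the new value
     * visible to every other core. */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;  /* busy-wait until another core releases the lock */
}

void spin_unlock(void)
{
    atomic_flag_clear_explicit(&lock, memory_order_release);
}
```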

If you were writing an operating system, what mechanism exposed via the hardware allows you to schedule execution on different cores?

The scheduler actually doesn't change, except that it has to be slightly more careful about critical sections and the types of locks used. Before SMP, kernel code would eventually call the scheduler, which would look at the run queue and pick a process to run as the next thread. (Processes look a lot like threads to the kernel.) The SMP kernel runs the exact same code, one thread at a time; it's just that the critical-section locking now needs to be SMP-safe, to be sure two cores can't accidentally pick the same PID.
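
Purely as an illustration of that last sentence, here is a C sketch of a run queue popped under a spinlock, so that two cores running the same scheduler code cannot dequeue the same task; the structure and function names are invented for this example and are not taken from any real kernel:

```c
#include <stdatomic.h>
#include <stddef.h>

struct task { int pid; struct task *next; };

static struct task *run_queue;                  /* shared by all cores */
static atomic_flag rq_lock = ATOMIC_FLAG_INIT;  /* must be SMP-safe    */

/* Called independently on each core; the lock guarantees that two cores
 * entering at the same time dequeue two different tasks. */
struct task *schedule_next(void)
{
    struct task *t;

    while (atomic_flag_test_and_set_explicit(&rq_lock, memory_order_acquire))
        ;                                       /* spin: critical section */

    t = run_queue;
    if (t)
        run_queue = t->next;

    atomic_flag_clear_explicit(&rq_lock, memory_order_release);
    return t;                                   /* NULL means: run the idle task */
}
```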

Is it some special privileged instruction(s)?

No. The cores all run in the same memory, using the same old instructions.

If you were writing an optimizing compiler/bytecode VM for a multicore CPU, what would you need to know specifically about, say, x86 to make it generate code that runs efficiently across all the cores?

You run the same code as before. It is the Unix or Windows kernel that had to change.

You could summarize my question as: "What changes have been made to x86 machine code to support multi-core functionality?"

Nothing was necessary. The first SMP systems used exactly the same instruction set as uniprocessors. Since then the x86 architecture has evolved a great deal, and plenty of new instructions have been added to make things faster, but none of them were necessary for SMP.

For more information, see the Intel Multiprocessor Specification.

Update: all the follow-up questions can be answered by just completely accepting that an n-way multicore CPU is almost[1] exactly the same thing as n separate processors that just share the same memory.[2]

There was an important question not asked: how is a program written to run on more than one core for more performance? And the answer is: it is written using a thread library like Pthreads. Some thread libraries use "green threads" that are not visible to the OS, and those won't get separate cores, but as long as the thread library uses kernel thread features then your threaded program will automatically be multicore.

[1] For backwards compatibility, only the first core starts up at reset, and a few driver-type things need to be done to fire up the remaining ones.
[2] They also share all the peripherals, naturally.
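
A small sketch of the kernel-threads point above, assuming Linux with glibc: the program asks how many cores are online and creates that many Pthreads, and the kernel spreads them across the cores on its own; sched_getcpu() is only there to show where each thread ended up:

```c
#define _GNU_SOURCE          /* for sched_getcpu() on glibc */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    (void)arg;
    /* Report which core the kernel happened to schedule this thread on. */
    printf("running on core %d\n", sched_getcpu());
    return NULL;
}

int main(void)
{
    long n = sysconf(_SC_NPROCESSORS_ONLN);   /* cores currently online */
    pthread_t tid[64];
    if (n > 64) n = 64;

    for (long i = 0; i < n; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (long i = 0; i < n; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```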

I think the questioner probably wants to make a program run faster by having multiple cores work on it in parallel. That's what I would want anyway, but all the answers leave me no wiser. However, I think I get this: you can't synchronize different threads down to instruction-execution-time accuracy, so you can't get 4 cores to do a multiply on four different array elements in parallel to speed up processing by 4:1. Rather, you have to look at your program as comprising major blocks that execute sequentially, like:

1. Do an FFT on some data
2. Put the result into a matrix and find its eigenvalues and eigenvectors
3. Sort the latter by eigenvalue
4. Repeat from step 1 with new data

What you can do is run step 2 on the results of step 1 while running step 1 in a different core on new data, and run step 3 on the results of step 2 in a different core while step 2 is working on the next data set and step 1 on the one after that. You can do this in Compaq Visual Fortran and in Intel Fortran (which is an evolution of CVF) by writing three separate programs/subroutines for the three steps; instead of one "calling" the next, each calls an API to start its thread. They can share data by using COMMON, which will be COMMON data memory to all threads. You have to study the manual till your head hurts and experiment until you get it to work, but I have succeeded at least once.
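
Here is the same pipeline idea sketched in C with Pthreads rather than the CVF/Intel Fortran thread API the answer describes; the stage bodies are placeholders for the FFT, eigenvalue, and sort steps, and the one-slot mailbox stands in for the shared COMMON data:

```c
/* Build with: gcc -pthread pipeline.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* A one-slot "mailbox" used to pass a work item from one stage to the next. */
struct mailbox {
    pthread_mutex_t mu;
    pthread_cond_t  cv;
    bool            full;
    double          item;
};

static void put(struct mailbox *m, double v)
{
    pthread_mutex_lock(&m->mu);
    while (m->full)
        pthread_cond_wait(&m->cv, &m->mu);
    m->item = v;
    m->full = true;
    pthread_cond_broadcast(&m->cv);
    pthread_mutex_unlock(&m->mu);
}

static double get(struct mailbox *m)
{
    pthread_mutex_lock(&m->mu);
    while (!m->full)
        pthread_cond_wait(&m->cv, &m->mu);
    double v = m->item;
    m->full = false;
    pthread_cond_broadcast(&m->cv);
    pthread_mutex_unlock(&m->mu);
    return v;
}

static struct mailbox a = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false, 0 };
static struct mailbox b = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false, 0 };

#define N 8  /* number of data sets to push through the pipeline */

/* Placeholder "step 1": produce a result and hand it to step 2. */
static void *stage1(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++)
        put(&a, i * 2.0);
    return NULL;
}

/* Placeholder "step 2": transform step 1's output and hand it to step 3. */
static void *stage2(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++)
        put(&b, get(&a) + 1.0);
    return NULL;
}

/* Placeholder "step 3": consume the final results. */
static void *stage3(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++)
        printf("result %d: %f\n", i, get(&b));
    return NULL;
}

int main(void)
{
    pthread_t t1, t2, t3;
    /* The three stages run concurrently, each free to occupy its own core. */
    pthread_create(&t1, NULL, stage1, NULL);
    pthread_create(&t2, NULL, stage2, NULL);
    pthread_create(&t3, NULL, stage3, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_join(t3, NULL);
    return 0;
}
```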

As I understand it, each "core" is a complete processor with its own register set. Basically, the BIOS starts you off with one core running, and the operating system can then "start" the other cores by initializing them and pointing them at the code to run, and so on.

Synchronization is done by the OS. Generally, each processor is running a different process for the OS, so the multithreading functionality of the operating system is in charge of deciding which process gets to touch which memory, and what to do in the case of a memory collision.

This isn't done in machine instructions at all; the cores pretend to be distinct CPUs and don't have any special capabilities for talking to one another. There are two ways they communicate:

They share the physical address space. The hardware handles cache coherency, so one CPU can write to a memory address that another CPU reads.

They share an APIC (programmable interrupt controller). This is memory mapped into the physical address space and can be used by one processor to control the others: turn them on or off, send them interrupts, and so on.
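
As a hedged sketch of the second point: in legacy xAPIC mode the local APIC registers are memory-mapped (by default at physical address 0xFEE00000), and a core sends an inter-processor interrupt by writing the target's APIC ID and a vector into the Interrupt Command Register. The snippet assumes kernel-mode code with that region already mapped uncached, which is an assumption of the example, not something stated in the answer:

```c
#include <stdint.h>

/* Default physical base of the local APIC in legacy xAPIC mode; a real
 * kernel would read/verify this via the IA32_APIC_BASE MSR and map the
 * region uncached. Here it is assumed to be identity-mapped already. */
#define LAPIC_BASE   0xFEE00000u
#define ICR_LOW      0x300   /* Interrupt Command Register, bits 31:0  */
#define ICR_HIGH     0x310   /* Interrupt Command Register, bits 63:32 */

static inline void lapic_write(uint32_t reg, uint32_t val)
{
    *(volatile uint32_t *)(uintptr_t)(LAPIC_BASE + reg) = val;
}

static inline uint32_t lapic_read(uint32_t reg)
{
    return *(volatile uint32_t *)(uintptr_t)(LAPIC_BASE + reg);
}

/* Send a plain ("fixed" delivery mode) inter-processor interrupt with the
 * given vector to the core whose local APIC ID is dest_apic_id. */
void send_ipi(uint8_t dest_apic_id, uint8_t vector)
{
    lapic_write(ICR_HIGH, (uint32_t)dest_apic_id << 24); /* destination field */
    lapic_write(ICR_LOW, vector);                        /* writing the low half sends it */
    while (lapic_read(ICR_LOW) & (1u << 12))
        ;  /* wait for the "delivery pending" bit to clear */
}
```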

http://www.cheesecake.org/sac/smp.html is a good reference, even if the URL is a bit silly.