Once upon a time, to write an x86 assembly program, for example, you would have instructions saying "load the EDX register with the value 5", "increment the EDX register", and so on.

With modern CPUs that have 4 cores (or even more), at the machine code level does it just look like there are 4 separate CPUs (i.e. are there just 4 distinct "EDX" registers)? If so, when you say "increment the EDX register", what determines which CPU's EDX register gets incremented? Is there a "CPU context" or "thread" concept in x86 assembler now?

How does communication/synchronization between the cores work?

If you were writing an operating system, what mechanism exposed by the hardware allows you to schedule execution on different cores? Is it some special privileged instruction(s)?

If you were writing an optimizing compiler/bytecode VM for a multicore CPU, what would you need to know specifically about, say, x86 to make it generate code that runs efficiently across all the cores?

What changes have been made to x86 machine code to support multi-core functionality?


Current answer

As I understand it, each "core" is a complete processor, with its own register set. Basically, the BIOS starts you off with one core running, and then the operating system can "start" the other cores by initializing them and pointing them at the code to run, etc.

Synchronization is done by the operating system. Generally, each processor is running a different process for the OS, so the multi-threading functionality of the operating system is in charge of deciding which process gets to touch which memory, and what to do in the case of a memory collision.

Other answers

This isn't done in machine instructions at all; the cores pretend to be distinct CPUs and don't have any special capabilities for talking to one another. There are two ways they communicate:

- They share the physical address space. The hardware handles cache coherence, so one CPU can write to a memory address that another CPU reads.
- They share an APIC (programmable interrupt controller). This is memory mapped into the physical address space, and can be used by one processor to control the others, turn them on or off, send interrupts, etc.

http://www.cheesecake.org/sac/smp.html is a good reference, even if the URL is a bit silly.

Each core executes from a different area of memory. Your operating system will point a core at your program, and that core will run your program. Your program will not be aware that there is more than one core, or which core it is executing on.

There are also no extra instructions only available to the operating system. The cores are identical to single-core chips. Each core runs a part of the operating system that handles communication with the common memory areas used for information interchange, in order to find the next area of memory to execute.
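As a loose userland illustration of that "common memory area for information interchange" (my own sketch, not something this answer prescribes, with all names hypothetical): several "cores", here just threads, pull the next piece of work from one shared, mutex-protected memory area.

#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

std::mutex queue_mutex;
std::queue<int> work_items;   // the shared "what to execute next" area

void core_loop(int core_id) {
    for (;;) {
        int item;
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            if (work_items.empty()) return;   // nothing left to do
            item = work_items.front();
            work_items.pop();
        }
        std::printf("core %d runs work item %d\n", core_id, item);
    }
}

int main() {
    for (int i = 0; i < 8; ++i) work_items.push(i);
    std::vector<std::thread> cores;
    for (int c = 0; c < 2; ++c) cores.emplace_back(core_loop, c);
    for (auto &t : cores) t.join();
}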

This is a simplification, but it gives you the basic idea of how it is done. There is a lot more about multicores and multiprocessors on Embedded.com... this topic gets complicated very quickly!

Intel x86 minimal runnable baremetal example

Runnable baremetal example with all the required boilerplate. All the major parts are covered below.

Tested on Ubuntu 15.10 QEMU 2.3.0 and on real hardware (Lenovo ThinkPad T400).

The Intel Manual Volume 3 System Programming Guide - 325384-056US September 2015 covers SMP in chapters 8, 9 and 10.

Table 8-1. "Broadcast INIT-SIPI-SIPI Sequence and Choice of Timeouts" contains an example that basically just works:

MOV ESI, ICR_LOW    ; Load address of ICR low dword into ESI.
MOV EAX, 000C4500H  ; Load ICR encoding for broadcast INIT IPI
                    ; to all APs into EAX.
MOV [ESI], EAX      ; Broadcast INIT IPI to all APs
; 10-millisecond delay loop.
MOV EAX, 000C46XXH  ; Load ICR encoding for broadcast SIPI IPI
                    ; to all APs into EAX, where xx is the vector computed in step 10.
MOV [ESI], EAX      ; Broadcast SIPI IPI to all APs
; 200-microsecond delay loop
MOV [ESI], EAX      ; Broadcast second SIPI IPI to all APs
                    ; Waits for the timer interrupt until the timer expires

On that code:

- Most operating systems will make most of those operations impossible from ring 3 (user programs), so you need to write your own kernel to play freely with it: a userland Linux program will not work.

- At first, a single processor runs, called the bootstrap processor (BSP). It must wake up the other ones (called Application Processors (APs)) through special interrupts called Inter Processor Interrupts (IPIs).

- Those interrupts can be sent by programming the Advanced Programmable Interrupt Controller (APIC) through the Interrupt Command Register (ICR). The format of the ICR is documented at: 10.6 "ISSUING INTERPROCESSOR INTERRUPTS". The IPI happens as soon as we write to the ICR.

- ICR_LOW is defined at 8.4.4 "MP Initialization Example" as:

  ICR_LOW EQU 0FEE00300H

  The magic value 0FEE00300 is the memory address of the ICR, as documented in Table 10-1 "Local APIC Register Address Map".

- The simplest possible method is used in the example: it sets up the ICR to send broadcast IPIs, which are delivered to all other processors except the current one. But it is also possible, and recommended by some, to get information about the processors through special data structures set up by the BIOS, like ACPI tables or Intel's MP configuration table, and only wake up the ones you need one by one.

- XX in 000C46XXH encodes the address of the first instruction that the processor will execute as:

  CS = XX * 0x100
  IP = 0

  Remember that CS multiplies addresses by 0x10, so the actual memory address of the first instruction is:

  XX * 0x1000

  So if, for example, XX == 1, the processor will start at 0x1000.

  We must then ensure that there is 16-bit real mode code to be run at that memory location, e.g. with:

  cld
  mov $init_len, %ecx
  mov $init, %esi
  mov $0x1000, %edi
  rep movsb

  .code16
  init:
      xor %ax, %ax
      mov %ax, %ds
      /* Do stuff. */
      hlt
  .equ init_len, . - init

  Using a linker script is another possibility.

- The delay loops are an annoying part to get working: there is no super simple way to do such sleeps precisely. Possible methods include:

  - PIT (used in my example)
  - HPET
  - calibrate the time of a busy loop with the above, and use it instead

  Related: How to display a number on the screen and sleep for one second with DOS x86 assembly?

- I think the initial processor needs to be in protected mode for this to work, as we write to address 0FEE00300H, which is too high for 16 bits.

- To communicate between processors, we can use a spinlock on the main process, and modify the lock from the second core. We should ensure that memory write back is done, e.g. through wbinvd.
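To make the XX arithmetic above concrete, here is a small illustrative C++ snippet (my addition, not part of the Intel manual example) that computes the broadcast SIPI ICR value and the resulting AP entry address from a vector byte, following the encoding just described:

#include <cstdio>

int main() {
    unsigned vector = 0x01;                   // the "XX" byte in 000C46XXH
    unsigned icr_low = 0x000C4600u | vector;  // low dword of the ICR for the broadcast SIPI
    unsigned entry = vector * 0x1000u;        // CS = XX * 0x100 and IP = 0, i.e. XX * 0x1000
    std::printf("ICR low = 0x%08X, AP entry point = 0x%X\n", icr_low, entry);
}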

Shared state between processors

8.7.1 "State of the Logical Processors" says:

The following features are part of the architectural state of logical processors within Intel 64 or IA-32 processors supporting Intel Hyper-Threading Technology. The features can be subdivided into three groups:

- Duplicated for each logical processor
- Shared by logical processors in a physical processor
- Shared or duplicated, depending on the implementation

The following features are duplicated for each logical processor:

- General purpose registers (EAX, EBX, ECX, EDX, ESI, EDI, ESP, and EBP)
- Segment registers (CS, DS, SS, ES, FS, and GS)
- EFLAGS and EIP registers. Note that the CS and EIP/RIP registers for each logical processor point to the instruction stream for the thread being executed by the logical processor.
- x87 FPU registers (ST0 through ST7, status word, control word, tag word, data operand pointer, and instruction pointer)
- MMX registers (MM0 through MM7)
- XMM registers (XMM0 through XMM7) and the MXCSR register
- Control registers and system table pointer registers (GDTR, LDTR, IDTR, task register)
- Debug registers (DR0, DR1, DR2, DR3, DR6, DR7) and the debug control MSRs
- Machine check global status (IA32_MCG_STATUS) and machine check capability (IA32_MCG_CAP) MSRs
- Thermal clock modulation and ACPI Power management control MSRs
- Time stamp counter MSRs
- Most of the other MSR registers, including the page attribute table (PAT). See the exceptions below.
- Local APIC registers.
- Additional general purpose registers (R8-R15), XMM registers (XMM8-XMM15), control register, IA32_EFER on Intel 64 processors.

The following features are shared by logical processors:

- Memory type range registers (MTRRs)

Whether the following features are shared or duplicated is implementation-specific:

- IA32_MISC_ENABLE MSR (MSR address 1A0H)
- Machine check architecture (MCA) MSRs (except for the IA32_MCG_STATUS and IA32_MCG_CAP MSRs)
- Performance monitoring control and counter MSRs

Cache sharing is discussed at:

- How are cache memories shared in multicore Intel CPUs?
- http://stackoverflow.com/questions/4802565/multiple-threads-and-cpu-cache
- Can multiple CPUs / cores access the same RAM at the same time?

Intel Hyperthreading has greater cache and pipeline sharing than separate cores: https://superuser.com/questions/133082/hyper-threading-and-dual-core-whats-the-difference/995858#995858

Linux kernel 4.2

The main initialization seems to be at arch/x86/kernel/smpboot.c.

ARM minimal runnable baremetal example

Here I provide a minimal runnable ARMv8 aarch64 example for QEMU:

.global mystart
mystart:
    /* Reset spinlock. */
    mov x0, #0
    ldr x1, =spinlock
    str x0, [x1]

    /* Read cpu id into x1.
     * TODO: cores beyond 4th?
     * Mnemonic: Main Processor ID Register
     */
    mrs x1, mpidr_el1
    ands x1, x1, 3
    beq cpu0_only
cpu1_only:
    /* Only CPU 1 reaches this point and sets the spinlock. */
    mov x0, 1
    ldr x1, =spinlock
    str x0, [x1]
    /* Ensure that CPU 0 sees the write right now.
     * Optional, but could save some useless CPU 1 loops.
     */
    dmb sy
    /* Wake up CPU 0 if it is sleeping on wfe.
     * Optional, but could save power on a real system.
     */
    sev
cpu1_sleep_forever:
    /* Hint CPU 1 to enter low power mode.
     * Optional, but could save power on a real system.
     */
    wfe
    b cpu1_sleep_forever
cpu0_only:
    /* Only CPU 0 reaches this point. */

    /* Wake up CPU 1 from initial sleep!
     * See: https://github.com/cirosantilli/linux-kernel-module-cheat#psci
     */
    /* PSCI function identifier: CPU_ON. */
    ldr w0, =0xc4000003
    /* Argument 1: target_cpu */
    mov x1, 1
    /* Argument 2: entry_point_address */
    ldr x2, =cpu1_only
    /* Argument 3: context_id */
    mov x3, 0
    /* Unused hvc args: the Linux kernel zeroes them,
     * but I don't think it is required.
     */
    hvc 0

spinlock_start:
    ldr x0, spinlock
    /* Hint CPU 0 to enter low power mode. */
    wfe
    cbz x0, spinlock_start

    /* Semihost exit. */
    mov x1, 0x26
    movk x1, 2, lsl 16
    str x1, [sp, 0]
    mov x0, 0
    str x0, [sp, 8]
    mov x1, sp
    mov w0, 0x18
    hlt 0xf000

spinlock:
    .skip 8

GitHub upstream.

Assemble and run:

aarch64-linux-gnu-gcc \
  -mcpu=cortex-a57 \
  -nostdlib \
  -nostartfiles \
  -Wl,--section-start=.text=0x40000000 \
  -Wl,-N \
  -o aarch64.elf \
  -T link.ld \
  aarch64.S \
;
qemu-system-aarch64 \
  -machine virt \
  -cpu cortex-a57 \
  -d in_asm \
  -kernel aarch64.elf \
  -nographic \
  -semihosting \
  -smp 2 \
;

In this example, we put CPU 0 in a spinlock loop, and it only exits when CPU 1 releases the spinlock.

After the spinlock, CPU 0 then does a semihost exit call, which makes QEMU quit.

If you start QEMU with just one CPU with -smp 1, then the simulation just hangs forever on the spinlock.
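As a rough userland analogy of the wfe/sev handshake (my own sketch, not the baremetal code above): waiting and notifying map loosely onto a condition variable in portable C++, with the boolean playing the role of the spinlock word.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;   // roughly plays the role of wfe/sev here
bool released = false;        // roughly plays the role of the spinlock word

int main() {
    std::thread cpu1([] {
        std::lock_guard<std::mutex> lock(m);
        released = true;      // analogous to CPU 1 storing 1 to the spinlock
        cv.notify_one();      // analogous to sev waking up the core sleeping in wfe
    });
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return released; });  // analogous to the wfe + ldr + cbz loop
    std::cout << "CPU 0 observed the release" << std::endl;
    cpu1.join();
}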

CPU 1 is woken up with the PSCI interface; more details at: ARM: Start/Wakeup/Bringup the other CPU cores/APs and pass execution start address?

The upstream version also has a few tweaks to make it work on gem5, so you can experiment with performance characteristics as well.

I haven't tested it on real hardware, so I'm not sure how portable this is. The following Raspberry Pi bibliography might be of interest:

- https://github.com/bztsrc/raspi3-tutorial/tree/a3f069b794aeebef633dbe1af3610784d55a0efa/02_multicorec
- https://github.com/dwelch67/raspberrypi/tree/a09771a1d5a0b53d8e7a461948dc226c5467aeec/multi00
- https://github.com/LdB-ECM/Raspberry-Pi/blob/3b628a2c113b3997ffdb408db03093b2953e4961/Multicore/SmartStart64.S
- https://github.com/LdB-ECM/Raspberry-Pi/blob/3b628a2c113b3997ffdb408db03093b2953e4961/Multicore/SmartStart32.S

This document provides some guidance on using ARM synchronization primitives, which you can then use to do fun things with multiple cores: http://infocenter.arm.com/help/topic/com.arm.doc.dht0008a/DHT0008A_arm_synchronization_primitives.pdf

Tested on Ubuntu 18.10, GCC 8.2.0, Binutils 2.31.1, QEMU 2.12.0.

Next steps for more convenient programmability

The previous examples wake up the secondary CPU and do basic memory synchronization with dedicated instructions, which is a good start.

But to make multicore systems easy to program, e.g. like POSIX pthreads, you would also need to go into the following more involved topics:

- Set up interrupts and run a timer that periodically decides which thread will run now. This is known as preemptive multithreading. Such a system also needs to save and restore thread registers as they are started and stopped. (A toy sketch of the "pick the next thread" step follows after this list.)

  It is also possible to have non-preemptive multitasking systems, but those might require you to modify your code so that every thread yields (e.g. with a pthread_yield implementation), and it becomes harder to balance workloads.

  Here are some simplistic bare metal timer examples: x86 PIT

- Deal with memory conflicts. Notably, each thread will need a unique stack if you want to code in C or other high level languages.

  You could just limit threads to a fixed maximum stack size, but the nicer way to deal with this is with paging, which allows for efficient "unlimited size" stacks. Here is a naive aarch64 baremetal example that would blow up if the stack grows too deep.
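Here is the toy sketch of the round-robin "decide which thread runs now" step mentioned in the first item (all names hypothetical; a real kernel would also save and restore the full register state and consider priorities):

#include <array>
#include <cstdio>

// Minimal per-thread context; a real one would hold the full register set and stack pointer.
struct ThreadContext {
    int id;
    bool runnable;
};

std::array<ThreadContext, 3> threads{{{0, true}, {1, false}, {2, true}}};
int current = 0;

// Called from the timer interrupt handler: pick the next runnable thread, round robin.
int pick_next() {
    for (std::size_t i = 1; i <= threads.size(); ++i) {
        int candidate = (current + static_cast<int>(i)) % static_cast<int>(threads.size());
        if (threads[candidate].runnable) return candidate;
    }
    return current;  // nothing else runnable: keep running the current thread
}

int main() {
    for (int tick = 0; tick < 4; ++tick) {
        current = pick_next();
        std::printf("tick %d -> run thread %d\n", tick, threads[current].id);
    }
}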

Those are all good reasons to use the Linux kernel or some other operating system :-)

Userland memory synchronization primitives

Although thread start/stop/management is in general beyond userland's scope, you can however use assembly instructions from userland threads to synchronize memory accesses without potentially more expensive system calls.

You should of course prefer using libraries that portably wrap these low-level primitives. The C++ standard itself has made great advances with the <mutex> and <atomic> headers, and in particular with std::memory_order. I'm not sure if it covers all possible achievable memory semantics, but it just might.
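For example, here is a minimal sketch (mine, not from the answer) of publishing data between threads with std::atomic and explicit std::memory_order, which is the kind of primitive those headers wrap:

#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                 // plain data published through the flag below
std::atomic<bool> ready{false};

int main() {
    std::thread producer([] {
        payload = 42;                                   // ordinary write
        ready.store(true, std::memory_order_release);   // publish: the write above may not sink below
    });
    std::thread consumer([] {
        while (!ready.load(std::memory_order_acquire)) { /* spin */ }
        assert(payload == 42);                          // guaranteed visible after the acquire load
    });
    producer.join();
    consumer.join();
}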

The more subtle semantics are particularly relevant in the context of lock-free data structures, which can offer performance benefits in certain cases. To implement those, you will likely have to learn a bit about the different types of memory barriers: https://preshing.com/20120710/memory-barriers-are-like-source-control-operations/

Boost, for example, has some lock-free container implementations at https://www.boost.org/doc/libs/1_63_0/doc/html/lockfree.html
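A short usage sketch of such a container (based on my reading of the Boost.Lockfree documentation, so treat the exact API details as an assumption):

#include <boost/lockfree/queue.hpp>
#include <cstdio>
#include <thread>

int main() {
    boost::lockfree::queue<int> queue(128);   // fixed capacity, no internal locks
    std::thread producer([&] {
        for (int i = 0; i < 1000; ++i)
            while (!queue.push(i)) { /* retry if momentarily full */ }
    });
    long sum = 0;
    int value;
    int received = 0;
    while (received < 1000) {
        if (queue.pop(value)) {   // returns false when the queue is momentarily empty
            sum += value;
            ++received;
        }
    }
    producer.join();
    std::printf("sum = %ld\n", sum);   // 0 + 1 + ... + 999 = 499500
}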

Such userland instructions also appear to be used to implement the Linux futex system call, which is one of the main synchronization primitives in Linux. man futex 4.15 reads:

The futex() system call provides a method for waiting until a certain condition becomes true. It is typically used as a blocking construct in the context of shared-memory synchronization. When using futexes, the majority of the synchronization operations are performed in user space. A user-space program employs the futex() system call only when it is likely that the program has to block for a longer time until the condition becomes true. Other futex() operations can be used to wake any processes or threads waiting for a particular condition.

The system call's name itself means "fast userspace XXX".
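Here is a minimal sketch of calling futex directly from userland on Linux (my own illustration of the man page description above, not code from the original answer): one thread sleeps on a 32-bit word, another changes the word and wakes it.

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> futex_word{0};

// Raw syscall wrapper: glibc does not expose futex() directly.
long futex(int *uaddr, int op, int val) {
    return syscall(SYS_futex, uaddr, op, val, nullptr, nullptr, 0);
}

int main() {
    std::thread waker([] {
        futex_word.store(1);
        // Wake up at most 1 waiter sleeping on the word.
        futex(reinterpret_cast<int *>(&futex_word), FUTEX_WAKE, 1);
    });
    // Sleep in the kernel only while the word still equals 0 (otherwise return immediately).
    while (futex_word.load() == 0)
        futex(reinterpret_cast<int *>(&futex_word), FUTEX_WAIT, 0);
    std::printf("woken up, futex_word = %d\n", futex_word.load());
    waker.join();
}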

Here is a minimal useless C++ x86_64 / aarch64 example with inline assembly that illustrates basic usage of such instructions, mostly for fun:

main.cpp

#include <atomic>
#include <cassert>
#include <iostream>
#include <thread>
#include <vector>

std::atomic_ulong my_atomic_ulong(0);
unsigned long my_non_atomic_ulong = 0;
#if defined(__x86_64__) || defined(__aarch64__)
unsigned long my_arch_atomic_ulong = 0;
unsigned long my_arch_non_atomic_ulong = 0;
#endif
size_t niters;

void threadMain() {
    for (size_t i = 0; i < niters; ++i) {
        my_atomic_ulong++;
        my_non_atomic_ulong++;
#if defined(__x86_64__)
        __asm__ __volatile__ (
            "incq %0;"
            : "+m" (my_arch_non_atomic_ulong)
            :
            :
        );
        // https://github.com/cirosantilli/linux-kernel-module-cheat#x86-lock-prefix
        __asm__ __volatile__ (
            "lock;"
            "incq %0;"
            : "+m" (my_arch_atomic_ulong)
            :
            :
        );
#elif defined(__aarch64__)
        __asm__ __volatile__ (
            "add %0, %0, 1;"
            : "+r" (my_arch_non_atomic_ulong)
            :
            :
        );
        // https://github.com/cirosantilli/linux-kernel-module-cheat#arm-lse
        __asm__ __volatile__ (
            "ldadd %[inc], xzr, [%[addr]];"
            : "=m" (my_arch_atomic_ulong)
            : [inc] "r" (1),
              [addr] "r" (&my_arch_atomic_ulong)
            :
        );
#endif
    }
}

int main(int argc, char **argv) {
    size_t nthreads;
    if (argc > 1) {
        nthreads = std::stoull(argv[1], NULL, 0);
    } else {
        nthreads = 2;
    }
    if (argc > 2) {
        niters = std::stoull(argv[2], NULL, 0);
    } else {
        niters = 10000;
    }
    std::vector<std::thread> threads(nthreads);
    for (size_t i = 0; i < nthreads; ++i)
        threads[i] = std::thread(threadMain);
    for (size_t i = 0; i < nthreads; ++i)
        threads[i].join();
    assert(my_atomic_ulong.load() == nthreads * niters);
    // We can also use the atomics directly through `operator T` conversion.
    assert(my_atomic_ulong == my_atomic_ulong.load());
    std::cout << "my_non_atomic_ulong " << my_non_atomic_ulong << std::endl;
#if defined(__x86_64__) || defined(__aarch64__)
    assert(my_arch_atomic_ulong == nthreads * niters);
    std::cout << "my_arch_non_atomic_ulong " << my_arch_non_atomic_ulong << std::endl;
#endif
}

GitHub upstream.

Possible output:

my_non_atomic_ulong 15264
my_arch_non_atomic_ulong 15267

From this we see that the x86 LOCK prefix / aarch64 LDADD instruction made the addition atomic: without it we have race conditions on many of the adds, and the total count at the end is less than the synchronized 20000.

See also:

- x86
  - What does the "lock" instruction mean in x86 assembly?
  - How does the x86 pause instruction work in a spinlock *and* can it be used in other scenarios?
- ARM
  - LDXR/STXR, LDAXR/STLXR: ARM64: LDXR/STXR vs LDAXR/STLXR
  - LDADD and other atomic v8.1 load-modify-store instructions: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0801g/alc1476202791033.html
  - WFE / SVE: WFE instruction handling in ARM
- What exactly is std::atomic?

Tested in Ubuntu 19.04 amd64 and with QEMU aarch64 user mode.

I think the questioner probably wants to make a program run faster by having multiple cores work on it in parallel. That's what I would want anyway, but all the answers leave me no wiser. However, I think I get this: you can't synchronize different threads down to instruction-execution-time accuracy, so you can't get 4 cores to do a multiply on four different array elements in parallel to speed up processing 4:1. Rather, you have to look at your program as comprising major blocks that execute sequentially, like:

- do an FFT on some data
- put the result into a matrix, and find its eigenvalues and eigenvectors
- sort the latter by eigenvalue
- repeat from step 1 with new data

What you can do is run step 2 on the results of step 1 while running step 1 on a different core with new data, and run step 3 on the results of step 2 in a different core while step 2 is running on the next data and step 1 is running on the data after that. You can do this in Compaq Visual Fortran and Intel Fortran, which is an evolution of CVF, by writing three separate programs/subroutines for the three steps, and instead of one "calling" the next, it calls an API to start its thread. They can share data by using COMMON, which will be COMMON data memory to all threads. You have to study the manual till your head hurts and experiment until you get it to work, but I have succeeded at least once.
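The same pipelining idea in a C++ sketch (an analogy I am adding, not the CVF/Intel Fortran mechanism the answer describes): stage 1 of the next data item runs on another thread, and potentially another core, while the current item goes through the later stages.

#include <cstddef>
#include <future>
#include <iostream>
#include <vector>

int stage1(int x) { return x * 2; }   // stand-in for "do an FFT on some data"
int stage2(int x) { return x + 1; }   // stand-in for "find eigenvalues/eigenvectors"
int stage3(int x) { return x * x; }   // stand-in for "sort by eigenvalue"

int main() {
    std::vector<int> inputs{1, 2, 3, 4};
    // Start stage 1 for the first item on another thread.
    std::future<int> s1 = std::async(std::launch::async, stage1, inputs[0]);
    for (std::size_t i = 0; i < inputs.size(); ++i) {
        int r1 = s1.get();   // result of stage 1 for item i
        if (i + 1 < inputs.size())
            s1 = std::async(std::launch::async, stage1, inputs[i + 1]);  // overlap with item i+1
        std::cout << stage3(stage2(r1)) << std::endl;  // stages 2 and 3 for item i
    }
}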

This isn't a direct answer to the question, but it is an answer to a question that appears in the comments. Essentially, the question is what support the hardware gives to multi-core operation, i.e. the ability to run multiple software threads at once without software context switching between them. (Sometimes called an SMP system.)

Nicholas Flynt had it right, at least regarding x86. In a multi-core environment (Hyper-threading, multi-core or multi-processor), the Bootstrap core (usually hardware-thread (aka logical core) 0 in core 0 in processor 0) starts up fetching code from address 0xfffffff0. All the other cores (hardware threads) start up in a special sleep state called Wait-for-SIPI. As part of its initialization, the primary core sends a special inter-processor-interrupt (IPI) over the APIC called a SIPI (Startup IPI) to each core that is in WFS. The SIPI contains the address from which that core should start fetching code.

This mechanism allows each core to execute code from a different address. All that is needed is software support for each hardware core to set up its own tables and message queues.

The OS uses those to do the actual multi-threaded scheduling of software tasks. (A normal OS only needs to bring up the other cores once, at boot, unless you are hot-plugging CPUs, e.g. in a virtual machine. This is separate from starting or migrating software threads onto those cores. Each core runs the kernel, which spends its time calling a sleep function to wait for an interrupt if there is nothing else to do.)

As far as the actual assembly is concerned, as Nicholas wrote, there is no difference between the assembly for a single-threaded and a multi-threaded application. Each core has its own register set (execution context), so writing:

mov edx, 0

will only update EDX for the currently running thread. There is no way to modify EDX on another processor using a single assembly instruction. You need some sort of system call to ask the OS to tell another thread to run code that will update its own EDX.
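To make "ask the OS" concrete, here is a Linux-specific sketch (my addition, not from this answer; it assumes at least two cores are available): userland can request that a thread run on a particular core with an affinity system call, but any register update still happens inside that thread's own context.

// compile with: g++ -pthread affinity.cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE   // for pthread_setaffinity_np, CPU_SET and sched_getcpu
#endif
#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include <thread>

void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    // Ask the OS scheduler to run the calling thread only on the given core.
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main() {
    std::thread t([] {
        pin_to_core(1);   // this thread's code (and its registers) run on core 1
        std::printf("thread running on core %d\n", sched_getcpu());
    });
    pin_to_core(0);       // the main thread stays on core 0
    std::printf("main running on core %d\n", sched_getcpu());
    t.join();
}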