Recently I've heard a few people say that in Linux it is almost always better to use processes instead of threads, since Linux is very efficient at handling processes and because there are so many problems (such as locking) associated with threads. However, I am suspicious, because it seems like threads could give a pretty big performance gain in some situations.

So my question is: when faced with a situation that both threads and processes could handle well, should I use processes or threads? For example, if I were writing a web server, should I use processes or threads (or a combination)?


Current answer

Linux (and indeed Unix) gives you a third option.

Option 1 - Processes

Create a standalone executable which handles some part (or all) of your application, and invoke it separately for each process, i.e. the program runs copies of itself to delegate tasks to.
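A minimal sketch of this option, assuming a hypothetical ./worker executable that takes a task number as its argument (the parent simply forks and execs copies of it, then reaps them):

```c
/* Sketch of Option 1: one process per task, each running its own
 * executable image. The ./worker binary is a hypothetical placeholder. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    for (int i = 0; i < 4; i++) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0) {                      /* child: replace self with worker */
            char arg[16];
            snprintf(arg, sizeof arg, "%d", i);
            execl("./worker", "worker", arg, (char *)NULL);
            perror("execl");                 /* only reached if exec fails */
            _exit(127);
        }
    }
    while (wait(NULL) > 0)                   /* parent: reap all workers */
        ;
    return 0;
}
```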

Option 2 - Threads

Create a standalone executable which starts up as a single thread and creates additional threads to do some tasks.
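A minimal sketch of this option; the thread body and task count are only illustrative (compile with -pthread):

```c
/* Sketch of Option 2: one executable that spawns worker threads. */
#include <pthread.h>
#include <stdio.h>

static void *do_task(void *arg) {
    long id = (long)arg;
    printf("thread %ld doing its share of the work\n", id);
    return NULL;
}

int main(void) {
    pthread_t tids[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&tids[i], NULL, do_task, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(tids[i], NULL);   /* wait for all workers to finish */
    return 0;
}
```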

Option 3 - Fork

Available only under Linux/Unix, this is a bit different. A forked process really is its own process with its own address space; there is (normally) nothing the child can do to affect its parent's or siblings' address space (unlike a thread), so you get added robustness.

However, the memory pages are not copied: they are copy-on-write, so less memory is usually used than you might imagine.

Consider a web server program which consists of two steps:

1. Read configuration and runtime data
2. Serve page requests

If you use threads, step 1 is done once and step 2 is done in multiple threads. If you use "traditional" processes, steps 1 and 2 need to be repeated for each process, and the memory required to store the configuration and runtime data is duplicated as well. If you use fork(), you can do step 1 once and then fork(), leaving the runtime data and configuration in memory, untouched and not copied.
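A rough sketch of that fork() pattern; load_config() and serve_requests() are placeholders standing in for the two steps, not code from the original answer:

```c
/* "Read config once, then fork workers": the config pages are inherited
 * copy-on-write, so they stay shared until someone actually writes to them. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char config[1 << 20];                 /* stand-in for config/runtime data */

static void load_config(void) {              /* step 1: done exactly once */
    config[0] = 42;                          /* ... read files, parse, etc. ... */
}

static void serve_requests(void) {           /* step 2: runs in each child */
    printf("worker %d sees config byte %d\n", (int)getpid(), config[0]);
}

int main(void) {
    load_config();
    for (int i = 0; i < 8; i++) {
        pid_t pid = fork();
        if (pid == 0) {                      /* child inherits config untouched */
            serve_requests();
            _exit(0);
        }
    }
    while (wait(NULL) > 0)                   /* parent waits for the workers */
        ;
    return 0;
}
```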

So there are really three choices.

Other answers

To complicate matters further, there is also such a thing as thread-local storage, and Unix shared memory.

Thread-local storage allows each thread to have a separate instance of global objects. The only time I've used it was when constructing an emulation environment on Linux/Windows for application code that ran in an RTOS. In the RTOS each task was a process with its own address space; in the emulation environment, each task was a thread (with a shared address space). By using TLS for things like singletons, we were able to have a separate instance for each thread, just like under the 'real' RTOS environment.
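A small illustration of thread-local storage (not the RTOS emulation scenario itself): with GCC/Clang's __thread, each thread gets its own instance of what looks like a global variable. Compile with -pthread.

```c
/* Each thread increments its own copy of `counter`; no locking is needed
 * because the variable is thread-local, not shared. */
#include <pthread.h>
#include <stdio.h>

static __thread int counter = 0;   /* one instance per thread */

static void *work(void *arg) {
    for (int i = 0; i < 1000; i++)
        counter++;
    printf("thread %ld: counter = %d\n", (long)arg, counter);  /* each prints 1000 */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, work, (void *)1L);
    pthread_create(&b, NULL, work, (void *)2L);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```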

Shared memory can (obviously) give you the performance benefit of having multiple processes access the same memory, but at the cost of having to synchronize those processes properly. One way to do that is to have one process create a data structure in shared memory, and then send a handle to that structure via traditional inter-process communication, such as a named pipe.
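A sketch of the shared-memory side using POSIX shm_open()/mmap(); for brevity the second process is simply fork()ed here instead of being handed the name over a named pipe, and synchronization is faked with a sleep():

```c
/* Parent creates and writes a shared-memory region; the child maps the same
 * region (inherited via fork) and sees the parent's write. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_shm";                 /* illustrative name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);
    char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (fork() == 0) {                              /* child */
        sleep(1);                                   /* crude stand-in for real sync */
        printf("child sees: %s\n", mem);
        _exit(0);
    }
    strcpy(mem, "hello from the parent");           /* parent writes */
    wait(NULL);
    shm_unlink(name);                               /* remove the name when done */
    return 0;
}
```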

Linux uses a 1-1 threading model, with (to the kernel) no distinction between processes and threads: everything is simply a runnable task. *

On Linux, the system call clone clones a task, with a configurable level of sharing, among which are:

- CLONE_FILES: share the same file descriptor table (instead of creating a copy)
- CLONE_PARENT: do not set up a parent-child relationship between the new task and the old one (otherwise, the child's getppid() would equal the parent's getpid())
- CLONE_VM: share the same memory space (instead of creating a COW copy)

fork() calls clone with the least sharing, and pthread_create() calls clone with the most sharing. **
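A rough illustration of those sharing levels using the glibc clone() wrapper: with CLONE_VM the child writes to a variable the parent can see, which is the thread-like end of the spectrum. This is only a sketch; a real threading library passes many more flags (CLONE_THREAD, CLONE_SIGHAND, CLONE_SETTLS, ...) and sets up the stack and TLS properly.

```c
/* clone() with thread-like sharing: the child shares the parent's memory
 * (CLONE_VM) and file descriptor table (CLONE_FILES). */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

static int shared_value = 0;

static int child_fn(void *arg) {
    (void)arg;
    shared_value = 42;          /* visible to the parent because of CLONE_VM */
    return 0;
}

int main(void) {
    const size_t stack_size = 1024 * 1024;
    char *stack = malloc(stack_size);                 /* child needs its own stack */

    pid_t pid = clone(child_fn, stack + stack_size,   /* stack grows down on x86 */
                      CLONE_VM | CLONE_FILES | SIGCHLD, NULL);
    if (pid < 0) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);
    printf("shared_value = %d\n", shared_value);      /* prints 42 */
    free(stack);
    return 0;
}
```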

fork()ing costs a tiny bit more than pthread_create()ing, because of the copying of tables and the creation of COW mappings for memory, but the Linux kernel developers have tried (and succeeded) at minimizing those costs.

Switching between tasks that share the same memory space and various tables is a tiny bit cheaper than switching between tasks that share nothing, because the data may already be loaded in cache. However, switching tasks is still very fast even when nothing is shared; this is something else the Linux kernel developers try to ensure (and succeed at ensuring).

In fact, if you are on a multi-processor system, not sharing may actually be beneficial to performance: if each task is running on a different processor, synchronizing shared memory is expensive.


* Simplified. CLONE_THREAD causes signal delivery to be shared (which requires CLONE_SIGHAND, which shares the signal handler table).

** Simplified. Both the SYS_fork and SYS_clone syscalls exist, but in the kernel, sys_fork and sys_clone are both very thin wrappers around the same do_fork function, which itself is a thin wrapper around copy_process. Yes, the terms process, thread, and task are used rather interchangeably in the Linux kernel...

I think everyone has done a great job responding to your question. I'm just adding more information about threads versus processes in Linux to clarify and summarize some of the previous responses in the context of the kernel, so my response concerns kernel-specific code in Linux. According to the Linux kernel documentation, there is no clear distinction between a thread and a process except that a thread uses a shared virtual address space, unlike a process. Note also that the Linux kernel uses the term "task" to refer to processes and threads in general.

There is no internal structure implementing a process or a thread; instead, there is a struct task_struct that describes an abstract scheduling unit called a task.

Also, according to Linus Torvalds, you should NOT think in terms of processes versus threads at all, because that view is too limiting; the only thing that matters is the COE, or Context of Execution, and whether it separates its address space from the parent or shares it. In fact, he uses a web server example here to make his point (highly recommended reading).

Full credit to the Linux Kernel Documentation.

In most cases I would prefer processes over threads. Threads can be useful when you have relatively small tasks (process overhead >> time taken by each divided task unit) and there is a need to share memory between them. Think of a large array. Also (off topic), note that if your CPU utilization is 100% or close to it, there is going to be no benefit from multithreading or multiprocessing (in fact, it will make things worse).
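A sketch of that "small tasks sharing a large array" case: each thread sums its own slice of one shared array and writes to its own result slot, so no locking is needed. Sizes and thread count are illustrative; compile with -pthread.

```c
/* Parallel sum over one big shared array, split into per-thread slices. */
#include <pthread.h>
#include <stdio.h>

#define N        (1 << 20)
#define NTHREADS 4

static double data[N];                 /* the big shared array */
static double partial[NTHREADS];       /* one result slot per thread */

static void *sum_slice(void *arg) {
    long t = (long)arg;
    long lo = t * (N / NTHREADS), hi = lo + N / NTHREADS;
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[t] = s;                    /* each thread writes only its own slot */
    return NULL;
}

int main(void) {
    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    pthread_t tid[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, sum_slice, (void *)t);

    double total = 0.0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("total = %.0f\n", total);   /* prints N, i.e. 1048576 */
    return 0;
}
```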

Multithreading is for masochists. :)

If you are concerned about an environment where you are constantly creating threads/forks, perhaps like a web server handling requests, you can pre-fork processes, hundreds if necessary. Since they are copy-on-write and use the same memory until a write occurs, it's very fast. They can all block, listening on the same socket, and the first one to accept an incoming TCP connection gets to run with it. With g++ you can also assign functions and variables to be placed close together in memory (hot segments) to ensure that when you do write to memory (and cause an entire page to be copied), at least the subsequent write activity will occur on the same page. You really have to use a profiler to verify that kind of thing, but if you are concerned about performance, you should be doing that anyway.
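A sketch of that pre-fork pattern: the listening socket is created once, the workers are forked, and each blocks in accept() on the same socket while the kernel hands each incoming connection to exactly one of them. The port, worker count, and canned response are illustrative, and error handling is omitted.

```c
/* Pre-forked workers all accepting on one shared listening socket. */
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);              /* illustrative port */
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 128);

    for (int i = 0; i < 8; i++) {             /* pre-fork 8 workers */
        if (fork() == 0) {
            for (;;) {
                int cfd = accept(lfd, NULL, NULL);   /* all workers block here */
                const char *msg = "HTTP/1.0 200 OK\r\n\r\nhello\n";
                write(cfd, msg, strlen(msg));
                close(cfd);
            }
        }
    }
    while (wait(NULL) > 0)                    /* parent just waits */
        ;
    return 0;
}
```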

Development time of threaded apps is 3x to 10x longer due to the subtle interactions on shared objects, threading "gotchas" you didn't think of, and the difficulty of debugging, because you cannot reproduce thread interaction problems at will. You may have to do all sorts of performance-killing checks, like having invariants in all your classes that are checked before and after every function, and you halt the process and load the debugger if something isn't right. Most often it's embarrassing crashes that occur during production, and you have to pore through a core dump trying to figure out which threads did what. Frankly, it's not worth the headache when forking processes is just as fast and implicitly thread-safe unless you explicitly share something. At least with explicit sharing you know exactly where to look if a threading-style problem occurs.

If performance is that important, add another computer and load balance. For the cost of a developer debugging a multithreaded app, even one written by an experienced multithreaded programmer, you could probably buy four 40-core Intel motherboards with 64 GB of RAM each.

That being said, there are asymmetric cases where parallel processing isn't appropriate, like when you want a foreground thread to accept user input and show button presses immediately, without waiting for some clunky back-end GUI to keep up. That's a sexy use of threads, where multiprocessing isn't a good geometric fit. Lots of things like that are just variables or pointers; they aren't "handles" that can be shared across a fork, so you have to use threads. Even if you did fork, you'd be sharing the same resource and subject to threading-style issues.