I've recently heard a few people say that in Linux it is almost always better to use processes instead of threads, since Linux is very efficient in handling processes and because there are so many problems (such as locking) associated with threads. However, I am suspicious, because it seems like threads could give a pretty big performance gain in some situations.

So my question is, when faced with a situation that both threads and processes could handle pretty well, should I use processes or threads? For example, if I were writing a web server, should I use processes or threads (or a combination)?


Current answer

To complicate matters further, there is such a thing as thread-local storage, and Unix shared memory.

Thread-local storage allows each thread to have a separate instance of global objects. The only time I've used it was when constructing an emulation environment on Linux/Windows for application code that ran in an RTOS. In the RTOS each task was a process with its own address space; in the emulation environment, each task was a thread (with a shared address space). By using TLS for things like singletons, we were able to have a separate instance for each thread, just like under the 'real' RTOS environment.
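A minimal sketch of that idea, assuming C11's `_Thread_local` (the variable and function names here are invented for illustration; build with `gcc -std=c11 -pthread`):

```c
#include <pthread.h>
#include <stdio.h>

/* One instance per thread: each thread sees its own copy of this
 * "global", much like a per-task singleton in the RTOS scenario. */
static _Thread_local int task_id = -1;

static void *task(void *arg)
{
    task_id = *(int *)arg;               /* only visible to this thread */
    printf("thread sees task_id = %d\n", task_id);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, task, &a);
    pthread_create(&t2, NULL, task, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("main still sees task_id = %d\n", task_id);  /* unchanged: -1 */
    return 0;
}
```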

Shared memory can (obviously) give you the performance benefit of letting multiple processes access the same memory, but at the cost of having to synchronize the processes properly. One way to do that is to have one process create a data structure in shared memory, and then send a handle to that structure via traditional inter-process communication, such as a named pipe.
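A rough sketch of that pattern, assuming POSIX shm_open()/mmap() and a process-shared semaphore for the synchronization (the segment name "/demo_shm" and the struct layout are invented, and error handling is omitted; on older glibc you may need to link with -lrt):

```c
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct shared {
    sem_t ready;          /* posted by the writer once the data is valid */
    char  msg[64];
};

int main(void)
{
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(struct shared));
    struct shared *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
    sem_init(&s->ready, /*pshared=*/1, 0);

    if (fork() == 0) {                       /* child: the writer */
        strcpy(s->msg, "hello from the child process");
        sem_post(&s->ready);
        _exit(0);
    }
    sem_wait(&s->ready);                     /* parent: the reader */
    printf("parent read: %s\n", s->msg);
    wait(NULL);
    shm_unlink("/demo_shm");
    return 0;
}
```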

Other answers

How tightly coupled are your tasks?

If they can live independently of each other, then use processes. If they rely on each other, then use threads. That way you can kill and restart a bad process without interfering with the operation of the other tasks.
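A hypothetical sketch of that kill-and-restart idea, using fork()/waitpid(); the worker() function is just a stand-in for whatever the real task does:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void worker(void)
{
    /* real work would go here; the exit status signals success/failure */
    sleep(1);
    exit(EXIT_FAILURE);                  /* pretend it crashed */
}

int main(void)
{
    for (int restarts = 0; restarts < 3; restarts++) {
        pid_t pid = fork();
        if (pid == 0)
            worker();                    /* child never returns */
        int status;
        waitpid(pid, &status, 0);        /* parent blocks until the child dies */
        if (WIFEXITED(status) && WEXITSTATUS(status) == EXIT_SUCCESS)
            return 0;                    /* clean exit, nothing to restart */
        fprintf(stderr, "worker died, restarting (%d)\n", restarts + 1);
    }
    return 1;
}
```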

If you want to create something as close to a pure process as possible, you would use clone() and clear all of the clone flags. (Or save yourself the typing and just call fork().)

If you want to create something as close to a pure thread as possible, you would use clone() and set all of the clone flags. (Or save yourself the typing and just call pthread_create().)

There are 28 flags that dictate the level of resource sharing. This means that there are over 268 million flavours of tasks you can create, depending on what you want to share.

This is what we mean when we say that Linux does not distinguish between a process and a thread, but rather refers to any flow of control within a program as a task. The rationale for not distinguishing between the two is, well, that you can't uniquely define over 268 million flavours!

Therefore, making the "perfect decision" of whether to use a process or a thread really comes down to deciding which of the 28 resources to clone.
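A sketch of that distinction using the glibc clone() wrapper (Linux-only; the stack size and flag choices here are illustrative, not exhaustive): the same call produces a process-like or a thread-like task depending on which CLONE_* flags you pass.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared_counter = 0;

static int task(void *arg)
{
    shared_counter++;     /* visible to the caller only if CLONE_VM is set */
    return 0;
}

int main(void)
{
    char *stack = malloc(64 * 1024);
    char *stack_top = stack + 64 * 1024;   /* stack grows downwards on x86 */

    /* No sharing flags: behaves much like fork(), the child gets a copy. */
    pid_t p = clone(task, stack_top, SIGCHLD, NULL);
    waitpid(p, NULL, 0);
    printf("after process-like clone: %d\n", shared_counter);  /* still 0 */

    /* Share the address space: behaves much more like a thread. */
    pid_t t = clone(task, stack_top, CLONE_VM | SIGCHLD, NULL);
    waitpid(t, NULL, 0);
    printf("after thread-like clone:  %d\n", shared_counter);  /* now 1 */

    free(stack);
    return 0;
}
```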

If you need to share resources, you really should use threads.

Also consider the fact that a context switch between threads is much less expensive than a context switch between processes.

I see no reason to explicitly go with separate processes unless you have a good reason to do so (security, proven performance tests, etc.).

Once upon a time there was Unix, and in this good old Unix there was a lot of overhead for processes. So what some clever people did was to create threads, which share the same address space as the parent process and only need a reduced context switch, making context switching more efficient.

In a contemporary Linux (2.6.x) there is not much difference in performance between a context switch of a process and that of a thread (only the MMU stuff is additional for the thread). There is, however, the issue of the shared address space, which means that a faulty pointer in a thread can corrupt memory of the parent process or of another thread within the same address space.

A process is protected by the MMU, so a faulty pointer will just cause a signal 11 (segmentation fault) and no corruption.

I would in general use processes (not much context-switch overhead in Linux, but memory protection thanks to the MMU), but pthreads if I needed a real-time scheduler class, which is a different cup of tea altogether.

Why do you think threads have such a big performance gain on Linux? Do you have any data for this, or is it just a myth?

Multi-threading is for masochists. :)

If you are concerned about an environment where you are constantly creating threads/forks, perhaps like a web server handling requests, you can pre-fork processes, hundreds if necessary. Since they are copy-on-write and use the same memory until a write occurs, it's very fast. They can all block, listening on the same socket, and the first one to accept an incoming TCP connection gets to run with it. With g++ you can also assign functions and variables to be placed close together in memory (hot segments), so that when you do write to memory and cause an entire page to be copied, at least the subsequent write activity will occur on the same page. You really have to use a profiler to verify that kind of thing, but if you are concerned about performance you should be doing that anyway.
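A minimal sketch of that pre-fork pattern, with several forked workers blocking in accept() on the same listening socket (the port, worker count, and canned reply are invented, and error handling is omitted):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(8080),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 128);

    for (int i = 0; i < 4; i++) {              /* pre-fork 4 workers */
        if (fork() == 0) {
            for (;;) {
                /* the first worker to wake up wins this connection */
                int cfd = accept(lfd, NULL, NULL);
                const char *reply = "HTTP/1.0 200 OK\r\n\r\nhello\r\n";
                write(cfd, reply, strlen(reply));
                close(cfd);
            }
        }
    }
    for (;;) wait(NULL);                       /* parent just reaps */
}
```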

Development time of threaded apps is 3x to 10x longer due to the subtle interactions on shared objects, threading "gotchas" you didn't think of, and the fact that they are very hard to debug because you cannot reproduce thread-interaction problems at will. You may have to do all sorts of performance-killing checks, like having invariants in all your classes that are checked before and after every function, and you halt the process and load the debugger if something isn't right. Most often it's embarrassing crashes that occur during production, and you have to pore through a core dump trying to figure out which thread did what. Frankly, it's not worth the headache when forking processes is just as fast and implicitly thread-safe unless you explicitly share something. At least with explicit sharing you know exactly where to look if a threading-style problem occurs.

If performance is that important, add another computer and load balance. For the developer cost of debugging a multithreaded app, even one written by an experienced multithreader, you could probably buy four 40-core Intel motherboards with 64 GB of RAM each.

That being said, there are asymmetric cases where parallel processing isn't appropriate, like when you want a foreground thread to accept user input and show button presses immediately, without waiting for some clunky back-end GUI to keep up. That is a sexy use of threads, where multiprocessing isn't geometrically appropriate. Many things like that are just variables or pointers. They aren't "handles" that can be shared in a fork. You have to use threads. Even if you did fork, you'd be sharing the same resource and be subject to threading-style issues.