I am trying to understand the advantages of multiprocessing over threading. I know that multiprocessing gets around the Global Interpreter Lock, but what other advantages are there, and can threading not do the same thing?
Current answer
As mentioned in the question, multiprocessing in Python is the only way to achieve true parallelism. Multithreading cannot achieve this because the GIL prevents threads from running in parallel.
As a consequence, threading may not always be useful in Python, and may even result in worse performance depending on what you are trying to achieve. For example, if you are performing a CPU-bound task such as decompressing gzip files or 3D rendering (anything CPU-intensive), then threading may actually hinder your performance rather than help. In such a case you would want to use multiprocessing, as only this method actually runs in parallel and will help distribute the weight of the task at hand. There can be some overhead to this, since multiprocessing involves copying the memory of the script into each subprocess, which may cause issues for larger applications.
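As a rough illustration of the CPU-bound case, here is a minimal sketch (my own example, not from the original answer; the busy-loop helper count_down and the worker/iteration counts are arbitrary) that times the same pure-Python loop run through a thread pool and through a process pool. On a multi-core machine the process pool should finish several times faster:

import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def count_down(n):
    # Pure-Python busy loop: it holds the GIL the whole time it runs.
    while n > 0:
        n -= 1

def timed(executor_cls, n_workers, work):
    start = time.time()
    with executor_cls(max_workers=n_workers) as ex:
        # One chunk of work per worker; wait for all of them to finish.
        futures = [ex.submit(count_down, work) for _ in range(n_workers)]
        for f in futures:
            f.result()
    return time.time() - start

if __name__ == '__main__':
    work = 10_000_000
    print('threads  :', timed(ThreadPoolExecutor, 4, work))   # roughly serialized by the GIL
    print('processes:', timed(ProcessPoolExecutor, 4, work))  # actually parallel on 4 cores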
Multithreading becomes useful, however, when your task is IO-bound. For example, if most of your task involves waiting on API calls, you would use multithreading, because why not start another request in another thread while you wait, rather than have your CPU sit idle.
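To illustrate the IO-bound case, here is another minimal sketch (again my own; fake_api_call and the one-second delay are made-up stand-ins for a real network request). The threads overlap their waiting because the GIL is released while sleeping or blocking on IO, so ten "requests" take about one second rather than ten:

import time
from concurrent.futures import ThreadPoolExecutor

def fake_api_call(i):
    time.sleep(1)  # stands in for waiting on a network response; the GIL is released here
    return i

start = time.time()
with ThreadPoolExecutor(max_workers=10) as ex:
    results = list(ex.map(fake_api_call, range(10)))
print('10 "requests" took {:.2f} s'.format(time.time() - start))  # ~1 s, not ~10 s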
TL;DR
- Multithreading is concurrent and is for IO-bound tasks
- Multiprocessing achieves true parallelism and is for CPU-bound tasks
Other answers
The key advantage is isolation. A crashing process won't bring down other processes, whereas a crashing thread will probably wreak havoc with the other threads.
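A minimal sketch of that isolation (my own example; the crash() helper deliberately aborts the worker): if the hard crash happens in a child process, only that process dies and the parent merely sees a non-zero exit code, whereas the same abort inside a thread would take the whole interpreter down with it.

import multiprocessing
import os

def crash():
    os.abort()  # simulate a hard crash, e.g. a segfault inside a C extension

if __name__ == '__main__':
    p = multiprocessing.Process(target=crash)
    p.start()
    p.join()
    print('parent still alive, child exit code:', p.exitcode)  # negative signal number on Unix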
Python documentation quotes
The canonical version of this answer is now at the duplicate question: What are the differences between the threading and multiprocessing modules?
I've highlighted the key Python documentation quotes about processes vs threads and the GIL at: What is the global interpreter lock (GIL) in CPython?
Process vs thread experiments
I did a bit of benchmarking in order to show the difference more concretely.
In the benchmark, I timed CPU- and IO-bound work for various numbers of threads on an 8 hyperthread CPU. The work supplied per thread is always the same, such that more threads means more total work supplied.
The results were:
[Benchmark result plot: runtime vs. number of threads/processes; plot data is in the GitHub repository linked below.]
Conclusions:
- for CPU-bound work, multiprocessing is always faster, presumably due to the GIL
- for IO-bound work, both are exactly the same speed
- threads only scale up to about 4x instead of the expected 8x, since I'm on an 8 hyperthread machine. Contrast that with C POSIX CPU-bound work, which reaches the expected 8x speedup: What do 'real', 'user' and 'sys' mean in the output of time(1)? I don't know the reason for this; there must be other Python inefficiencies coming into play.
Test code:
#!/usr/bin/env python3

import multiprocessing
import threading
import time
import sys

def cpu_func(result, niters):
    '''
    A useless CPU bound function.
    '''
    for i in range(niters):
        result = (result * result * i + 2 * result * i * i + 3) % 10000000
    return result

class CpuThread(threading.Thread):
    def __init__(self, niters):
        super().__init__()
        self.niters = niters
        self.result = 1
    def run(self):
        self.result = cpu_func(self.result, self.niters)

class CpuProcess(multiprocessing.Process):
    def __init__(self, niters):
        super().__init__()
        self.niters = niters
        self.result = 1
    def run(self):
        self.result = cpu_func(self.result, self.niters)

class IoThread(threading.Thread):
    def __init__(self, sleep):
        super().__init__()
        self.sleep = sleep
        self.result = self.sleep
    def run(self):
        time.sleep(self.sleep)

class IoProcess(multiprocessing.Process):
    def __init__(self, sleep):
        super().__init__()
        self.sleep = sleep
        self.result = self.sleep
    def run(self):
        time.sleep(self.sleep)

if __name__ == '__main__':
    cpu_n_iters = int(sys.argv[1])
    sleep = 1
    cpu_count = multiprocessing.cpu_count()
    input_params = [
        (CpuThread, cpu_n_iters),
        (CpuProcess, cpu_n_iters),
        (IoThread, sleep),
        (IoProcess, sleep),
    ]
    header = ['nthreads']
    for thread_class, _ in input_params:
        header.append(thread_class.__name__)
    print(' '.join(header))
    for nthreads in range(1, 2 * cpu_count):
        results = [nthreads]
        for thread_class, work_size in input_params:
            start_time = time.time()
            threads = []
            for i in range(nthreads):
                thread = thread_class(work_size)
                threads.append(thread)
                thread.start()
            for i, thread in enumerate(threads):
                thread.join()
            results.append(time.time() - start_time)
        print(' '.join('{:.6e}'.format(result) for result in results))
GitHub upstream + plotting code in the same directory.
Tested on Ubuntu 18.10, Python 3.6.7, on a Lenovo ThinkPad P51 laptop with CPU: Intel Core i7-7820HQ (4 cores / 8 threads), RAM: 2x Samsung M471A2K43BB1-CRC (2x 16GiB), SSD: Samsung MZVLB512HAJQ-000L7 (3,000 MB/s).
Visualize which threads are running at a given time
This post https://rohanvarma.me/GIL/ taught me that you can run a callback whenever a thread is scheduled with the target= argument of threading.Thread, and the same for multiprocessing.Process.
This allows us to view exactly which thread runs at each time. When this is done, we would see something like (I made this particular graph up):
            +--------------------------------------+
            + Active threads / processes           +
+-----------+--------------------------------------+
|Thread   1 |********     ************             |
|         2 |        *****            *************|
+-----------+--------------------------------------+
|Process  1 |***  ************** ******  ****      |
|         2 |** **** ****** ** ********* **********|
+-----------+--------------------------------------+
            + Time -->                             +
            +--------------------------------------+
This would show that:
- threads are fully serialized by the GIL
- processes can run in parallel (a rough numeric check of both points is sketched right below)
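Here is that rough numeric check (my own sketch, not the code from the linked post, and Unix-oriented because it relies on os.times() including the CPU time of waited-for children): if the CPU time consumed is about equal to the wall-clock time, only one core was ever busy at a time; if it is about twice the wall-clock time, two cores were busy at once.

import multiprocessing
import os
import threading
import time

def burn(niters=10_000_000):
    # CPU-bound busy work.
    x = 0
    for i in range(niters):
        x = (x * x + i) % 1000003

def cpu_seconds():
    # Own user+system CPU time plus that of already-reaped child processes.
    t = os.times()
    return t.user + t.system + t.children_user + t.children_system

def measure(worker_cls):
    wall0, cpu0 = time.time(), cpu_seconds()
    workers = [worker_cls(target=burn) for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    wall = time.time() - wall0
    cpu = cpu_seconds() - cpu0
    print('{:8s} wall={:.2f}s cpu={:.2f}s'.format(worker_cls.__name__, wall, cpu))

if __name__ == '__main__':
    measure(threading.Thread)         # expect cpu ~= wall: serialized by the GIL
    measure(multiprocessing.Process)  # expect cpu ~= 2 * wall: truly parallel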
As I learned at university, most of the answers above are right. In practice, on different platforms (always using Python), spawning multiple threads ends up behaving like spawning a single process; with multiprocessing, multiple cores share the load instead of only one core processing everything at 100%. So if you spawn, for example, 10 threads on a 4-core PC, you will end up getting only 25% of the CPU's power, whereas if you spawn 10 processes the CPU will be processing at 100% (if you don't have other limitations). I'm not an expert in all the new technologies; I'm answering from my own real-world experience.
The threading module uses threads, the multiprocessing module uses processes. The difference is that threads run in the same memory space, while processes have separate memory. This makes it a bit harder to share objects between processes with multiprocessing. Since threads use the same memory, precautions have to be taken or two threads will write to the same memory at the same time. This is what the Global Interpreter Lock is for.
Spawning processes is a bit slower than spawning threads.
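A minimal sketch of that memory difference (my own example; the global counter and the increment() worker are made up): a change made by a thread is visible afterwards in the main thread, whereas a child process only ever modifies its own copy.

import multiprocessing
import threading

counter = 0

def increment():
    global counter
    counter += 100

if __name__ == '__main__':
    t = threading.Thread(target=increment)
    t.start()
    t.join()
    print('after thread :', counter)  # 100: threads share the parent's memory

    p = multiprocessing.Process(target=increment)
    p.start()
    p.join()
    print('after process:', counter)  # still 100: the child incremented its own copy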