The function foo below returns the string 'foo'. How can I get the value 'foo' that is returned from the thread's target?

from threading import Thread

def foo(bar):
    print('hello {}'.format(bar))
    return 'foo'
    
thread = Thread(target=foo, args=('world!',))
thread.start()
return_value = thread.join()

The "one obvious way to do it", shown above, doesn't work: thread.join() returns None.


Current answer

You can use pool.apply_async() of ThreadPool() to return the value of test(), as shown below:

from multiprocessing.pool import ThreadPool

def test(num1, num2):
    return num1 + num2

pool = ThreadPool(processes=1) # Here
result = pool.apply_async(test, (2, 3)) # Here
print(result.get()) # 5

And you can also use submit() of concurrent.futures.ThreadPoolExecutor() to return the value of test(), as shown below:

from concurrent.futures import ThreadPoolExecutor

def test(num1, num2):
    return num1 + num2

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(test, 2, 3) # Here
print(future.result()) # 5

And, instead of return, you can also use an array for the result, as shown below:

from threading import Thread

def test(num1, num2, r):
    r[0] = num1 + num2 # Instead of "return"

result = [None] # Here

thread = Thread(target=test, args=(2, 3, result))
thread.start()
thread.join()
print(result[0]) # 5

And, instead of return, you can also use a queue for the result, as shown below:

from threading import Thread
import queue

def test(num1, num2, q):
    q.put(num1 + num2) # Instead of "return" 

queue = queue.Queue() # Here

thread = Thread(target=test, args=(2, 3, queue))
thread.start()
thread.join()
print(queue.get()) # 5

Other answers

This is a pretty old question, but I wanted to share a simple solution that has helped my development process.

The idea behind this answer is that the "new" target function, inner, assigns the result of the original function (passed in through __init__) to the wrapper's result instance attribute via a closure.

This allows the wrapper class to hold on to the return value for the caller to access at any time.

Note: this approach does not use any mangled or private methods of the threading.Thread class, although it does not account for generator functions (the OP did not mention yield).

Enjoy!

from threading import Thread as _Thread


class ThreadWrapper:
    def __init__(self, target, *args, **kwargs):
        self.result = None
        self._target = self._build_threaded_fn(target)
        self.thread = _Thread(
            target=self._target,
            *args,
            **kwargs
        )

    def _build_threaded_fn(self, func):
        def inner(*args, **kwargs):
            self.result = func(*args, **kwargs)
        return inner

Additionally, you can run pytest (assuming you have it installed) with the code below in order to demonstrate the results:

import time
from commons import ThreadWrapper


def test():

    def target():
        time.sleep(1)
        return 'Hello'

    wrapper = ThreadWrapper(target=target)
    wrapper.thread.start()

    r = wrapper.result
    assert r is None

    time.sleep(2)

    r = wrapper.result
    assert r == 'Hello'

join always returns None; I think you should subclass Thread to handle return codes and so on.
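
As a minimal sketch of that idea (the class name ReturnValueThread and the _return attribute are illustrative choices, and it relies on the Python 3 _target/_args/_kwargs attributes set by Thread.__init__; later answers flesh this out):

from threading import Thread

class ReturnValueThread(Thread):
    # Illustrative subclass: stores the target's return value so that
    # join() can hand it back to the caller.
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._return = None

    def run(self):
        # _target/_args/_kwargs are the attributes Thread.__init__ sets in Python 3
        if self._target is not None:
            self._return = self._target(*self._args, **self._kwargs)

    def join(self, *args, **kwargs):
        super().join(*args, **kwargs)
        return self._return

thread = ReturnValueThread(target=lambda x: x * 2, args=(21,))
thread.start()
print(thread.join())  # 42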

Jake's answer is good, but if you don't want to use a thread pool (you don't know how many threads you'll need, but create them as needed), then a good way to transmit information between threads is the built-in Queue.Queue class, as it offers thread safety.

I created the following decorator to make it act in a manner similar to the thread pool:

def threaded(f, daemon=False):
    import threading
    import Queue

    def wrapped_f(q, *args, **kwargs):
        '''this function calls the decorated function and puts the 
        result in a queue'''
        ret = f(*args, **kwargs)
        q.put(ret)

    def wrap(*args, **kwargs):
        '''this is the function returned from the decorator. It fires off
        wrapped_f in a new thread and returns the thread object with
        the result queue attached'''

        q = Queue.Queue()

        t = threading.Thread(target=wrapped_f, args=(q,)+args, kwargs=kwargs)
        t.daemon = daemon
        t.start()
        t.result_queue = q        
        return t

    return wrap

Then you just use it as:

@threaded
def long_task(x):
    import time
    x = x + 5
    time.sleep(5)
    return x

# does not block, returns Thread object
y = long_task(10)
print y

# this blocks, waiting for the result
result = y.result_queue.get()
print result

The decorated function creates a new thread each time it's called, and returns a Thread object that contains the queue that will receive the result.

UPDATE

It's been quite a while since I posted this answer, but it still gets views, so I thought I would update it to reflect the way I do this in newer versions of Python:

Python 3.2 added the concurrent.futures module, which provides a high-level interface for parallel tasks. It provides ThreadPoolExecutor and ProcessPoolExecutor, so you can use a thread or process pool with the same API.

One benefit of this API is that submitting a task to an Executor returns a Future object, which will complete with the return value of the callable you submit.

This makes attaching a queue object unnecessary, which simplifies the decorator quite a bit:

from concurrent.futures import ThreadPoolExecutor
from functools import wraps

_DEFAULT_POOL = ThreadPoolExecutor()

def threadpool(f, executor=None):
    @wraps(f)
    def wrap(*args, **kwargs):
        return (executor or _DEFAULT_POOL).submit(f, *args, **kwargs)

    return wrap

This will use the module's default thread pool executor if a custom one isn't passed in.

The usage is very similar to before:

@threadpool
def long_task(x):
    import time
    x = x + 5
    time.sleep(5)
    return x

# does not block, returns Future object
y = long_task(10)
print(y)

# this blocks, waiting for the result
result = y.result()
print(result)

If you're on Python 3.4+, a really nice feature of using this method (and of Future objects in general) is that the returned Future can be wrapped to turn it into an asyncio.Future with asyncio.wrap_future. This makes it work easily with coroutines:

result = await asyncio.wrap_future(long_task(10))

If you don't need access to the underlying concurrent.futures.Future object, you can include the wrap in the decorator:

import asyncio

_DEFAULT_POOL = ThreadPoolExecutor()

def threadpool(f, executor=None):
    @wraps(f)
    def wrap(*args, **kwargs):
        return asyncio.wrap_future((executor or _DEFAULT_POOL).submit(f, *args, **kwargs))

    return wrap

Then, whenever you need to push CPU-intensive or blocking code off the event loop thread, you can put it in a decorated function:

@threadpool
def some_long_calculation():
    ...

# this will suspend while the function is executed on a threadpool
result = await some_long_calculation()
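
For completeness, here is a rough, self-contained sketch of how those pieces fit together (blocking_add and main are placeholder names, not part of the answer above):

import asyncio
import time
from concurrent.futures import ThreadPoolExecutor
from functools import wraps

_DEFAULT_POOL = ThreadPoolExecutor()

def threadpool(f, executor=None):
    @wraps(f)
    def wrap(*args, **kwargs):
        # Submit to the pool and wrap the concurrent.futures.Future so it is awaitable
        return asyncio.wrap_future((executor or _DEFAULT_POOL).submit(f, *args, **kwargs))
    return wrap

@threadpool
def blocking_add(a, b):
    time.sleep(1)  # simulate blocking work that should stay off the event loop
    return a + b

async def main():
    result = await blocking_add(2, 3)  # suspends this coroutine, not the loop
    print(result)  # 5

asyncio.run(main())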

I stole kindall's answer and cleaned it up just a little bit.

The key part is adding *args and **kwargs to join() in order to handle the timeout.

class threadWithReturn(Thread):
    def __init__(self, *args, **kwargs):
        super(threadWithReturn, self).__init__(*args, **kwargs)
        
        self._return = None
    
    def run(self):
        if self._Thread__target is not None:
            self._return = self._Thread__target(*self._Thread__args, **self._Thread__kwargs)
    
    def join(self, *args, **kwargs):
        super(threadWithReturn, self).join(*args, **kwargs)
        
        return self._return

UPDATED ANSWER BELOW

This is my most popularly upvoted answer, so I decided to update it with code that will run on both py2 and py3.

Additionally, I see many answers to this question that demonstrate a lack of understanding of Thread.join(). Some completely fail to handle the timeout argument. But there is also a corner case you should be aware of when you have (1) a target function that can return None and (2) you also pass the timeout argument to join(). Please see "TEST 4" to understand this corner case.

The ThreadWithReturn class, which works with py2 and py3:

import sys
from threading import Thread
from builtins import super    # https://stackoverflow.com/a/30159479

_thread_target_key, _thread_args_key, _thread_kwargs_key = (
    ('_target', '_args', '_kwargs')
    if sys.version_info >= (3, 0) else
    ('_Thread__target', '_Thread__args', '_Thread__kwargs')
)

class ThreadWithReturn(Thread):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._return = None
    
    def run(self):
        target = getattr(self, _thread_target_key)
        if target is not None:
            self._return = target(
                *getattr(self, _thread_args_key),
                **getattr(self, _thread_kwargs_key)
            )
    
    def join(self, *args, **kwargs):
        super().join(*args, **kwargs)
        return self._return

Some sample tests are shown below:

import time, random

# TEST TARGET FUNCTION
def giveMe(arg, seconds=None):
    if not seconds is None:
        time.sleep(seconds)
    return arg

# TEST 1
my_thread = ThreadWithReturn(target=giveMe, args=('stringy',))
my_thread.start()
returned = my_thread.join()
# (returned == 'stringy')

# TEST 2
my_thread = ThreadWithReturn(target=giveMe, args=(None,))
my_thread.start()
returned = my_thread.join()
# (returned is None)

# TEST 3
my_thread = ThreadWithReturn(target=giveMe, args=('stringy',), kwargs={'seconds': 5})
my_thread.start()
returned = my_thread.join(timeout=2)
# (returned is None) # because join() timed out before giveMe() finished

# TEST 4
my_thread = ThreadWithReturn(target=giveMe, args=(None,), kwargs={'seconds': 5})
my_thread.start()
returned = my_thread.join(timeout=random.randint(1, 10))

Can you identify the corner case that we may encounter with TEST 4?

The problem is that we expect giveMe() to return None (see TEST 2), but we also expect join() to return None if it times out.

returned == None means either:

(1) that's what giveMe() returned, or

(2) join() timed out

This example is trivial since we know that giveMe() will always return None. But in a real-world instance (where the target may legitimately return None or something else), we'd want to explicitly check for what happened.

Below is how to address this corner case:

# TEST 4
my_thread = ThreadWithReturn(target=giveMe, args=(None,), kwargs={'seconds': 5})
my_thread.start()
returned = my_thread.join(timeout=random.randint(1, 10))

if my_thread.is_alive():
    # returned is None because join() timed out
    # this also means that giveMe() is still running in the background
    pass
    # handle this based on your app's logic
else:
    # join() is finished, and so is giveMe()
    # BUT we could also be in a race condition, so we need to update returned, just in case
    returned = my_thread.join()

In Python 3.2+, the stdlib concurrent.futures module provides a higher-level API for threading, including passing return values or exceptions from a worker thread back to the main thread:

import concurrent.futures

def foo(bar):
    print('hello {}'.format(bar))
    return 'foo'

with concurrent.futures.ThreadPoolExecutor() as executor:
    future = executor.submit(foo, 'world!')
    return_value = future.result()
    print(return_value)