Time and time again, I see it said that using async-await doesn't create any additional threads. That makes no sense, because the only ways a computer can appear to be doing more than one thing at a time are:

(1) actually doing more than one thing at a time (executing in parallel, making use of multiple processors), or (2) simulating it by scheduling tasks and switching between them (do a little of A, a little of B, a little of A, and so on).

So if async-await does neither of those, then how can it make an application responsive? If there is only one thread, then calling any method means waiting for that method to complete before anything else can happen, and the methods inside that method have to wait for their results before proceeding, and so on.


Current answer

The only way a computer can appear to be doing more than one thing at a time is to (1) actually do more than one thing at a time, or (2) simulate it by scheduling tasks and switching between them. If async-await does neither of those

It's not that await does neither of those. Remember, the purpose of await is not to make synchronous code magically asynchronous. It's to enable using the same techniques we use for writing synchronous code when calling into asynchronous code. Await is about making the code that uses high latency operations look like code that uses low latency operations. Those high latency operations might be on threads, they might be on special purpose hardware, they might be tearing their work up into little pieces and putting it in the message queue for processing by the UI thread later. They're doing something to achieve asynchrony, but they are the ones that are doing it. Await just lets you take advantage of that asynchrony.

Also, I think you are missing a third option. We old people -- kids today with their rap music should get off my lawn, etc -- remember the world of Windows in the early 1990s. There were no multi-CPU machines and no thread schedulers. You wanted to run two Windows apps at the same time, you had to yield. Multitasking was cooperative. The OS tells a process that it gets to run, and if it is ill-behaved, it starves all the other processes from being served. It runs until it yields, and somehow it has to know how to pick up where it left off the next time the OS hands control back to it. Single-threaded asynchronous code is a lot like that, with "await" instead of "yield". Awaiting means "I'm going to remember where I left off here, and let someone else run for a while; call me back when the task I'm waiting on is complete, and I'll pick up where I left off." I think you can see how that makes apps more responsive, just as it did in the Windows 3 days.
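The cooperative model described above can be sketched with Python generators, where yield plays the role the old Windows yield call did. This is a toy scheduler for illustration only, with made-up names, not anything the CLR or Windows actually runs:

```python
# Toy cooperative scheduler: each "app" runs until it yields,
# then the loop hands control to the next one.
def app(name, steps, log):
    for i in range(steps):
        log.append(f"{name}:{i}")  # do a little work...
        yield                      # ...then yield so others can run

def run_cooperatively(apps):
    log = []
    tasks = [app(name, steps, log) for name, steps in apps]
    while tasks:
        for t in tasks[:]:
            try:
                next(t)            # resume the app where it left off
            except StopIteration:
                tasks.remove(t)    # this app is finished
    return log

# Work interleaves: a little of A, a little of B, and so on.
print(run_cooperatively([("A", 2), ("B", 3)]))
# → ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
```

Note that an ill-behaved app that never yields starves everyone else, exactly as in the Windows 3 days.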

calling any method means waiting for the method to complete

That's the key thing you're missing. A method can return before its work is complete. That is the essence of asynchrony. A method returns; it returns a task that means "this work is in progress; tell me what to do when it is complete." The work of the method is not done, even though the method has returned.

Before the await operator, you had to write code that looked like spaghetti threaded through swiss cheese to deal with the fact that there is work to do after completion, but with return and completion desynchronized. Await lets you write code that looks as though return and completion were synchronized, without them actually being synchronized.
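As an illustration in Python's asyncio rather than C# (names here are hypothetical), the same logic can be written callback-style and await-style; the C# compiler performs the equivalent transformation for you:

```python
import asyncio

# Callback style ("spaghetti"): the work that comes after completion
# must be packaged into a continuation and handed over explicitly.
def fetch_then(on_done):
    # completes later; control returns to the caller immediately
    asyncio.get_running_loop().call_soon(on_done, "data")

# Await style: the same logic reads as if return and completion
# were synchronized, even though the method returns at the await.
async def fetch():
    await asyncio.sleep(0)   # stand-in for a high-latency operation
    return "data"

async def main():
    done = asyncio.get_running_loop().create_future()
    fetch_then(done.set_result)      # callback version
    from_callback = await done
    from_await = await fetch()       # await version: straight-line code
    return from_callback, from_await

print(asyncio.run(main()))  # → ('data', 'data')
```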

Other answers

I explain this in detail in my blog post There Is No Thread.

In summary, modern I/O systems make heavy use of DMA (Direct Memory Access). There are special, dedicated processors on network cards, video cards, HDD controllers, serial/parallel ports, etc. These processors have direct access to the memory bus, and handle reading/writing completely independently of the CPU. The CPU just needs to notify the device of the location in memory containing the data, and then can do its own thing until the device raises an interrupt notifying the CPU that the read/write is complete.

While the operation is in flight, there is no work for the CPU to do, and therefore no thread.

I'm really glad someone asked this question, because for the longest time I also believed that threads were necessary for concurrency. When I first saw event loops, I thought they were a lie. I thought to myself, "there's no way this code can be concurrent if it runs in a single thread." Keep in mind that this was after I had already gone through the struggle of understanding the difference between concurrency and parallelism.

After research of my own, I finally found the missing piece: select(). More specifically, IO multiplexing, implemented by various kernels under different names: select(), poll(), epoll(), kqueue(). These are system calls that, although the implementation details differ, let you pass in a set of file descriptors to watch. You can then make another call that blocks until one of the watched file descriptors changes.

So one can wait on a set of IO events (the main event loop), handle the first one to complete, and then yield control back to the event loop. Rinse and repeat.
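A minimal sketch of that wait-handle-repeat loop, using Python's select wrapper with two pipes standing in for slow IO sources (illustrative only; a real loop would also register callbacks per descriptor):

```python
import os
import select

# Two pipes stand in for two slow IO sources the loop is watching.
r1, w1 = os.pipe()
r2, w2 = os.pipe()
os.write(w1, b"event A")   # simulate both operations completing
os.write(w2, b"event B")

watching = [r1, r2]
handled = []
while watching:
    # Block until at least one watched descriptor is readable...
    readable, _, _ = select.select(watching, [], [])
    for fd in readable:
        handled.append(os.read(fd, 1024))  # ...handle the completed event...
        watching.remove(fd)                # ...then go back to waiting

for fd in (r1, w1, r2, w2):
    os.close(fd)
print(sorted(handled))  # → [b'event A', b'event B']
```

The single thread spends its idle time blocked inside select(), consuming no CPU, which is why no extra thread is needed to wait.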

How does this work? In short, it is kernel- and hardware-level magic. A computer contains many components besides the CPU, and those components can work in parallel. The kernel controls these devices and communicates with them directly to receive particular signals.

These IO multiplexing system calls are the fundamental building blocks of single-threaded event loops such as node.js or Tornado. When you await a function, you are watching for a certain event (the completion of that function) and yielding control back to the main event loop. When the event you are watching completes, the function (eventually) picks up where it left off. Functions that let you pause and resume computation like this are called coroutines.
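That pause-and-resume contract can be sketched with a plain Python generator, which is how early frameworks such as Tornado originally built coroutines (toy example):

```python
# A coroutine pauses at each yield and resumes where it left off --
# the same pause/resume contract that await relies on.
def countdown(n):
    while n > 0:
        yield f"waiting at {n}"   # pause here; the loop resumes us later
        n -= 1
    return "done"                 # final result, delivered via StopIteration

c = countdown(2)
print(next(c))         # → waiting at 2
print(c.send(None))    # → waiting at 1
try:
    c.send(None)
except StopIteration as stop:
    print(stop.value)  # → done
```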

In fact, an async-await chain is a state machine generated by the CLR compiler.

The threads that async-await uses are the thread pool threads the TPL uses to execute tasks.

The reason the application does not get blocked is that the state machine can decide which co-routine to execute, repeat, check, and decide again.
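A hand-written Python approximation of that kind of state machine (the names state and move_next only mirror the shape of compiler-generated code; this is a sketch, not what any compiler actually emits):

```python
# Hand-rolled approximation of a compiled async method along the lines of
#   async Task<int> F() { var a = Step1(); await X; return a + Step2(); }
# The body is split at the await into numbered states; move_next runs
# the current state and records where to resume.
class AsyncStateMachine:
    def __init__(self):
        self.state = 0
        self.a = None
        self.result = None

    def move_next(self):
        if self.state == 0:
            self.a = 1              # code before the await
            self.state = 1          # remember where to resume
            return False            # "awaiting": hand control back
        self.result = self.a + 2    # code after the await
        self.state = -1             # -1 means finished
        return True

sm = AsyncStateMachine()
while not sm.move_next():   # a scheduler repeatedly drives the machine
    pass
print(sm.result)  # → 3
```

In the real implementation, move_next is not polled in a loop like this; the awaited operation invokes it as a continuation when it completes.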

Further reading:

What does async & await generate?

Async Await and the Generated StateMachine

Asynchronous C# and F# (III.): How does it work? - Tomas Petricek

Edit:

Okay. My exposition seems to have been incorrect. However, I must point out that the state machine is an important asset of async-await. Even if you adopt asynchronous IO, you still need a helper to check whether the operation is complete, so we still need a state machine, and to determine which routines can be executed asynchronously together.

I'm not trying to compete with Eric Lippert or Lasse V. Karlsen and the others; I just want to draw attention to another facet of this question that I don't think was explicitly mentioned.

Using await by itself does not make your app magically responsive. If whatever you do in your method blocks the UI thread you are awaiting on, it will still block your UI just like the non-awaitable version would.

You have to write your awaitable method so that it either spawns a new thread or uses something like a completion port (which returns execution to the current thread and calls something else to continue whenever the completion port gets signalled). But that part is well explained in the other answers.
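As a sketch of that off-loading idea in Python's asyncio (analogous to, not the same as, C#'s Task.Run or IO completion ports): the blocking work runs on a pool thread, so the thread running the event loop stays free to respond.

```python
import asyncio
import threading
import time

# Blocking work done on the caller's thread would freeze the event loop,
# await or not. Off-loading it to a pool thread keeps the loop responsive.
def blocking_work():
    time.sleep(0.05)   # stands in for a blocking call
    return threading.current_thread() is not threading.main_thread()

async def main():
    loop = asyncio.get_running_loop()
    # run_in_executor returns an awaitable; the event-loop thread is free
    # to run other coroutines until the pool thread finishes.
    ran_off_main = await loop.run_in_executor(None, blocking_work)
    return ran_off_main

print(asyncio.run(main()))  # → True
```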
