In Node.js, requests should be IO-bound, not CPU-bound. That means a request should not force Node.js to do heavy computation; if serving a request involves a lot of computation, Node.js is a poor choice. IO-bound work needs very little CPU time: most of a request's lifetime is spent waiting on calls to a database or another service.
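To make the distinction concrete, here is a minimal sketch (the handler and function names are illustrative, not from any real API): a CPU-bound handler keeps the event loop busy until it returns, while an IO-bound handler delegates the wait and frees the loop immediately.

```javascript
// Illustrative CPU-heavy function (naive recursion on purpose).
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// CPU-bound: the event loop is stuck until fib() returns.
function cpuBoundHandler() {
  return fib(30); // heavy synchronous computation
}

// IO-bound: the wait is delegated (simulated here with setTimeout;
// in practice a DB or network call) and the event loop stays free.
function ioBoundHandler(callback) {
  setTimeout(() => callback('db result'), 10);
}
```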
Node.js has a single-threaded event loop, but that thread is only the chef. Behind the scenes most of the work is done by the operating system, and libuv handles the communication with the OS. From the libuv documentation:
In event-driven programming, an application expresses interest in certain events and respond to them when they occur. The responsibility of gathering events from the operating system or monitoring other sources of events is handled by libuv, and the user can register callbacks to be invoked when an event occurs.
The incoming requests are handled by the operating system. This is true for almost all servers based on the request-response model: incoming network calls are queued in the OS's non-blocking IO queue. The event loop constantly polls the OS IO queue; that is how it learns about incoming client requests. "Polling" means checking the status of some resource at a regular interval. When there is an incoming request, the event loop takes it and executes it synchronously. If, during execution, an async call is made (e.g. setTimeout), its callback is put into the callback queue. After the event loop finishes the synchronous work, it polls the callback queues; if it finds a callback that is ready, it executes it, and then polls for the next incoming request. The Node.js docs include this image:
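The ordering described above can be observed directly: the synchronous pass always runs to completion before any queued callback executes (the `order` array is just for illustration).

```javascript
const order = [];

// Queued: this callback cannot run until the synchronous pass is done,
// even with a 0 ms delay.
setTimeout(() => order.push('async callback'), 0);

order.push('sync 1'); // executes immediately
order.push('sync 2'); // still the same synchronous pass
```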
From the phases overview in the docs:

poll: retrieve new I/O events; execute I/O related callbacks (almost all with the exception of close callbacks, the ones scheduled by timers, and setImmediate()); node will block here when appropriate.
The event loop constantly polls its different queues. If a request needs an external call or disk access, it is handed off to the OS, which has its own queues. As soon as the event loop detects that something must be done asynchronously, it puts it into a queue and moves on to the next task.
One thing to note here: the event loop runs continuously. Only the OS scheduler can take this thread off the CPU; the event loop never stops on its own.
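This is easy to demonstrate: a long synchronous loop holds on to the event loop, so even an already-expired 0 ms timer cannot fire until the loop finishes.

```javascript
const start = Date.now();
let firedAfter = null;

setTimeout(() => { firedAfter = Date.now() - start; }, 0); // 0 ms timer

// Busy-wait for ~200 ms: the event loop is occupied, so nothing else
// (not even the expired timer above) can run until this loop ends.
while (Date.now() - start < 200) {}
```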
From the docs:
The secret to the scalability of Node.js is that it uses a small
number of threads to handle many clients. If Node.js can make do with
fewer threads, then it can spend more of your system's time and memory
working on clients rather than on paying space and time overheads for
threads (memory, context-switching). But because Node.js has only a
few threads, you must structure your application to use them wisely.
Here's a good rule of thumb for keeping your Node.js server speedy:
Node.js is fast when the work associated with each client at any given
time is "small".
Note that "small" work means IO-bound tasks rather than CPU-bound ones. A single event loop can handle the client load only when the work for each request is mostly IO.
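When some CPU work is unavoidable, one way to keep each unit of work "small" is to partition it across event-loop turns. A sketch of that idea (`sumPartitioned` is a hypothetical helper, not a library function): the computation yields with setImmediate() between slices so other clients can be served in between.

```javascript
// Hypothetical helper: sums 0..n-1 in slices of 1000, yielding to the
// event loop between slices with setImmediate() so other clients can
// be served while the computation progresses.
function sumPartitioned(n, callback) {
  let i = 0;
  let sum = 0;
  function slice() {
    const end = Math.min(i + 1000, n);
    for (; i < end; i++) sum += i;
    if (i < n) setImmediate(slice); // yield: each slice stays small
    else callback(sum);
  }
  slice();
}
```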
A context switch means the CPU must stop executing one process so that another can run. The OS first evicts process1: it takes the process off the CPU and saves its state (its process control block) in main memory. Next, the OS restores process2 by loading its process control block from memory and putting it on the CPU for execution; then process2 starts running. Between process1 stopping and process2 starting, some time is lost. A large number of threads can cause a heavily loaded system to spend precious cycles on thread scheduling and context switching, which adds latency and imposes limits on scalability and throughput.