There are plenty of blogs and discussions about WebSocket and HTTP, and many developers and sites strongly advocate WebSockets, but I still cannot understand why.

For example (arguments of WebSocket lovers):

HTML5 Web Sockets represents the next evolution of web communications: a full-duplex, bidirectional communications channel that operates through a single socket over the Web. - websocket.org

HTTP supports streaming: request body streaming (you use it while uploading large files) and response body streaming.

While communicating over a WebSocket connection, the client and server exchange data per frame at 2 bytes each, compared to the 8 kilobytes of HTTP headers exchanged when you do continuous polling.

Why do those 2 bytes not include TCP and the overhead of the protocols under TCP?

GET /about.html HTTP/1.1
Host: example.org

That is about 48 bytes of HTTP header.

HTTP chunked encoding (chunked transfer encoding):

23
This is the data in the first chunk
1A
and this is the second one
3
con
8
sequence
0

So the overhead per chunk is not big.
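For reference, producing such a chunked response is straightforward on the server side. A minimal sketch in TypeScript on Node.js; the port and payload strings are just for illustration:

import * as http from 'node:http';

// Minimal sketch: writing the body in pieces without setting Content-Length
// makes Node send it with Transfer-Encoding: chunked.
http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.write('This is the data in the first chunk\n');
  res.write('and this is the second one\n');
  // Final piece; the 0-length terminating chunk is emitted automatically.
  setTimeout(() => res.end('consequence\n'), 1000);
}).listen(8080);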

Also, both protocols work over TCP, so all the TCP issues with long-lived connections are still there.

Questions:

Why is the WebSockets protocol better? Why implement it instead of updating the HTTP protocol?


You seem to assume that WebSocket is a replacement for HTTP. It is not. It is an extension.

The main use case for WebSockets is JavaScript applications which run in the web browser and receive real-time data from a server. Games are a good example.

Before WebSockets, the only way for a JavaScript application to interact with a server was through XmlHttpRequest. But these have a major disadvantage: the server cannot send data unless the client has explicitly requested it.

But the new WebSocket feature allows the server to send data whenever it wants. This makes it possible to implement browser-based games with much lower latency and without having to use ugly hacks like AJAX long-polling or browser plugins.
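As a rough illustration of that difference, sketched in TypeScript with the standard browser WebSocket API; the wss://game.example/play endpoint and the message shape are hypothetical:

// Open a long-lived connection to a hypothetical game server.
const socket = new WebSocket('wss://game.example/play');

// The server can push a message at any time; no prior request is needed.
socket.onmessage = (event: MessageEvent) => {
  console.log('server pushed:', event.data);
};

// The client can also send at any time once the connection is open.
socket.onopen = () => {
  socket.send(JSON.stringify({ type: 'join', player: 'alice' }));
};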

So why not use normal HTTP with streamed requests and responses?

In a comment to another answer you suggested simply streaming the client request and response body asynchronously.

In fact, WebSockets are basically that. An attempt to open a WebSocket connection from the client looks like an HTTP request at first, but a special directive in the header (Upgrade: websocket) tells the server to start communicating in this asynchronous mode. The first drafts of the WebSocket protocol were not much more than that plus some handshaking to ensure that the server actually understands that the client wants to communicate asynchronously. But then it was realized that proxy servers would be confused by that, because they are used to the usual request/response model of HTTP. A potential attack scenario against proxy servers was discovered. To prevent this it was necessary to make WebSocket traffic look unlike any normal HTTP traffic. That's why the masking keys were introduced in the final version of the protocol.


1) Why is the WebSockets protocol better?

WebSockets is better for situations that involve low-latency communication, especially low latency for client-to-server messages. For server-to-client data you can get fairly low latency using long-held connections and chunked transfer. However, this does not help with client-to-server latency, which requires a new connection to be established for each client-to-server message.

Your 48-byte HTTP handshake is not realistic for real-world HTTP browser connections, where several kilobytes of data are often sent as part of the request (in both directions), including many headers and cookie data. Here is an example of a request/response using Chrome:

Example request (2800 bytes including cookie data, 490 bytes without cookie data):

GET / HTTP/1.1
Host: www.cnn.com
Connection: keep-alive
Cache-Control: no-cache
Pragma: no-cache
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.68 Safari/537.17
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: [[[2428 byte of cookie data]]]

Example response (355 bytes):

HTTP/1.1 200 OK
Server: nginx
Date: Wed, 13 Feb 2013 18:56:27 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
Set-Cookie: CG=US:TX:Arlington; path=/
Last-Modified: Wed, 13 Feb 2013 18:55:22 GMT
Vary: Accept-Encoding
Cache-Control: max-age=60, private
Expires: Wed, 13 Feb 2013 18:56:54 GMT
Content-Encoding: gzip

Both HTTP and WebSockets have equivalently sized initial connection handshakes, but with a WebSocket connection the initial handshake is performed once and small messages then have only 6 bytes of overhead (2 for the header and 4 for the mask value). The latency overhead comes not so much from the size of the headers as from the logic to parse, handle and store those headers. In addition, TCP connection setup latency is probably a bigger factor than the size of, or processing time for, each request.
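To make that 6-byte figure concrete, here is a sketch (TypeScript, Node Buffers) of how a client could build a small masked text frame as described in RFC 6455; it only handles payloads shorter than 126 bytes:

import { randomBytes } from 'node:crypto';

// Client-to-server text frame for payloads under 126 bytes:
// 2 header bytes + 4 masking-key bytes + masked payload.
function buildClientTextFrame(text: string): Buffer {
  const payload = Buffer.from(text, 'utf8');
  if (payload.length > 125) throw new Error('sketch only handles short payloads');

  const mask = randomBytes(4);            // 4-byte masking key
  const header = Buffer.from([
    0x81,                                 // FIN = 1, opcode = 1 (text)
    0x80 | payload.length,                // MASK = 1, 7-bit payload length
  ]);

  const masked = Buffer.alloc(payload.length);
  for (let i = 0; i < payload.length; i++) {
    masked[i] = payload[i] ^ mask[i % 4]; // XOR each byte with the rotating mask
  }
  return Buffer.concat([header, mask, masked]); // total overhead: 6 bytes
}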

2) Why implement it instead of updating the HTTP protocol?

There are efforts to re-engineer the HTTP protocol to achieve better performance and lower latency, such as SPDY, HTTP 2.0 and QUIC. These will improve the situation for normal HTTP requests, but it is likely that WebSockets and/or the WebRTC DataChannel will still have lower latency for client-to-server data transfer than the HTTP protocol (or it would have to be used in a mode that looks a lot like WebSockets anyway).

Update:

Here is a framework for thinking about web protocols:

TCP: a low-level, bidirectional, full-duplex, guaranteed-order transport layer. No browser support (except via plugin/Flash).

HTTP 1.0: a request-response transport protocol layered on TCP. The client makes one full request, the server gives one full response, and then the connection is closed. The request methods (GET, POST, HEAD) have specific transactional meanings for resources on the server.

HTTP 1.1: maintains the request-response nature of HTTP 1.0, but allows the connection to stay open for multiple full requests/full responses (one response per request). It still has full headers in the request and response, but the connection is re-used and not closed. HTTP 1.1 also added some additional request methods (OPTIONS, PUT, DELETE, TRACE, CONNECT) which also have specific transactional meanings. However, as noted in the introduction to the HTTP 2.0 draft proposal, HTTP 1.1 pipelining is not widely deployed, which greatly limits the utility of HTTP 1.1 for solving latency between browsers and servers.

Long-poll: sort of a "hack" to HTTP (either 1.0 or 1.1) where the server does not respond immediately (or only responds partially with headers) to the client request. After a server response, the client immediately sends a new request (using the same connection if over HTTP 1.1).

HTTP streaming: a variety of techniques (multipart/chunked response) that allow the server to send more than one response to a single client request. The W3C is standardizing this as Server-Sent Events using a text/event-stream MIME type. The browser API (which is fairly similar to the WebSocket API) is called the EventSource API (see the sketch after this list).

Comet/server push: an umbrella term that includes both long-poll and HTTP streaming. Comet libraries usually support multiple techniques to try to maximize cross-browser and cross-server support.

WebSockets: a transport layer built on TCP that uses an HTTP-friendly Upgrade handshake. Unlike TCP, which is a streaming transport, WebSockets is a message-based transport: messages are delimited on the wire and are re-assembled in full before delivery to the application. WebSocket connections are bidirectional, full-duplex and long-lived. After the initial handshake request/response there are no transactional semantics and very little per-message overhead. The client and server may send messages at any time and must handle message receipt asynchronously.

SPDY: a Google-initiated proposal to extend HTTP using a more efficient wire protocol while maintaining all HTTP semantics (request/response, cookies, encoding). SPDY introduces a new framing format (with length-prefixed frames) and specifies a way of layering HTTP request/response pairs onto the new framing layer. Headers can be compressed and new headers can be sent after the connection has been established. There are real-world implementations of SPDY in browsers and servers.

HTTP 2.0: has similar goals to SPDY: reduce HTTP latency and overhead while preserving HTTP semantics. The current draft is derived from SPDY and defines an upgrade handshake and data framing that is very similar to the WebSocket standard for handshake and framing. An alternate HTTP 2.0 draft proposal (httpbis-speed-mobility) actually uses WebSockets for the transport layer and adds the SPDY multiplexing and HTTP mapping as a WebSocket extension (WebSocket extensions are negotiated during the handshake).

WebRTC/CU-WebRTC: proposals to allow peer-to-peer connectivity between browsers. This may enable lower average and maximum latency communication because the underlying transport is SDP/datagram rather than TCP. This allows out-of-order delivery of packets/messages, which avoids the TCP issue of latency spikes caused by dropped packets delaying delivery of all subsequent packets (to guarantee in-order delivery).

QUIC: an experimental protocol aimed at reducing web latency over that of TCP. On the surface, QUIC is very similar to TCP+TLS+SPDY implemented on UDP. QUIC provides multiplexing and flow control equivalent to HTTP/2, security equivalent to TLS, and connection semantics, reliability, and congestion control equivalent to TCP. Because TCP is implemented in operating system kernels and middlebox firmware, making significant changes to TCP is next to impossible. However, since QUIC is built on top of UDP, it suffers from no such limitations. QUIC is designed and optimised for HTTP/2 semantics.
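For comparison with the WebSocket API, the EventSource side of the HTTP streaming / Server-Sent Events technique mentioned above looks roughly like this in the browser (TypeScript; the /updates URL is made up):

// Server-Sent Events: one long-lived HTTP response, server-to-client only.
const source = new EventSource('/updates');

source.onmessage = (event: MessageEvent) => {
  console.log('server sent:', event.data);
};

// Unlike a WebSocket there is no source.send(); client-to-server messages
// still require separate HTTP requests (e.g. XHR/fetch).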

References:

HTTP:

Wikipedia HTTP page
W3C list of HTTP-related drafts/protocols
List of IETF HTTP/1.1 and HTTP/2.0 drafts

Server-Sent Events:

W3C Server-Sent Events/EventSource Candidate Recommendation
W3C Server-Sent Events/EventSource draft

WebSockets:

IETF RFC 6455 WebSockets protocol
IETF RFC 6455 WebSocket errata

SPDY:

IETF SPDY draft

HTTP 2.0:

IETF HTTP 2.0 httpbis-http2 draft
IETF HTTP 2.0 httpbis-speed-mobility draft
IETF httpbis-network-friendly draft (an older HTTP 2.0 related proposal)

WebRTC:

W3C WebRTC API draft
IETF list of WebRTC drafts
IETF WebRTC overview draft
IETF WebRTC DataChannel draft
Microsoft CU-WebRTC proposal start page

QUIC:

QUIC Chromium project
IETF QUIC draft


For the TL;DR crowd, here are my two cents and a simpler version of the answers to your questions:

WebSockets provides these benefits over HTTP:

Persistent stateful connection for the duration of the connection.

Low latency: near-real-time communication between server and client, because there is no overhead of re-establishing a connection for each request as HTTP requires.

Full duplex: both server and client can send and receive simultaneously.

WebSocket and the HTTP protocol were designed to solve different problems, i.e. WebSocket was designed to improve bidirectional communication whereas HTTP was designed to be stateless and distributed using a request/response model. Other than sharing the same ports for legacy reasons (firewall/proxy penetration), there is not much common ground to combine them into one protocol.


The other answers do not seem to touch on a key aspect here, namely that you never mention requiring support for a web browser as the client. Most of the limitations of plain HTTP above assume you are working with browser/JS implementations.

The HTTP protocol is fully capable of full-duplex communication; it is legal to have a client perform a POST with a chunked-encoding transfer while the server returns a response with a chunked-encoding body. This removes the header overhead after initialization.
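A rough sketch of that idea using Node's built-in http module (TypeScript); the example.org/duplex endpoint is hypothetical, and it assumes a server that starts streaming its chunked response before the chunked request body has finished:

import * as http from 'node:http';

// Client side: POST a chunked request body while reading a chunked response.
const req = http.request(
  { host: 'example.org', path: '/duplex', method: 'POST' },
  (res) => {
    // Server-to-client direction: chunks arrive as the server writes them.
    res.on('data', (chunk) => console.log('from server:', chunk.toString()));
  }
);

// Client-to-server direction: no Content-Length is set, so Node uses
// Transfer-Encoding: chunked and each write() becomes its own chunk.
setInterval(() => req.write(`ping ${Date.now()}\n`), 1000);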

So if all you are looking for is full duplex, you control both client and server, and you are not interested in the extra framing/features of WebSockets, then I would argue that HTTP is a simpler approach with lower latency/CPU (although in practice the latency difference would only be on the order of microseconds or less either way).


Why is the WebSockets protocol better?

I do not think we can compare them side by side and ask which is better. That would not be a fair comparison, simply because they solve two different problems. Their requirements are different. It is like comparing apples to oranges. They are different.

HTTP is a request-response protocol. The client (browser) wants something, and the server gives it. That is it. If what the client wants is big, the server might send the data as a stream to avoid unwanted buffering problems. The main requirement or problem here is how clients make requests and how servers respond with the resources (hypertext) they requested. That is where HTTP shines.

With HTTP, only the client requests, and the server only responds.

WebSocket is not a request-response protocol where only the client can request. It is a socket (very similar to a TCP socket). Once the connection is open, either side can send data until the underlying TCP connection is closed. It is just like a normal socket. The only difference from a TCP socket is that WebSocket can be used on the web. On the web we have many restrictions on a normal socket. Most firewalls block ports other than 80 and 443, which HTTP uses. Proxies and intermediaries are problematic as well. So, to make the protocol easier to deploy on existing infrastructure, WebSocket uses an HTTP handshake to upgrade. That means when the connection is opened for the first time, the client sends an HTTP request telling the server, in effect, "This is not an HTTP request, please upgrade to the WebSocket protocol":

Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==
Sec-WebSocket-Protocol: chat, superchat
Sec-WebSocket-Version: 13

Once the server understands the request and upgrades to the WebSocket protocol, the HTTP protocol no longer applies.
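For illustration, the server's half of that handshake is a 101 response whose Sec-WebSocket-Accept value is derived from the client key above. A sketch of that derivation in TypeScript using Node's crypto module and the fixed GUID from RFC 6455:

import { createHash } from 'node:crypto';

// RFC 6455: accept = base64(SHA-1(client key + fixed GUID)).
const WS_GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

function acceptValue(clientKey: string): string {
  return createHash('sha1').update(clientKey + WS_GUID).digest('base64');
}

// The server then answers with something like:
//   HTTP/1.1 101 Switching Protocols
//   Upgrade: websocket
//   Connection: Upgrade
//   Sec-WebSocket-Accept: <acceptValue('x3JJHMbDL1EzLkh9GBhXDw==')>
// and from that point on only WebSocket frames travel over the connection.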

So my answer is that neither is better than the other. They are completely different.

Why implement it instead of updating the HTTP protocol?

Well, we could name everything HTTP too. But should we? If they are two different things, I would prefer two different names. So would Hickson and Michael Carter.


A regular REST API uses HTTP as the underlying protocol for communication, which follows the request-and-response paradigm, meaning the communication involves the client requesting some data or resource from the server and the server responding to that client. However, HTTP is a stateless protocol, so every request-response cycle has to repeat the header and metadata information. This incurs additional latency when request-response cycles are repeated frequently.

With WebSockets, although the communication still starts off as an initial HTTP handshake, it is then upgraded to follow the WebSockets protocol (that is, if both the server and the client comply with the protocol, since not all entities support WebSockets).

Now with WebSockets it is possible to establish a full-duplex, persistent connection between the client and the server. This means that, unlike a request and a response, the connection stays open for as long as the application is running (i.e. it is persistent), and because it is full duplex, simultaneous two-way communication is possible: the server is now able to initiate communication and "push" data to the client whenever new data (that the client is interested in) becomes available.

The WebSockets protocol is stateful and allows you to implement the Publish-Subscribe (or Pub/Sub) messaging pattern which is the primary concept used in the real-time technologies where you are able to get new updates in the form of server push without the client having to request (refresh the page) repeatedly. Examples of such applications are Uber car's location tracking, Push Notifications, Stock market prices updating in real-time, chat, multiplayer games, live online collaboration tools, etc.
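As a minimal pub/sub-style sketch (TypeScript on Node, assuming the third-party ws package; the port and message shape are made up), the server can broadcast an update to every subscribed client without any of them asking for it:

import { WebSocketServer, WebSocket } from 'ws';

const wss = new WebSocketServer({ port: 8080 });
const subscribers = new Set<WebSocket>();

wss.on('connection', (socket) => {
  subscribers.add(socket);                        // treat every connection as a subscriber
  socket.on('close', () => subscribers.delete(socket));
});

// Publish: push an update to all subscribers as soon as it is known.
function publish(update: { symbol: string; price: number }): void {
  const message = JSON.stringify(update);
  for (const socket of subscribers) {
    if (socket.readyState === WebSocket.OPEN) socket.send(message);
  }
}

publish({ symbol: 'ACME', price: 42.5 });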

You can check out this in-depth article on WebSockets, which explains the history of the protocol, how it came into being, what it is used for, and how you can implement it yourself.

And here is a video of a presentation I gave about WebSockets and how they differ from using regular REST APIs: Standardisation and leveraging the exponential growth of data streaming.