Can I apply flow control to my WebSockets?

I use WebSockets to transfer video-like images from a server written in Go to the client, which is an HTML page. Below is my experience with Chrome.

I receive images through the onmessage WebSocket handler. When an image arrives, I may need to complete several asynchronous tasks before I can display it, and another onmessage() may fire before those tasks are finished. I don't want to queue images, because right now the client can't keep up with the server, and it makes no sense to display stale images. And I don't just want to drop those images; I don't want to receive them at all.

If the client were using a raw TCP connection, it would simply stop reading from the connection. The receive buffer would fill up, the receive window would close, and the server would eventually stop sending images. As soon as the client resumed reading, the receive buffer would drain, the receive window would open, and the server would resume transmission. Each time my server starts sending an image, it picks the most recent one. This is exactly the behavior I want, and TCP flow control gives it to me, along with reasonable behavior in many other cases.
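The "pick the most recent image" part of this server behavior can be expressed in Go independently of the transport. Below is a minimal sketch (names like `pushLatest` are my own, not from the question): a capacity-1 channel holds the pending frame, and the producer discards a stale frame rather than queueing behind it, so whenever the sender unblocks it sees only the newest image.

```go
package main

import "fmt"

// pushLatest places frame into a capacity-1 channel, discarding any
// older frame that has not been consumed yet, so the consumer always
// sees the most recent frame instead of a backlog.
func pushLatest(frames chan []byte, frame []byte) {
	for {
		select {
		case frames <- frame:
			return
		default:
			// Channel full: drop the stale frame and retry the send.
			select {
			case <-frames:
			default:
			}
		}
	}
}

func main() {
	frames := make(chan []byte, 1)
	pushLatest(frames, []byte("frame-1"))
	pushLatest(frames, []byte("frame-2")) // frame-1 is discarded
	fmt.Println(string(<-frames))         // prints "frame-2"
}
```

The sending goroutine would block on the actual socket write (where TCP backpressure applies) and call `<-frames` for each new transmission.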

Is it possible to get at the TCP flow control that WebSockets are built on through the WebSocket API? I am specifically interested in a solution that relies on TCP flow control rather than application-level flow control, since the latter introduces undesirable extra latency.

3 answers

I doubt that what you are asking for is possible. The WebSocket API specification exposes no interface for this. On the contrary, the specification requires that the underlying socket be managed in the background, outside the script using the WebSocket, so that the script is never blocked by WebSocket actions. When the socket receives incoming data, it wraps the data in a message and queues it for the WebSocket script to process. Nothing stops the socket from reading more data while messages remain in the queue waiting for the script.

The only real flow control you can implement over WebSocket is explicit: when a message arrives, send back an acknowledgement, and make the server wait for that acknowledgement before sending the next message.
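This stop-and-wait scheme is easy to sketch on the Go server side. The following is a simulation only, not code from the answer: the "wire" is a pair of channels, where with a real WebSocket library the send and the acknowledgement would each be socket messages.

```go
package main

import "fmt"

// sendWithAck implements stop-and-wait flow control: after sending a
// message it blocks until the peer's acknowledgement arrives before
// sending the next one. Channels stand in for the WebSocket here.
func sendWithAck(messages []string, out chan<- string, ack <-chan struct{}) {
	for _, m := range messages {
		out <- m
		<-ack // wait for the client to confirm before sending more
	}
	close(out)
}

func main() {
	out := make(chan string)
	ack := make(chan struct{})

	go sendWithAck([]string{"img-1", "img-2", "img-3"}, out, ack)

	// Client side: process each message fully, then acknowledge it.
	for m := range out {
		fmt.Println("processed", m)
		ack <- struct{}{}
	}
}
```

Combined with the "latest frame only" pattern from the question, the server would pick the newest image each time an acknowledgement arrives, at the cost of one round trip of added latency per image.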


You can perform flow control on WebSocket connections (by building on the flow control of the underlying TCP connection). Here are two links to get you started:

Disclosure: I am the original author of Autobahn and work for Tavendo.


It is now possible to have streams with WebSockets. Chrome 78 will ship with a new WebSocketStream API that supports backpressure.

Here is a quote from Chrome Platform Status:

The WebSocket API provides a JavaScript interface to the RFC6455 WebSocket protocol. While it has served well, it is awkward from an ergonomics perspective and is missing the important feature of backpressure. The intent of the WebSocketStream API is to resolve these deficiencies by integrating streams with the WebSocket API.

Currently, applying backpressure to received messages is not possible with the WebSocket API. When messages arrive faster than the page can process them, the render process either fills up memory buffering those messages, becomes unresponsive due to 100% CPU usage, or both.

Applying backpressure to sent messages is possible, but involves polling the bufferedAmount property, which is inefficient and unergonomic.

Unfortunately, this API is Chrome-only, and there is no web standard for it at the time of writing.

For more information see:


Source: https://habr.com/ru/post/956119/
