Why would an EventMachine connection's outbound buffer stop sending and just fill up forever (while other connections can still send)?

I have an EventMachine server sending TCP data to a Mac client (via GCDAsyncSocket). It always works flawlessly for a while, but inevitably the server suddenly stops sending data, on a connection-by-connection basis. The connection stays open, and the server still receives data from the client, but nothing goes out in the other direction anymore.

When this happens, I discovered via the connection's #get_outbound_data_size that the connection's outbound buffer (filled through #send_data) just keeps growing and is never sent to the client.

Are there specific (and hopefully correctable) reasons why this could happen? The reactor keeps humming along, and other active connections to the server continue to work normally (although they, too, sometimes end up in buffer hell).
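Since EventMachine already exposes the size of that outbound queue, one pragmatic mitigation (while the root cause is being tracked down) is a watchdog timer that drops any connection whose buffer keeps growing. The sketch below is a minimal illustration only; the host, port, payload, threshold, and timer intervals are assumptions, not the asker's actual setup:

```ruby
require "eventmachine"
require "set"

CONNECTIONS = Set.new
MAX_OUTBOUND_BYTES = 1_000_000  # hypothetical cutoff before we give up on a peer

module StreamingConnection
  def post_init
    CONNECTIONS << self
  end

  def receive_data(data)
    # Inbound data keeps arriving even when the peer has stopped reading,
    # which matches the behaviour described above.
  end

  def unbind
    CONNECTIONS.delete(self)
  end
end

EventMachine.run do
  EventMachine.start_server("0.0.0.0", 9000, StreamingConnection)

  # Stand-in for the real data feed: push something to every client each second.
  EventMachine.add_periodic_timer(1) do
    CONNECTIONS.to_a.each { |conn| conn.send_data("tick #{Time.now}\n") }
  end

  # Watchdog: if a connection's outbound buffer keeps growing, the peer is
  # almost certainly not draining it, so drop it instead of buffering forever.
  EventMachine.add_periodic_timer(5) do
    CONNECTIONS.to_a.each do |conn|
      queued = conn.get_outbound_data_size
      if queued > MAX_OUTBOUND_BYTES
        puts "peer not draining (#{queued} bytes queued), closing connection"
        conn.close_connection  # not close_connection_after_writing: that data will never drain
      end
    end
  end
end
```

This does not explain why the buffer stops draining, but it keeps a stuck peer from consuming server memory indefinitely.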

+6
2 answers

I can see at least one reason: the remote client no longer reads data from its side of the TCP connection (with a call to recv() or whatever).

The scenario is then as follows: the client's TCP receive buffer fills up, and its OS can no longer accept TCP packets from its peer because it has nowhere to queue them. As a result, the TCP send buffer on the server side fills up too, since your application keeps writing to the socket! Soon your server can no longer write to the socket at all, because the send() system call will either:

  • block indefinitely (waiting for the buffer to have enough room for the new data), or
  • return with an EWOULDBLOCK error (if you configured your socket as non-blocking).

I have run into this kind of situation in a TEST environment, when I put a breakpoint in my client-side code.
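A breakpoint is one way to trigger it; a client that connects and then simply never reads reproduces the same thing deterministically. The plain-Ruby sketch below is only an illustration of that scenario (the address and port are assumptions). Note that because EventMachine puts its sockets in non-blocking mode and queues outgoing data in userspace, send_data itself never blocks or returns EWOULDBLOCK; what you observe instead is exactly what the question describes, #get_outbound_data_size growing without bound once the kernel buffers on both sides are full:

```ruby
require "socket"

HOST = "127.0.0.1"  # assumed address of the EventMachine server
PORT = 9000         # assumed port

sock = TCPSocket.new(HOST, PORT)
puts "connected; holding the connection open but never reading..."

# Never call sock.read / sock.recv. Once this client's kernel receive buffer
# and the server's kernel send buffer fill up, the server's userspace
# outbound buffer starts growing on every send_data call.
sleep
```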

0

On March 23 a patch was applied to GCDAsyncSocket that prevents reads from stalling. Did that patch fix your problem?

0

Source: https://habr.com/ru/post/908073/

