I see at least one reason: the remote client stops reading data from its side of the TCP connection (i.e. it no longer calls recv() or an equivalent).
The scenario then unfolds as follows: the client's TCP receive buffer fills up, so its OS can no longer accept TCP segments from the peer because there is nowhere to queue them. As a result, the send buffer on the server side fills up too, since your application keeps writing to the socket. Soon your server can no longer write to the socket, because the send() system call will either:
- block for an indefinite time (waiting for the send buffer to have enough free space for the new data), or
- return the error EWOULDBLOCK (if you configured your socket as non-blocking), as in the sketch below.
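
For illustration, here is a minimal sketch (not from the original answer) of the non-blocking case in C: when the peer stops reading and the kernel send buffer fills up, send() fails with EWOULDBLOCK/EAGAIN and the application must retry later. The helper names make_nonblocking and send_some are made up for this example.

    /* Minimal sketch: non-blocking send with a full send buffer.
     * `fd` is assumed to be an already-connected TCP socket. */
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    static int make_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags < 0)
            return -1;
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

    /* Returns the number of bytes the kernel accepted (0 if the send
     * buffer is currently full), or -1 on a real error. */
    static ssize_t send_some(int fd, const void *buf, size_t len)
    {
        ssize_t n = send(fd, buf, len, 0);
        if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
            /* The peer has stopped reading and the kernel send buffer
             * is full: nothing was queued. Retry later, e.g. when
             * poll() reports POLLOUT on this socket. */
            return 0;
        }
        return n;
    }

On a blocking socket the same situation shows up differently: send() simply does not return until space frees up in the buffer, which it never will if the client never reads again.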
I have run into this situation in a TEST environment, after putting a breakpoint in my client-side code.