Why do I need sleep(1) to allow the socket to drain?

I downloaded the source code for a simple static web server from http://www.ibm.com/developerworks/systems/library/es-nweb/sidefile1.html

However, line 130 confuses me:

#ifdef LINUX
	sleep(1);	/* to allow socket to drain */
#endif
	exit(1);

Since there is no close() for the socket, does this mean I need to wait for the client to close the socket?

+5
2 answers

Regardless of the author's intention, it is unnecessary and incorrect. exit() is enough. When close() is called on a TCP socket, or exit() is called and the process terminates, if the SO_LINGER socket option has been left at its default, the kernel will keep the socket around and try to deliver any unsent/buffered data. You can see this with netstat, and it is the reason a quick restart of a TCP server that is not written for quick restarts will have trouble re-opening the port (there is a right way to do that, too).
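That "right way" is usually the SO_REUSEADDR socket option, set before bind(). A minimal sketch of the idea follows; the port number and the surrounding scaffolding are my own illustration, not taken from nweb:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); exit(1); }

    int on = 1;
    /* Without this, bind() fails with EADDRINUSE right after a restart,
     * while old connections are still lingering in the kernel. */
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) < 0) {
        perror("setsockopt");
        exit(1);
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);   /* arbitrary example port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        exit(1);
    }
    if (listen(fd, 64) < 0) { perror("listen"); exit(1); }

    /* ... accept() loop would go here ... */
    close(fd);
    return 0;
}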

I disagree with a few things in the accepted answer.

close() and exit() should have the same effect on the socket; traditionally it was just a matter of style whether you closed your sockets when you were about to exit.

It should have nothing to do with overflowing the TCP send buffer, because the sleep happens after all the writes. A full send buffer is reported immediately through write()'s return code at the time of the write; sleeping at the end would be irrelevant.
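To illustrate the point about write()'s return code, here is a sketch of a robust write loop; write_all() is a hypothetical helper of mine, not something from the nweb source:

#include <errno.h>
#include <unistd.h>

ssize_t write_all(int fd, const char *buf, size_t len)
{
    size_t done = 0;
    while (done < len) {
        ssize_t n = write(fd, buf + done, len - done);
        if (n < 0) {
            if (errno == EINTR)
                continue;          /* interrupted by a signal: retry */
            return -1;             /* real error (e.g. EPIPE), seen now, not at exit */
        }
        /* On a blocking socket, write() simply waits here while the
         * send buffer is full; there is nothing left to "drain" later. */
        done += (size_t)n;
    }
    return (ssize_t)done;
}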

sleep(1) should not affect the socket buffers or reliable data delivery. If anything, this code throttles the web server's child processes after the write, so it has no useful effect and can actually increase exposure to a denial-of-service attack.

I am describing the default behavior; the defaults can be changed through many options.

For the "bible" of socket programming, see W. Richard Stevens, UNIX Network Programming, Volume 1: Networking APIs: Sockets and XTI, which covers this in detail.

+4

This looks like somewhat sloppy code to me.

If a process with an open socket terminates while there is unwritten data in the socket, the kernel tears the socket down without flushing the unsent data.

When you write something to a socket, the written data is not necessarily transmitted immediately. The kernel maintains a small buffer that collects the data written to a socket (or a pipe). It is more efficient to let the process carry on, with the kernel taking care of actually transmitting the written data when it gets around to it.

A process can obviously write data to a socket much faster than it can be transmitted over a typical network interface, and the socket's internal buffer is limited in size. So if the process keeps writing, at some point it will fill the internal buffer and have to wait for the kernel to actually transmit some of the data[*], freeing room in the buffer, before it can write more.
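As a small illustration that this kernel buffer is real and bounded, you can inspect (and request a change to) its size with the standard SO_SNDBUF socket option. show_sndbuf() below is a hypothetical helper of mine, assuming fd is an open TCP socket:

#include <stdio.h>
#include <sys/socket.h>

void show_sndbuf(int fd)
{
    int size = 0;
    socklen_t len = sizeof(size);
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, &len) == 0)
        printf("kernel send buffer: %d bytes\n", size);

    int wanted = 64 * 1024;  /* a request, not a guarantee: the kernel may clamp it */
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &wanted, sizeof(wanted));
}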

[*] I am omitting some technical details here, for example that the data is not considered delivered until the receiver acknowledges it.

In any case, the purpose of this sleep() call is to give the kernel some time to actually transmit the contents of the internal buffer before the process terminates, because if the process exits before the data has actually been sent, the kernel won't bother sending it and will tear down the socket, as I just mentioned.

This is a bad example. It is the wrong way to do this kind of thing. The socket should simply be close()d. That cleans things up properly and makes sure everything goes where it should. I see no good reason why this example doesn't just close the socket correctly instead of resorting to this kind of hack.
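For illustration, here is a sketch of what the end of the child process could look like with an explicit close instead of the sleep(1) hack. finish() is a hypothetical helper of mine, and socketfd is assumed to be the connected socket from the original code:

#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>

void finish(int socketfd)
{
    /* Optional: shutdown() sends FIN right away and refuses further
     * writes, making the intent explicit. */
    shutdown(socketfd, SHUT_WR);

    /* close() releases the descriptor; with SO_LINGER at its default,
     * the kernel keeps trying to deliver any still-buffered data. */
    close(socketfd);

    exit(1);  /* the exit status used in the original code */
}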

+1

Source: https://habr.com/ru/post/1204072/

