Why do non-blocking SocketChannel writes always complete in full, even for large buffers?

Using the Sun Java VM 1.5 or 1.6 on Windows, I connect a non-blocking SocketChannel, fill a ByteBuffer with a large message, and call write() on the SocketChannel.

I expect the write to complete only partially when the amount to be written exceeds the free space in the TCP socket send buffer (this is what I intuitively expect, and it is also pretty much my reading of the docs), but that is not what happens. write() seems to report a full write every time, even for a buffer of a few megabytes (SO_SNDBUF on this socket is 8 KB, much smaller than my multi-megabyte message).

The problem is that I cannot test the code that handles the partially-written case (registering OP_WRITE interest with the selector and calling select() to wait until the remainder can be written), since this case never occurs. What am I misunderstanding?
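For reference, the select()/OP_WRITE pattern I have in mind looks roughly like this self-contained sketch (the class name PartialWriteDemo, the 4 MB message size, and the deliberately slow reader thread are all illustrative choices of mine; the slow reader is there just so the partial-write case can actually be forced on loopback):

```java
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class PartialWriteDemo {
    static final int TOTAL = 4 * 1024 * 1024; // 4 MB, far larger than any SO_SNDBUF

    public static long run() throws Exception {
        final ServerSocket ss = new ServerSocket(0); // ephemeral port
        // Slow reader: starts draining only after a short delay, so the
        // client's send buffer fills up and write() returns a partial count.
        Thread reader = new Thread(() -> {
            try (Socket s = ss.accept()) {
                Thread.sleep(200); // let the buffers fill
                InputStream in = s.getInputStream();
                byte[] tmp = new byte[64 * 1024];
                long read = 0;
                while (read < TOTAL) {
                    int n = in.read(tmp);
                    if (n == -1) break;
                    read += n;
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        reader.start();

        SocketChannel ch = SocketChannel.open();
        ch.configureBlocking(false);
        ch.connect(new InetSocketAddress("localhost", ss.getLocalPort()));
        while (!ch.finishConnect()) Thread.sleep(1);

        ByteBuffer buf = ByteBuffer.allocate(TOTAL);
        long written = ch.write(buf); // likely partial: send buffer fills up
        if (buf.hasRemaining()) {
            Selector sel = Selector.open();
            ch.register(sel, SelectionKey.OP_WRITE);
            while (buf.hasRemaining()) {
                sel.select();              // block until the channel is writable
                sel.selectedKeys().clear();
                written += ch.write(buf);  // write as much as now fits
            }
            sel.close();
        }
        ch.close();
        reader.join();
        ss.close();
        return written;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("total written: " + run());
    }
}
```

With the slow reader in place, the first write() returns a short count and the selector loop drains the rest, so run() ends up writing all 4 MB.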

+4
6 answers

I was able to reproduce a situation that may be similar to yours. I suspect that, ironically, your receiver is consuming the data faster than you can write it.

    import java.io.InputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class MyServer {
        public static void main(String[] args) throws Exception {
            final ServerSocket ss = new ServerSocket(12345);
            final Socket cs = ss.accept();
            System.out.println("Accepted connection");
            final InputStream in = cs.getInputStream();
            final byte[] tmp = new byte[64 * 1024];
            while (in.read(tmp) != -1);
            Thread.sleep(100000);
        }
    }

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    public class MyNioClient {
        public static void main(String[] args) throws Exception {
            final SocketChannel s = SocketChannel.open();
            s.configureBlocking(false);
            s.connect(new InetSocketAddress("localhost", 12345));
            s.finishConnect();
            final ByteBuffer buf = ByteBuffer.allocate(128 * 1024);
            for (int i = 0; i < 10; i++) {
                System.out.println("to write: " + buf.remaining()
                        + ", written: " + s.write(buf));
                buf.position(0);
            }
            Thread.sleep(100000);
        }
    }

If you start the above server and then run the above client, which tries to write ten chunks of 128 KB each, you will see that each write operation writes the entire buffer without blocking. However, if you modify the server so that it does not read anything from the connection, you will see that only the first write on the client writes 128 KB, while all subsequent writes return 0.

Output when the server reads the connection:

    to write: 131072, written: 131072
    to write: 131072, written: 131072
    to write: 131072, written: 131072
    ...

Output when the server does not read the connection:

    to write: 131072, written: 131072
    to write: 131072, written: 0
    to write: 131072, written: 0
    ...
+7

I have worked with UDP in Java and have seen some genuinely interesting and completely undocumented behavior in Java NIO. The best way to determine what is actually happening is to look at the source that ships with Java.

You may also find a better-documented implementation of what you are looking for in another JVM, such as IBM's, but I cannot vouch for that, as I have not looked at them myself.

+2

I cannot find it documented, but IIRC [1], send() is guaranteed to either (a) send the supplied buffer completely or (b) fail. It will never complete the send partially.

[1] I wrote several Winsock implementations (for Win 3.0, Win 95, Win NT, etc.), so this may be Winsock-specific behavior rather than generic sockets behavior.

+1

I will take a big leap of faith and assume that the network layer underlying Java is the same as for C... The OS allocates more than just SO_SNDBUF per socket. I bet that if you put your send code in a loop of, say, 100,000 iterations, you will eventually get a write that completes with less than the requested amount.

0

You should really look at an NIO framework like MINA or Grizzly. I used MINA with great success on an enterprise chat server; it is also used in the Openfire chat server. Grizzly is used in Sun's Java EE implementation.

0

Where are you sending the data? Keep in mind that the network itself acts as a buffer at least as large as your SO_SNDBUF plus the receiver's SO_RCVBUF. Add the receiver's read activity, as Alexander mentioned, and a lot of data can be absorbed before writes start returning short counts.
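As a quick check (my own sketch, not part of the original answer): you can ask the JDK what the effective buffer sizes actually are. Note that setSendBufferSize() is only a hint, and the OS may grant more than requested:

```java
import java.net.Socket;

public class BufferSizes {
    public static void main(String[] args) throws Exception {
        Socket s = new Socket(); // unconnected socket is enough to query options
        System.out.println("default SO_SNDBUF: " + s.getSendBufferSize());
        System.out.println("default SO_RCVBUF: " + s.getReceiveBufferSize());
        s.setSendBufferSize(8 * 1024); // request 8 KB; the OS may round this up
        System.out.println("effective SO_SNDBUF after request: "
                + s.getSendBufferSize());
        s.close();
    }
}
```

Comparing the effective SO_SNDBUF against the size you requested can explain part of the "missing" buffering, though on a fast loopback the receiver draining data is usually the bigger factor.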

0

Source: https://habr.com/ru/post/1277363/

