Socket read timeout on Windows: weird hard-coded value in a native method

I tried to understand how the socket read timeout is handled in native code and found a weird hard-coded value of 5000 milliseconds there:

    if (timeout) {
        if (timeout <= 5000 || !isRcvTimeoutSupported) {
            int ret = NET_Timeout (fd, timeout);
            .....
            .....
        }
    }

Source: http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/java/net/SocketInputStream.c

As far as I can see, the variable isRcvTimeoutSupported is initially set to true, but can be reset to false while the socket options are being configured:

    /*
     * SO_RCVTIMEO is only supported on Microsoft implementation
     * of Windows Sockets so if WSAENOPROTOOPT returned then
     * reset flag and timeout will be implemented using
     * select() -- see SocketInputStream.socketRead.
     */
    if (isRcvTimeoutSupported) {
        jclass iCls = (*env)->FindClass(env, "java/lang/Integer");
        jfieldID i_valueID;
        jint timeout;

        CHECK_NULL(iCls);
        i_valueID = (*env)->GetFieldID(env, iCls, "value", "I");
        CHECK_NULL(i_valueID);
        timeout = (*env)->GetIntField(env, value, i_valueID);

        /*
         * Disable SO_RCVTIMEO if timeout is <= 5 second.
         */
        if (timeout <= 5000) {
            timeout = 0;
        }

        if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO,
                       (char *)&timeout, sizeof(timeout)) < 0) {
            if (WSAGetLastError() == WSAENOPROTOOPT) {
                isRcvTimeoutSupported = JNI_FALSE;
            } else {
                NET_ThrowCurrent(env, "setsockopt SO_RCVTIMEO");
            }
        }
        ......
        ......
    }

Source: http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/java/net/TwoStacksPlainSocketImpl.c
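For comparison, here is a minimal sketch of what arming SO_RCVTIMEO looks like in plain Winsock. This is my own illustration, not JDK code, and the helper name set_read_timeout is made up for the example. On Windows the option value is a DWORD in milliseconds, and a read that exceeds it fails with WSAETIMEDOUT:

    #include <winsock2.h>

    /* Illustrative sketch only (not JDK code): arm a receive timeout
     * on a connected socket. On Windows the SO_RCVTIMEO value is a
     * DWORD in milliseconds; once it expires, recv() returns
     * SOCKET_ERROR and WSAGetLastError() reports WSAETIMEDOUT.
     * Usage: set_read_timeout(sock, 10000);  10-second timeout */
    static int set_read_timeout(SOCKET s, DWORD millis)
    {
        return setsockopt(s, SOL_SOCKET, SO_RCVTIMEO,
                          (const char *)&millis, sizeof(millis));
    }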

Although I'm not quite sure, it seems that the read timeout is applied via the socket option if it exceeds 5 seconds, and via NET_Timeout if it is less than or equal to 5 seconds. Is that correct?

And anyway, where did this hard-coded value of 5 seconds come from?
I do not see any explanation on MSDN.

2 answers

This code decides how to implement a blocking socket timeout: with SO_RCVTIMEO or with select(). SO_RCVTIMEO is not supported on all platforms, and not even all Windows Sockets implementations support it, as the comment in the code indicates. NET_Timeout uses select() under the covers.

Basically the algorithm:

    if SO_RCVTIMEO is supported and timeout is more than 5 seconds:
        use SO_RCVTIMEO
    else:
        use select
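To make the select() branch concrete, here is a minimal sketch of a select()-based wait for readability with a timeout. Again, this is my own illustration of the technique, not the actual NET_Timeout code, and the helper name wait_readable is made up:

    #include <winsock2.h>

    /* Sketch of a select()-based read timeout (the technique
     * NET_Timeout uses under the covers; not the JDK code).
     * Returns >0 if the socket became readable, 0 on timeout,
     * SOCKET_ERROR on failure. */
    static int wait_readable(SOCKET s, long millis)
    {
        fd_set readfds;
        struct timeval tv;

        FD_ZERO(&readfds);
        FD_SET(s, &readfds);
        tv.tv_sec  = millis / 1000;
        tv.tv_usec = (millis % 1000) * 1000;

        /* On Windows the first argument to select() is ignored. */
        return select(0, &readfds, NULL, NULL, &tv);
    }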

As for the 5-second threshold, I believe they somehow found out (through testing, or trial and error?) that select() is more reliable or more accurate than SO_RCVTIMEO for timeout values below 5 seconds. Not that it is necessarily due to exactly this problem (this one is about another OS), but here is an example of someone reporting that SO_RCVTIMEO is unreliable for small timeout values.

Yes, you read the code correctly.

There seems to be an undocumented lower limit on timeouts for Windows sockets, as described in "Why is the timeout in the Windows UDP receive kernel always 500ms longer than SO_RCVTIMEO set?".

I assume they hit that limit in testing, did not try to pin down the exact value, and simply chose 5s as a lower bound that was known to work.
