I was trying to understand how the socket read timeout is handled in native code, and found a curious hard-coded value of 5000 milliseconds:
    if (timeout) {
        if (timeout <= 5000 || !isRcvTimeoutSupported) {
            int ret = NET_Timeout (fd, timeout);
            .....
            .....
        }
    }
Source: http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/java/net/SocketInputStream.c
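As I understand it, `NET_Timeout` is the JDK's helper that waits for the descriptor to become readable before the actual read. Its real implementation is elsewhere in the JDK native sources; a minimal, simplified sketch of the same idea (using plain POSIX `select`, which is not necessarily what the Windows code does) might look like this. The function name here is mine, not the JDK's:

```c
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Simplified sketch of a NET_Timeout-style helper: wait up to `millis`
 * milliseconds for `fd` to become readable.
 * Returns >0 if readable, 0 on timeout, -1 on error. */
static int net_timeout_sketch(int fd, long millis)
{
    fd_set rfds;
    struct timeval tv;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    tv.tv_sec  = millis / 1000;
    tv.tv_usec = (millis % 1000) * 1000;
    return select(fd + 1, &rfds, NULL, NULL, &tv);
}
```

The point is that with this approach the timeout is enforced per read call, in user code, rather than by the socket itself.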
As far as I can see, the variable isRcvTimeoutSupported is normally true, but it is reset to false while configuring the socket options if the platform rejects SO_RCVTIMEO:
    if (isRcvTimeoutSupported) {
        jclass iCls = (*env)->FindClass(env, "java/lang/Integer");
        jfieldID i_valueID;
        jint timeout;

        CHECK_NULL(iCls);
        i_valueID = (*env)->GetFieldID(env, iCls, "value", "I");
        CHECK_NULL(i_valueID);
        timeout = (*env)->GetIntField(env, value, i_valueID);

        if (timeout <= 5000) {
            timeout = 0;
        }

        if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO,
                       (char *)&timeout, sizeof(timeout)) < 0) {
            if (WSAGetLastError() == WSAENOPROTOOPT) {
                isRcvTimeoutSupported = JNI_FALSE;
            } else {
                NET_ThrowCurrent(env, "setsockopt SO_RCVTIMEO");
            }
        }
        ......
        ......
    }
Source: http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/windows/native/java/net/TwoStacksPlainSocketImpl.c
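For reference, SO_RCVTIMEO makes the socket itself fail a blocking receive after the given time. Note a platform difference that matters for reading the JDK snippet above: on Windows the option value is a plain int holding milliseconds (which is why the code passes a jint directly), while on POSIX systems it is a struct timeval. A hedged, POSIX-flavoured sketch of setting it (the function name is mine):

```c
#include <sys/socket.h>
#include <sys/time.h>

/* Sketch: set a receive timeout via SO_RCVTIMEO.
 * POSIX takes a struct timeval; Windows (as in the JDK code above)
 * takes an int in milliseconds instead.
 * Returns 0 on success, -1 on error. */
static int set_recv_timeout(int fd, long millis)
{
    struct timeval tv;
    tv.tv_sec  = millis / 1000;
    tv.tv_usec = (millis % 1000) * 1000;
    return setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
}
```

After this, a blocking recv on the socket returns -1 (with EAGAIN/EWOULDBLOCK on POSIX, WSAETIMEDOUT on Windows) once the timeout elapses with no data.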
Although I'm not entirely sure, it looks like the read timeout is applied through the socket option (SO_RCVTIMEO) when it exceeds 5 seconds, and through NET_Timeout when it is 5 seconds or less (note that the setsockopt code above zeroes the option in that case). Is that correct?
And in any case, where does this hard-coded 5-second threshold come from? I don't see any explanation on MSDN.