Why is the Windows UDP receive timeout always 500 ms longer than SO_RCVTIMEO?

This is easy to reproduce; here is pseudocode of what I am doing:

  • Set up a UDP socket.
  • Set the receive timeout (Timeout set).
  • Read back the timeout I just set (Timeout checked).
  • Try to receive on the socket (when there is no traffic).
  • Measure how long the timeout actually takes (Time until timeout).

When I do this, I get the following output:

 Timeout set: 0.1s | Timeout checked: 0.1s | Time until timeout: 0.6s | difference: 0.5s
 Timeout set: 0.2s | Timeout checked: 0.2s | Time until timeout: 0.7s | difference: 0.5s
 Timeout set: 0.4s | Timeout checked: 0.4s | Time until timeout: 0.9s | difference: 0.5s
 Timeout set: 0.8s | Timeout checked: 0.8s | Time until timeout: 1.3s | difference: 0.5s
 Timeout set: 1.6s | Timeout checked: 1.6s | Time until timeout: 2.1s | difference: 0.5s
 Timeout set: 3.2s | Timeout checked: 3.2s | Time until timeout: 3.7s | difference: 0.5s

Why does the Windows UDP socket timeout always fire 500 ms later than the value set with setsockopt?

Looking at the setsockopt documentation here, I don't see anything in the sections covering SO_RCVTIMEO that would explain this.
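
For reference, on Windows SO_RCVTIMEO takes the timeout as a DWORD in milliseconds (not the struct timeval used on POSIX systems), and the documentation mentions no minimum value. A minimal call looks like this (sock stands in for an already-created socket):

    DWORD timeoutMs = 100;  // milliseconds; Windows expects a DWORD here, not a timeval
    if (setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
                   reinterpret_cast<const char*>(&timeoutMs), sizeof(timeoutMs)) != 0) {
        printf("setsockopt failed: %d\n", WSAGetLastError());
    }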


Code to play with:

 #include "stdafx.h"
 #include <winsock2.h>
 #include <cstdio>
 #include <chrono>
 #include <iostream>

 #pragma comment(lib, "ws2_32.lib")

 int main()
 {
     WSADATA wsaData;
     WORD wVersionRequested = MAKEWORD(2, 2);
     int err = WSAStartup(wVersionRequested, &wsaData);
     if (err != 0) {
         printf("WSAStartup failed with error: %d\n", err);
         return 1;
     }

     sockaddr_in socketAddress = { 0 };
     socketAddress.sin_family = AF_INET;
     socketAddress.sin_port = htons(1010);
     socketAddress.sin_addr.s_addr = INADDR_ANY;

     // Create the socket
     SOCKET mSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
     if (mSocket == INVALID_SOCKET) {
         printf("Socket failed with error code: %d\n", WSAGetLastError());
         return 1;
     }

     // Bind
     if (bind(mSocket, (struct sockaddr*)&socketAddress, sizeof(socketAddress)) == SOCKET_ERROR) {
         printf("Bind failed with error code: %d\n", WSAGetLastError());
         return 1;
     }

     // Receive nothing over several different set timeouts
     for (double timeout = 0.1; timeout < 4.0; timeout *= 2) {
         // Set the timeout; SO_RCVTIMEO takes a DWORD in milliseconds on Windows
         DWORD timeoutMs = static_cast<DWORD>(1000.0 * timeout);
         int lSize = sizeof(timeoutMs);
         if (setsockopt(mSocket, SOL_SOCKET, SO_RCVTIMEO, (char*)&timeoutMs, lSize) != 0) {
             printf("Set socket option failed with error code: %d\n", WSAGetLastError());
             return 1;
         }

         // Check that we get back what we set
         DWORD timeoutOut = 0;
         if (getsockopt(mSocket, SOL_SOCKET, SO_RCVTIMEO, (char*)&timeoutOut, &lSize) != 0) {
             printf("Get socket option failed with error code: %d\n", WSAGetLastError());
             return 1;
         }

         // Receive (no traffic is expected) and time how long recvfrom blocks
         char buffer[50];
         sockaddr_in senderAddr;
         int senderAddrSize = sizeof(senderAddr);
         auto s = std::chrono::steady_clock::now();
         recvfrom(mSocket, buffer, 50, 0, (sockaddr*)&senderAddr, &senderAddrSize);
         auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(
                             std::chrono::steady_clock::now() - s).count() / 1000.0;

         std::cout << "Timeout set: " << timeout
                   << "s | Timeout checked: " << timeoutOut / 1000.0
                   << "s | Time until timeout: " << duration
                   << "s | difference: " << duration - timeout << "s\n";
     }

     closesocket(mSocket);
     WSACleanup();
     return 0;
 }

Note: this code expects no traffic on UDP port 1010. If there is any, change the port number.

1 answer

It is noted here:

For SO_RCVTIMEO, there is an undocumented minimum limit of about 500 ms.

This is probably implemented by always adding 500 ms to whatever value is set for SO_RCVTIMEO.
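
If the extra ~500 ms matters in practice, one possible workaround (a sketch of my own, not part of the quoted answer) is to skip SO_RCVTIMEO entirely and wait for readability with select(), which honors the requested timeout much more closely. The helper name recvfrom_with_timeout is made up for illustration:

    #include <winsock2.h>

    // Wait up to timeoutMs for a datagram on s, then receive it.
    // Returns the byte count, 0 on timeout, or SOCKET_ERROR on failure.
    int recvfrom_with_timeout(SOCKET s, char* buf, int len, DWORD timeoutMs,
                              sockaddr_in* from, int* fromLen)
    {
        fd_set readSet;
        FD_ZERO(&readSet);
        FD_SET(s, &readSet);

        timeval tv;
        tv.tv_sec = static_cast<long>(timeoutMs / 1000);
        tv.tv_usec = static_cast<long>((timeoutMs % 1000) * 1000);

        // The first parameter of select() is ignored by Winsock.
        int ready = select(0, &readSet, NULL, NULL, &tv);
        if (ready == SOCKET_ERROR)
            return SOCKET_ERROR;
        if (ready == 0)
            return 0;  // timed out with no data

        return recvfrom(s, buf, len, 0, (sockaddr*)from, fromLen);
    }

Because select() returns 0 on timeout, the caller can also tell a timeout apart from an error without checking for WSAETIMEDOUT.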

