How to determine WinSock TCP timeout using BindIoCompletionCallback

I am creating a Visual C++ WinSock TCP server using BindIoCompletionCallback. It works fine for receiving and sending data, but I cannot find a good way to implement a timeout: setsockopt with SO_RCVTIMEO / SO_SNDTIMEO has no effect on overlapped (non-blocking) sockets, so if the peer never sends any data, the CompletionRoutine is simply never called.

I am thinking about using RegisterWaitForSingleObject on the hEvent field of the OVERLAPPED structure, which might work, but then the CompletionRoutine is not needed at all. Am I still using IOCP? Is there a performance penalty if I use only RegisterWaitForSingleObject and do not use BindIoCompletionCallback?

Update: Code Example:

My first attempt:

bool CServer::Startup()
{
    // Overlapped listening socket, with event-select notification for incoming connections.
    SOCKET ServerSocket = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, WSA_FLAG_OVERLAPPED);
    WSAEVENT ServerEvent = WSACreateEvent();
    WSAEventSelect(ServerSocket, ServerEvent, FD_ACCEPT);
    ......
    bind(ServerSocket......);
    listen(ServerSocket......);
    _beginthread(ListeningThread, 128 * 1024, (void*) this);
    ......
    ......
}

void __cdecl CServer::ListeningThread(void* param) // static
{
    CServer* server = (CServer*) param;
    while (true)
    {
        if (WSAWaitForMultipleEvents(1, &server->ServerEvent, FALSE, 100, FALSE) == WSA_WAIT_EVENT_0)
        {
            WSANETWORKEVENTS events = {};
            if (WSAEnumNetworkEvents(server->ServerSocket, server->ServerEvent, &events) != SOCKET_ERROR)
            {
                if ((events.lNetworkEvents & FD_ACCEPT) && (events.iErrorCode[FD_ACCEPT_BIT] == 0))
                {
                    SOCKET socket = accept(server->ServerSocket, NULL, NULL);
                    if (socket != INVALID_SOCKET)   // accept() returns INVALID_SOCKET on failure
                    {
                        // Associate the accepted socket with the system thread pool's completion port.
                        BindIoCompletionCallback((HANDLE) socket, CompletionRoutine, 0);
                        ......
                    }
                }
            }
        }
    }
}

VOID CALLBACK CServer::CompletionRoutine(__in DWORD dwErrorCode, __in DWORD dwNumberOfBytesTransfered, __in LPOVERLAPPED lpOverlapped) // static
{
    ......
    BOOL res = GetOverlappedResult(......, TRUE);
    ......
}

class CIoOperation
{
public:
    OVERLAPPED Overlapped;
    ......
    ......
};

bool CServer::Receive(SOCKET socket, PBYTE buffer, DWORD length, void* context)
{
    if (connection != NULL)
    {
        // One CIoOperation (and therefore one OVERLAPPED) per outstanding receive.
        CIoOperation* io = new CIoOperation();
        WSABUF buf = { length, (PCHAR) buffer };
        DWORD flags = 0;
        if ((WSARecv(socket, &buf, 1, NULL, &flags, &io->Overlapped, NULL) != 0) &&
            (WSAGetLastError() != WSA_IO_PENDING))
        {
            delete io;
            return false;
        }
        return true;
    }
    return false;
}

As I said, it works fine as long as the client actually sends data: Receive does not block, CompletionRoutine gets called and the data arrives. But here is the problem: if the client never sends me any data, how do I time the connection out?

Since setsockopt / SO_RCVTIMEO / SO_SNDTIMEO will not help here, I thought I should use the hEvent field of the OVERLAPPED structure, which is signaled when the I/O completes. But WaitForSingleObject / WSAWaitForMultipleEvents on that event would block the receiving call, and I want Receive to stay asynchronous, so I used RegisterWaitForSingleObject with a WAITORTIMERCALLBACK. It worked: the callback fires either after the timeout or on I/O completion, but now I have two callbacks for every I/O operation, CompletionRoutine and WaitOrTimerCallback:

If the I/O completes, both are called at more or less the same time. If it does not complete, WaitOrTimerCallback fires first, then I call CancelIoEx, which causes CompletionRoutine to be called with an ABORTED error. But there is a race condition: the I/O may complete just before I cancel it, and then ... it all gets rather complicated.
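A minimal sketch of that second approach, under the assumptions described above; StartTimedReceive, OnWaitOrTimer, CIoOperation2 and its fields are illustrative names, not code from the question:

// Sketch only: pair an event-based wait (with timeout) with the overlapped receive.
#include <winsock2.h>
#include <windows.h>

struct CIoOperation2
{
    OVERLAPPED Overlapped;      // hEvent is signaled when the I/O completes
    SOCKET     Socket;
    HANDLE     WaitHandle;      // returned by RegisterWaitForSingleObject
};

VOID CALLBACK OnWaitOrTimer(PVOID context, BOOLEAN timedOut)
{
    CIoOperation2* io = (CIoOperation2*) context;
    if (timedOut)
    {
        // No completion within the timeout: cancel the pending receive.
        // CompletionRoutine will then run with ERROR_OPERATION_ABORTED, unless
        // the I/O completed in the meantime - that is the race described above,
        // so the completion path has to tolerate both outcomes.
        CancelIoEx((HANDLE) io->Socket, &io->Overlapped);
    }
    // Stop watching this event (real code must also guard against this callback
    // running before WaitHandle has been stored); cleanup happens in the completion path.
    UnregisterWaitEx(io->WaitHandle, NULL);
}

bool StartTimedReceive(SOCKET s, PBYTE buffer, DWORD length, DWORD timeoutMs)
{
    CIoOperation2* io = new CIoOperation2();
    ZeroMemory(&io->Overlapped, sizeof(io->Overlapped));
    io->Overlapped.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);   // manual-reset
    io->Socket = s;

    WSABUF buf = { length, (PCHAR) buffer };
    DWORD flags = 0;
    if (WSARecv(s, &buf, 1, NULL, &flags, &io->Overlapped, NULL) == SOCKET_ERROR &&
        WSAGetLastError() != WSA_IO_PENDING)
    {
        CloseHandle(io->Overlapped.hEvent);
        delete io;
        return false;
    }

    // Fire OnWaitOrTimer when either the event is signaled or timeoutMs elapses.
    return RegisterWaitForSingleObject(&io->WaitHandle, io->Overlapped.hEvent,
                                       OnWaitOrTimer, io, timeoutMs,
                                       WT_EXECUTEONLYONCE) != FALSE;
}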

Then I realized that I do not really need BindIoCompletionCallback and CompletionRoutine at all and could do everything from WaitOrTimerCallback. That might work, but here is the interesting question: I wanted to build an IOCP-based WinSock server in the first place, and thought BindIoCompletionCallback was the easiest way to do that using the thread pool provided by Windows itself. Now I end up with a server that has no IOCP code at all. Is it still IOCP? Or should I forget BindIoCompletionCallback and build my own IOCP thread pool implementation? Why?

2 answers

What I do is funnel the timeout and completion notifications through a critical-section-protected state machine in the socket object. Whichever one gets in first sets a socket state variable and takes its action, whatever that is. If the I/O completion gets in first, the I/O buffer array is processed in the usual way and any later timeout is directed to simply restart the state machine. Similarly, if the timeout gets in first, the I/O gets CancelIoEx'd and any completion notification queued up afterwards is discarded by the state engine. Because of these possible "late" notifications, I put released sockets on a holding queue and only recycle them back into the socket object pool after five minutes, much as the TCP stack itself puts its sockets into TIME_WAIT.
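A minimal sketch of that "whoever gets in first wins" arrangement, assuming a per-socket critical section and a simple state enum; CSocketCtx, OnIoComplete and OnTimeout are illustrative names, not the answerer's actual classes:

#include <winsock2.h>
#include <windows.h>

enum class IoState { Pending, Completed, TimedOut };

struct CSocketCtx
{
    CRITICAL_SECTION Lock;      // InitializeCriticalSection(&Lock) when the object is created
    IoState          State = IoState::Pending;
    SOCKET           Socket = INVALID_SOCKET;
    OVERLAPPED       Overlapped = {};
};

void OnIoComplete(CSocketCtx* ctx, DWORD bytes)     // called from the completion callback
{
    EnterCriticalSection(&ctx->Lock);
    if (ctx->State == IoState::Pending)
    {
        ctx->State = IoState::Completed;
        // ... process the received bytes, issue the next WSARecv, restart the timeout ...
    }
    // else: the timeout won the race and already cancelled this I/O; discard the
    // "late" notification (it typically arrives with ERROR_OPERATION_ABORTED).
    LeaveCriticalSection(&ctx->Lock);
}

void OnTimeout(CSocketCtx* ctx)                     // called from the timeout thread
{
    EnterCriticalSection(&ctx->Lock);
    if (ctx->State == IoState::Pending)
    {
        ctx->State = IoState::TimedOut;
        CancelIoEx((HANDLE) ctx->Socket, &ctx->Overlapped);
        // The socket then goes onto a holding queue and is only recycled into the
        // pool after the late, aborted completion has had time to drain.
    }
    LeaveCriticalSection(&ctx->Lock);
}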

To implement the timeouts, I have one thread that manages FIFO delta-queues of timeout objects, one queue per timeout limit. The thread waits on its input queue for new objects, using a wait interval computed from the shortest time-to-expiry of the objects at the heads of the queues.

There were only a few distinct timeout limits in my server, so I used a set of queues fixed at compile time. It would be fairly easy to add new queues or change a timeout by posting the appropriate "command" messages to the thread's input queue, mixed in with the new sockets, but I have not needed to go that far.

When a timeout expires, the thread calls an event method on the object, which in the case of a socket enters the CS-protected socket state machine (the method comes from the TimeoutObject class that the socket, among other things, is derived from).

More details:

I wait on a semaphore that guards the timeout thread's input queue. If it is signaled, I take the new TimeoutObject from the input queue and add it to the tail of whichever wait queue it asks for. If the semaphore wait times out instead, I check the elements at the heads of the FIFO queues and recalculate their remaining interval by subtracting the current time from their expiry time. If the interval is zero or negative, the timeout event is fired. While walking the queues and their heads, I keep a local minimum of the remaining intervals until the next timeout. When all head elements in all queues have a non-zero remaining interval, I go back to waiting on the queue semaphore, using the minimum remaining interval I have accumulated.

The event call returns an enumeration. This enum tells the timeout thread how to handle the object whose event it has just fired. One option is to restart the timeout by recalculating the expiry time and putting the object back at the tail of its queue.
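A rough sketch of such a timeout thread, assuming one FIFO queue per timeout limit and a TimeoutObject base class whose event method returns an action enum; all names here (TimeoutObject, TimeoutAction, g_queues, the input queue and its semaphore) are guesses at the design, not the answerer's code:

#include <windows.h>
#include <deque>
#include <vector>

enum class TimeoutAction { Restart, Release };

struct TimeoutObject
{
    DWORD TimeoutMs;                            // which timeout limit this object uses
    DWORD ExpiryTick;                           // GetTickCount() value at which it expires
    virtual TimeoutAction OnTimeout() = 0;      // e.g. enters the socket state machine
    virtual ~TimeoutObject() {}
};

// One FIFO per timeout limit; objects are always appended, so each queue stays
// ordered by expiry time and only the head element ever needs checking.
std::vector<std::deque<TimeoutObject*>> g_queues;
HANDLE g_inputSemaphore;                        // released once per object pushed to the input queue
std::deque<TimeoutObject*> g_inputQueue;        // guarded elsewhere by its own lock

void TimeoutThread()
{
    DWORD wait = INFINITE;
    for (;;)
    {
        if (WaitForSingleObject(g_inputSemaphore, wait) == WAIT_OBJECT_0)
        {
            TimeoutObject* obj = /* pop from g_inputQueue under its lock */ nullptr;
            if (obj)
            {
                obj->ExpiryTick = GetTickCount() + obj->TimeoutMs;
                // ... append to the queue that matches obj->TimeoutMs ...
            }
        }

        // Check the head of every queue and accumulate the smallest remaining interval.
        DWORD minRemaining = INFINITE;
        for (auto& q : g_queues)
        {
            while (!q.empty())
            {
                LONG remaining = (LONG)(q.front()->ExpiryTick - GetTickCount());
                if (remaining > 0)
                {
                    if ((DWORD) remaining < minRemaining) minRemaining = (DWORD) remaining;
                    break;                      // the rest of this queue expires even later
                }
                TimeoutObject* expired = q.front();
                q.pop_front();
                if (expired->OnTimeout() == TimeoutAction::Restart)
                {
                    expired->ExpiryTick = GetTickCount() + expired->TimeoutMs;
                    q.push_back(expired);       // back to the tail of the same queue
                }
                // TimeoutAction::Release: the object moves to the holding queue elsewhere.
            }
        }
        wait = minRemaining;
    }
}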

I did not use RegisterWaitForSingleObject() because it required .NET and my Delphi server was unmanaged (besides, I had already written my server!).

Also, IIRC, it has a 64-handle limit, like WaitForMultipleObjects(). My server had more than 23,000 clients. I found one timeout thread and a few FIFO queues much more flexible: any old object can be timed out on it, as long as it descends from TimeoutObject, with no extra OS calls or handles required.


The basic idea is that, since you are using asynchronous I/O with the system thread pool, you should not need to check for timeouts via events, because you are not blocking any threads.

The recommended way to check for stale connections is to call getsockopt with the SO_CONNECT_TIME option. This returns the number of seconds the socket has been connected. I know it is a polling operation, but if you are smart about how and when you query this value, it is actually a pretty good mechanism for managing connections. I explain below how this is done.

Typically, I call getsockopt in two places: once in my completion callback (so that I have a timestamp of the last time I/O completed on that socket), and once in my accept thread.

The accept thread monitors my listening socket via WSAEventSelect and FD_ACCEPT. That means the accept thread only runs when Windows determines there are incoming connections that need accepting. At that time, I enumerate the accepted sockets and query SO_CONNECT_TIME again for each one. I subtract the value recorded at the last I/O completion from it, and if the difference exceeds a specified threshold, my code considers the connection to have timed out.
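A minimal sketch of that SO_CONNECT_TIME bookkeeping, assuming a per-connection record that caches the value sampled in the completion callback; Connection, QueryConnectTime, IsStale and the 60-second threshold are illustrative, not from the answer:

// Sketch: use SO_CONNECT_TIME as a per-socket "seconds connected" clock.
#include <winsock2.h>
#include <mswsock.h>    // SO_CONNECT_TIME

struct Connection
{
    SOCKET Socket;
    DWORD  ConnectTimeAtLastIo;     // SO_CONNECT_TIME sampled in the completion callback
};

static DWORD QueryConnectTime(SOCKET s)
{
    DWORD seconds = 0;
    int len = sizeof(seconds);
    if (getsockopt(s, SOL_SOCKET, SO_CONNECT_TIME, (char*) &seconds, &len) == SOCKET_ERROR)
        return 0;
    return seconds;                 // 0xFFFFFFFF means "not connected"
}

// Called from the I/O completion callback: remember how long the socket had been
// connected when its last I/O completed.
void OnIoCompleted(Connection* c)
{
    c->ConnectTimeAtLastIo = QueryConnectTime(c->Socket);
}

// Called from the accept thread: decide whether the connection has gone stale.
bool IsStale(const Connection* c, DWORD idleThresholdSeconds = 60)
{
    DWORD now = QueryConnectTime(c->Socket);
    return (now - c->ConnectTimeAtLastIo) > idleThresholdSeconds;
}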


Source: https://habr.com/ru/post/1388705/

