I am using TCP sockets for interprocess communication between two applications on Windows XP; I chose TCP sockets for various reasons. I see an average round-trip time of 2.8 ms, which is much slower than I expected. Profiling suggests the delay occurs between one end calling send and the other end returning from recv.
I have two applications, a daemon and a client, structured like this pseudocode:
Daemon thread 1 (listens for new connections):

    while (1) {
        SOCKET listener_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        bind(listener_socket, (SOCKADDR*)&server_info, sizeof(SOCKADDR));
        listen(listener_socket, 1);
        SOCKET client_socket = accept(listener_socket, NULL, NULL);
        closesocket(listener_socket);
        CreateThread(client_thread);
    }
Daemon client_socket thread (listens for packets from the client):

    char cmdBuf[256];
    int cmdBufAmountData = 0;
    while (1)
    {
        char recvBuf[128];
        int bytesTransferred = recv(client_socket, recvBuf, sizeof(recvBuf), 0);
        // Append the new bytes to the command buffer
        memcpy(cmdBuf + cmdBufAmountData, recvBuf, bytesTransferred);
        cmdBufAmountData += bytesTransferred;
        // Process every complete command currently in the buffer
        while (commandExists(cmdBuf, cmdBufAmountData))
        {
            // Reply, then shift the remaining bytes to the front of the buffer
            send(client_socket, outBuf, msgLen, 0);
            for (int i = 0; i < cmdBufAmountData - cmdLen; i++)
                cmdBuf[i] = cmdBuf[i + cmdLen];
            cmdBufAmountData -= cmdLen;
        }
    }
Client thread 1:

    start_timer();
    send(foo);
    recv(barBuf);
    end_timer(); // Timer shows values from 0.7 ms to 17 ms; average 2.8 ms.
Any ideas why the latency is so bad? I suspected the Nagle algorithm, so I added this to my code:
    BOOL bOptVal = TRUE;
    setsockopt(socket, IPPROTO_TCP, TCP_NODELAY, (char*)&bOptVal, sizeof(BOOL));
but it does not help. Do I need to set this on both the client and the daemon? (I do.)