When does read() on a TCP socket return?

Can someone explain when exactly the read() call I use to receive data from a TCP socket returns?

I am using the code below to read from a measurement system that transmits data at 15 Hz. READ_TIMEOUT_MS is 200 and READ_BUFFER_SIZE is 40000. Everything mostly works, but what I observe is that read() returns 15 times per second with 1349 bytes read each time.

After reading Pitfall 5 in the following article, I would have expected read() to fill the whole buffer:

http://www.ibm.com/developerworks/library/l-sockpit/
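
If read() is not guaranteed to fill the buffer, I assume the usual fix would be to loop until a complete record has been received. A rough sketch of what I have in mind (RECORD_SIZE is a hypothetical placeholder, not a constant from my protocol):

/* Sketch only, not my real code: keep calling read() until a full
 * fixed-size record has been accumulated, since read() on a TCP socket
 * may return fewer bytes than requested. */
#include <errno.h>
#include <unistd.h>

#define RECORD_SIZE 512   /* placeholder, not a value from my protocol */

static ssize_t read_record(int fd, unsigned char *buf)
{
    size_t got = 0;
    while (got < RECORD_SIZE)
    {
        ssize_t n = read(fd, buf + got, RECORD_SIZE - got);
        if (n == 0)
        {
            return (ssize_t)got;   /* peer closed the connection */
        }
        if (n < 0)
        {
            if (errno == EINTR)
            {
                continue;          /* interrupted by a signal, retry */
            }
            return -1;             /* error, or SO_RCVTIMEO timeout (EAGAIN) */
        }
        got += (size_t)n;
    }
    return (ssize_t)got;
}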

Init:

sock = socket(AF_INET, SOCK_STREAM, 0);   /* create a TCP socket */
if (sock < 0)
{
    goto fail0;
}

struct sockaddr_in server;
server.sin_addr.s_addr = inet_addr(IPAddress);
server.sin_family = AF_INET;
server.sin_port = htons(Port);
if (connect(sock,(struct sockaddr *)&server, sizeof(server)))
{
    goto fail1;
}

/* convert READ_TIMEOUT_MS into a struct timeval for SO_RCVTIMEO */
struct timeval tv;
tv.tv_sec = READ_TIMEOUT_MS / 1000;
tv.tv_usec = (READ_TIMEOUT_MS % 1000) * 1000;
if (setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(struct timeval)))
{
    goto fail1;
}

return true;

fail1:
    close(sock);
    sock = -1;
fail0:
    return false;

Reading:

unsigned char buf[READ_BUFFER_SIZE];
int len = read(sock, buf, sizeof(buf));   /* blocks until data arrives, the timeout expires, or the peer closes */
if (len <= 0)
{
    return NULL;
}

CBinaryDataStream* pData = new CBinaryDataStream(len);
pData->WriteToStream(buf, len);
return pData;
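
One more thing I am unsure about: with SO_RCVTIMEO set, a read() that times out should fail with -1 and errno set to EAGAIN or EWOULDBLOCK, which my len <= 0 check folds together with a closed connection. A sketch of how I could tell the cases apart (needs <errno.h>):

/* Sketch, assuming SO_RCVTIMEO is set as in the init code above:
 * separate a receive timeout from a closed connection or a real error. */
int len = read(sock, buf, sizeof(buf));
if (len == 0)
{
    /* peer closed the connection */
}
else if (len < 0)
{
    if (errno == EAGAIN || errno == EWOULDBLOCK)
    {
        /* no data arrived within READ_TIMEOUT_MS */
    }
    else
    {
        /* genuine socket error */
    }
}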


Source: https://habr.com/ru/post/1652944/

