C / Unix: millisecond delay timer always returns 0

Can someone explain why I always get a time of 0 from the code below? I want a millisecond timer to measure the delay between sending data to the socket and receiving the reply, but no matter what I try the result is always 0. I even tried microseconds in case my system finished the round trip in under 1 ms.

    printf("#: ");

    bzero(buffer,256);
    fgets(buffer,255,stdin);

    struct timeval start, end;

    unsigned long mtime, seconds, useconds;    

    gettimeofday(&start, NULL);  

    n = write(clientSocket,buffer,strlen(buffer));

    if (n < 0)
    {
        error("Error: Unable to write to socket!\n");
    }

    bzero(buffer,256);
    n = read(clientSocket,buffer,255);

    gettimeofday(&end, NULL);

    seconds  = end.tv_sec  - start.tv_sec;
    useconds = end.tv_usec - start.tv_usec;

    mtime = ((seconds) * 1000 + useconds/1000.0) + 0.5;      

    if (n < 0) 
    {
        error("Error: Unable to read from socket!\n");
    }

    printf("%s\n",buffer);
    printf("Delay: %lu microseconds\n", useconds);
2 answers

Assuming the result you are looking at is mtime: mtime is an unsigned long, but you compute the elapsed time in floating point, so if

    ((seconds) * 1000 + useconds/1000.0) + 0.5

evaluates to less than 1.0, it is truncated to 0 when it is assigned to the integer.

Either change the type of mtime to float (or double), or do the whole calculation in microseconds (the 0.5 millisecond rounding term becomes 500 microseconds):

    ((seconds) * 1000000 + useconds) + 500
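
For illustration, a minimal self-contained sketch of that fix, not the asker's actual program: the socket write()/read() round trip is replaced by a 3 ms nanosleep so the example compiles and runs on its own, and mtime is declared as a double so nothing is truncated until it is printed.

    #include <stdio.h>
    #include <sys/time.h>
    #include <time.h>

    int main(void)
    {
        struct timeval start, end;
        double mtime;                                 /* double, not unsigned long */

        gettimeofday(&start, NULL);

        /* stand-in for the write()/read() round trip in the question */
        struct timespec nap = { 0, 3 * 1000 * 1000 };     /* 3 ms */
        nanosleep(&nap, NULL);

        gettimeofday(&end, NULL);

        long seconds  = end.tv_sec  - start.tv_sec;
        long useconds = end.tv_usec - start.tv_usec;  /* may be negative; fine for signed long */

        mtime = seconds * 1000.0 + useconds / 1000.0; /* elapsed milliseconds */

        printf("Delay: %.3f ms\n", mtime);
        return 0;
    }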

The line

    useconds = end.tv_usec - start.tv_usec;

is a problem by itself: whenever end.tv_usec is smaller than start.tv_usec the subtraction is negative, and because of the declaration unsigned long useconds; the result wraps around to a huge positive value.

Instead, convert each timestamp to total microseconds and subtract those:

unsigned long end_us,start_us,elapsed_us;

   .
   .
   .

end_us     = end.tv_sec   * 1000000  +  end.tv_usec;
start_us   = start.tv_sec * 1000000  +  start.tv_usec;

elapsed_us = end_us - start_us;

printf("elapsed microseconds: %lu\n", elapsed_us);

As for the original line

    mtime = ((seconds) * 1000 + useconds/1000.0) + 0.5;

it mixes integer and floating-point arithmetic and relies on the + 0.5 for rounding; with elapsed_us available you don't need any of that.

To get milliseconds, just divide:

    elapsed_ms = elapsed_us / 1000;

Note that the integer division truncates elapsed_ms down to whole milliseconds.
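
Putting this answer together with the rounding idea from the one above, a small sketch of how it might look as a helper function (timeval_elapsed_us is an illustrative name, not a standard call; the + 500 rounds to the nearest millisecond):

    #include <stdio.h>
    #include <sys/time.h>

    /* total microseconds between two gettimeofday() samples */
    static unsigned long timeval_elapsed_us(struct timeval start, struct timeval end)
    {
        unsigned long start_us = start.tv_sec * 1000000UL + start.tv_usec;
        unsigned long end_us   = end.tv_sec   * 1000000UL + end.tv_usec;
        return end_us - start_us;
    }

    int main(void)
    {
        struct timeval start, end;

        gettimeofday(&start, NULL);
        /* ... the write()/read() round trip would go here ... */
        gettimeofday(&end, NULL);

        unsigned long elapsed_us = timeval_elapsed_us(start, end);
        unsigned long elapsed_ms = (elapsed_us + 500) / 1000;   /* nearest millisecond */

        printf("elapsed: %lu us (~%lu ms)\n", elapsed_us, elapsed_ms);
        return 0;
    }

Because both timestamps are converted to the same unit before subtracting, the tv_usec wraparound described above cannot happen; drop the + 500 if plain truncation is preferred.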

