HELP PLEASE! I have an application that needs to run as close to real time as possible, and I keep hitting an unusual delay problem with both TCP and UDP. The delay occurs like clockwork and is always the same length (mostly 15 to 16 ms). It happens when transmitting to any machine (even locally) and on any network (we have two).
Quick rundown of the problem:
I am using winsock in C++, compiled in VS 2008 Pro, and I have written several programs to send and receive in different ways using both TCP and UDP. I always use an intermediary program (run either locally or remotely), written in various languages (MATLAB, C#, C++), to forward the information from one program to the other. Both winsock programs run on the same machine, so their Tx and Rx timestamps come from the same clock. I keep seeing a pattern where a burst of packets is transmitted and then there is a delay of around 15 to 16 milliseconds before the next burst, even though no delay is programmed in. Sometimes the 15 to 16 ms gap appears between every packet rather than between bursts of packets. Other times (rarely) I get a delay of a different length, e.g. ~47 ms. I always seem to receive the packets back within a millisecond of when they are transmitted, although the same delays show up between the transmitted packets.
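In case it helps, here is a stripped-down sketch of what the UDP send side of my tests does (not my actual code; the port and address are placeholders): it logs a QueryPerformanceCounter timestamp right before each sendto, with no Sleep anywhere in the loop, and the Tx log still shows the ~15-16 ms gaps between bursts.

    #include <winsock2.h>
    #include <cstdio>
    #pragma comment(lib, "ws2_32.lib")

    int main()
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

        sockaddr_in dest = {0};
        dest.sin_family = AF_INET;
        dest.sin_port = htons(5000);                   // placeholder port
        dest.sin_addr.s_addr = inet_addr("127.0.0.1"); // placeholder address

        LARGE_INTEGER freq, now;
        QueryPerformanceFrequency(&freq);

        char payload[64] = "test";
        for (int i = 0; i < 1000; ++i)
        {
            QueryPerformanceCounter(&now);
            double txMs = 1000.0 * now.QuadPart / freq.QuadPart;
            printf("Tx %d at %.3f ms\n", i, txMs);  // Tx timestamp, same clock as the Rx program

            sendto(s, payload, sizeof(payload), 0,
                   (sockaddr*)&dest, sizeof(dest));
            // No Sleep() or other programmed delay here, yet the Tx log
            // still shows bursts separated by ~15-16 ms gaps.
        }

        closesocket(s);
        WSACleanup();
        return 0;
    }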
I have a suspicion that winsock or the NIC is buffering packets before each transmission, but I haven't found any evidence. I have a gigabit connection on one network that carries varying levels of traffic, but I also see the same thing when running the intermediary on a cluster with a private network that has no traffic (at least from users) and a 2-gigabit connection. I even see the delay when running the intermediary program locally alongside the send and receive programs.
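To be clear about what I mean by winsock-side buffering on the TCP tests: as I understand it, the two standard options that control it are TCP_NODELAY (Nagle) and SO_SNDBUF (winsock's own send buffer). The helper below is just an illustration of that check, not something I have confirmed changes the behaviour.

    #include <winsock2.h>

    // Illustration only: options that, as I understand it, control whether
    // winsock holds outgoing TCP data before handing it to the stack.
    void disableSendCoalescing(SOCKET s)
    {
        // Nagle's algorithm can hold small segments back waiting for an ACK.
        BOOL noDelay = TRUE;
        setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                   (const char*)&noDelay, sizeof(noDelay));

        // A zero-length send buffer makes send() pass data straight to the
        // stack instead of copying it into winsock's buffer first.
        int sndBuf = 0;
        setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                   (const char*)&sndBuf, sizeof(sndBuf));
    }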