Is there an algorithm for estimating clock skew that will work over HTTP?

I am writing a multiplayer game for Windows Phone 7. I need to make sure that events occur simultaneously for all players. My approach at the moment is to broadcast, in advance, the time at which I want the event to take place, and rely on the phone's clock being reasonably accurate.

The problem is that I have seen situations where the clock is not accurate: it can be out by a couple of seconds. So what I would like to do is estimate how much the phone's clock differs from the server's. Of course, I need to take network latency into account, especially since the only network protocol available to me is HTTP.

So my question is: does anyone know of an algorithm I can use to estimate the difference between the client's clock and the server's clock, to an accuracy of around 100 ms?

From my days as a maths student, I seem to remember there is a statistical model for this kind of situation, where you sample a value that is assumed to be a constant plus an error term (the delay) drawn from some distribution. Does anyone know about this, and does it actually apply?
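One standard way to write such a model down (my notation, not the poster's) treats each request as a noisy estimate of a constant offset θ, with one-way delays d_up and d_down:

```latex
RTT = T_{recv} - T_{sent}, \qquad
\hat{\theta} = T_{server} + \tfrac{RTT}{2} - T_{recv}
             = \theta + \tfrac{d_{up} - d_{down}}{2}, \qquad
\lvert \hat{\theta} - \theta \rvert \le \tfrac{RTT}{2}
```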

2 answers

Cristian's algorithm (found in the presentation linked in aaa's answer) turned out to be just what I needed.

On my blog I have a complete (and rather elegant, if I do say so myself) implementation, using WCF REST on the server side and RestSharp plus the Reactive Framework on the client side.

Here is an excerpt from the blog post explaining the algorithm:

  • The Client sends a message to the Server: "What's the time?" [appending "Mr Wolf" is optional]. Crucially, it records the time at which it sent the message (call it T sent).
  • The Server replies as quickly as it can, giving the time according to its own clock, T server.
  • When the Client receives the reply, it records the time of receipt (call it T received). It then does some maths: the round-trip time is RTT = T received - T sent. Assuming the Server responded instantly and that network latency was the same in both directions, the Server must have sent its reply RTT/2 seconds ago. So at the moment the Client receives the message, the time on the Server is T server + RTT/2. The Client can then compare that against its own clock to work out the difference: the clock skew. (A minimal code sketch of these steps follows the list.)
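For concreteness, here is a minimal client-side sketch of the steps above in C#. It is only an illustration of the technique: the `/time` endpoint, the response format (the server's UTC time in ISO 8601 round-trip form) and the use of HttpClient rather than the WP7-era networking stack are assumptions of mine, not part of the original answer or blog post.

```csharp
using System;
using System.Diagnostics;
using System.Globalization;
using System.Net.Http;
using System.Threading.Tasks;

static class ClockSkew
{
    static readonly HttpClient Http = new HttpClient();

    // Estimates the offset (server clock minus client clock) from one request.
    public static async Task<TimeSpan> EstimateOffsetAsync(Uri timeEndpoint)
    {
        DateTime tSent = DateTime.UtcNow;            // T sent, on the client clock
        var stopwatch = Stopwatch.StartNew();

        // Hypothetical endpoint that returns the server's UTC time, e.g.
        // "2011-05-04T12:34:56.7890000Z" (the "o" round-trip format).
        string body = await Http.GetStringAsync(timeEndpoint);

        stopwatch.Stop();
        TimeSpan rtt = stopwatch.Elapsed;            // T received - T sent
        DateTime tReceived = tSent + rtt;            // T received, on the client clock

        DateTime tServer = DateTime.Parse(
            body, CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind);

        // Cristian's estimate: when the reply arrives, the server clock reads
        // roughly T server + RTT/2, assuming symmetric latency.
        DateTime serverNow = tServer + TimeSpan.FromTicks(rtt.Ticks / 2);
        return serverNow - tReceived;                // positive => server is ahead
    }
}
```

A caller could then convert a broadcast server timestamp into local terms with something like `localFireTime = serverFireTime - offset`, where `offset` is the value returned above.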

Not a statistical model / algorithm, but ... I would do this by recording the time it takes to call the server and get a response.

I would use half of this time (assuming a request takes as long to travel as a response) to estimate any difference. I would send that half-time value to the server along with the actual device time, and let the server work out the difference (taking the half-time offset into account).
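The server-side arithmetic this describes might look like the following; the names, and the idea of posting the client's UTC time plus the measured half-RTT in one call, are assumptions of mine, since the answer gives no code.

```csharp
using System;

public static class SkewCalculator
{
    // The client reports its current UTC clock reading together with half of
    // the round-trip time it measured on an earlier, timed call.
    public static TimeSpan ComputeSkew(DateTime clientUtcNow, TimeSpan halfRtt)
    {
        // The client's reading is roughly halfRtt old by the time it arrives,
        // so advance it by that amount before comparing.
        DateTime estimatedClientNow = clientUtcNow + halfRtt;
        return DateTime.UtcNow - estimatedClientNow; // positive => server is ahead
    }
}
```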

This assumes that the second server call (the one carrying the half-time value) takes about as long as the first, timed request. A web farm, load balancing, or uneven server load could throw this off. Make sure the methods handling this process do as little as possible, to avoid adding further delay.

You could make several calls and use the average time to smooth out variation between requests. Experiment to see whether it is worth it.
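If you do take several samples, the combining step could be as simple as averaging the per-request offset estimates (a sketch, not from the answer; keeping only the sample with the smallest round trip is a common alternative, since its error bound of ±RTT/2 is the tightest):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class OffsetCombiner
{
    // Averages several per-request skew estimates to smooth out network jitter.
    public static TimeSpan AverageOffset(IReadOnlyList<TimeSpan> offsets)
    {
        long averageTicks = (long)offsets.Average(o => o.Ticks);
        return TimeSpan.FromTicks(averageTicks);
    }
}
```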

It all depends on how accurate you need to be. If you really need (close to) perfect accuracy, you may be out of luck.


Source: https://habr.com/ru/post/1340759/

