Not a statistical model / algorithm, but ... I would do this by recording the time it takes to call the server and get a response.
I would use half of that round-trip time (assuming a request takes about as long to travel as the response) as an estimate of the one-way latency. Then I would send that half-trip value to the server along with the device's current time and let the server work out the clock difference, using the half-trip value as the offset.
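A minimal sketch of what I mean (Python here just for illustration; the URL, the endpoint, and the field names are assumptions, and any transport would do):

```python
import time

import requests  # assumed HTTP client; any transport works

SERVER_URL = "https://example.com/time"  # hypothetical endpoint on your server


def measure_round_trip(url: str = SERVER_URL) -> float:
    """Time one full request/response cycle against the server."""
    start = time.monotonic()
    requests.get(url, timeout=5)
    return time.monotonic() - start


def report_device_time(url: str = SERVER_URL) -> None:
    """Send the device clock plus half the measured round trip,
    so the server can work out the clock difference on its side."""
    half_trip = measure_round_trip(url) / 2.0
    requests.post(url, json={
        "device_time": time.time(),  # device wall-clock at send time
        "half_trip": half_trip,      # estimated one-way latency
    }, timeout=5)


# On the server side (hypothetical handler): when the report arrives,
# device_time + half_trip should be roughly the server's "now", so:
#     offset = server_now - (device_time + half_trip)
```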
This assumes the second server call (the one carrying the half-trip value) takes about as long as the first, timed request. A web farm, load balancing, or uneven server load can undermine that assumption, so make the server methods involved in this process do as little as possible to avoid adding extra delay.
You can also make several calls and average the round-trip times to smooth out variation between individual requests; see the sketch below. Experiment to make sure it's worth the extra traffic.
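If you do average, the helper is small; this is a hypothetical sketch along the same lines as above:

```python
import time

import requests  # assumed HTTP client, as above


def average_half_trip(url: str, samples: int = 5) -> float:
    """Average several round-trip measurements to smooth out per-request
    jitter, then halve the result to estimate the one-way latency."""
    trips = []
    for _ in range(samples):
        start = time.monotonic()
        requests.get(url, timeout=5)
        trips.append(time.monotonic() - start)
    return (sum(trips) / len(trips)) / 2.0
```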
It all depends on how much accuracy you need. If you really need near-perfect accuracy, you might be out of luck.