Difference in measuring SOAP call execution time between server logs and my client

I am writing a client for a specific SOAP API. The calls take much longer than I expected, so I contacted the owner of the API, who told me this:

The average GetPrices call duration over the last 5 days, computed from two independent sources that record call duration, is just above 25 milliseconds for your account, which matches the average for most other clients over the same period. The two sources are our Sentry logs, which measure the duration of a call across all component applications, and the IIS logs, which also include the time to deliver the response from our API server to the calling client's machine.

During the same 5 days, for the same GetPrices call, I averaged 0.08-0.1 s, which is roughly 3-4 times what the server logs show.

What could be causing such a big difference between my measurements and the measurements of the owner of the API?

The way I measure the runtime is very simple:

import time

start_time = time.time()
# ... GetPrices call ...
elapsed = time.time() - start_time
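For comparing against server-side averages it helps to use a monotonic clock (`time.perf_counter()` rather than `time.time()`, which can jump if the system clock is adjusted) and to average over many calls. A minimal sketch; `fn` is a hypothetical placeholder standing in for the actual GetPrices call:

```python
import time

def time_call(fn, n=10):
    """Call fn() n times with a monotonic clock; return the average seconds."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        fn()  # stand-in for the GetPrices SOAP call
        samples.append(time.perf_counter() - start)
    return sum(samples) / len(samples)
```

Usage would look something like `avg = time_call(lambda: client.service.GetPrices(...))`, depending on which SOAP library you use.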

Please let me know if there is anything else that I could provide.

2 answers

Consider the following table. In short: the server times only its own workload, while your measurement also includes network transit in both directions and your client-side processing.

client                      server
start timing
client get (small data) ->  server receive request
                        <-  server ack request start timing
client receive ack
client waiting response     server workload to create response
                        <-  server response (big data)
client receive response
client ack response     ->  server stop timing
client workload parsing response
client stop timing

If you start and stop your timing at the ack points, the difference between your measurements and the server's will shrink.
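The diagram above can be sketched as a simulation. The delay values below are assumptions chosen to mirror the numbers in the question (25 ms of server workload, ~20 ms transit each way); the point is only that the client's total necessarily exceeds what the server's logs report:

```python
import time

NETWORK_ONE_WAY = 0.02   # simulated transit time per direction (assumption)
SERVER_WORKLOAD = 0.025  # what the server's own logs would report
CLIENT_PARSING = 0.01    # simulated client-side response parsing

def server_handle():
    """Server times only its workload, not delivery to the client."""
    start = time.perf_counter()
    time.sleep(SERVER_WORKLOAD)
    return time.perf_counter() - start

def client_call():
    """Client times everything: transit both ways, workload, parsing."""
    start = time.perf_counter()
    time.sleep(NETWORK_ONE_WAY)       # request travels to the server
    server_seconds = server_handle()  # server does its work
    time.sleep(NETWORK_ONE_WAY)       # response travels back
    time.sleep(CLIENT_PARSING)        # client parses the response
    return time.perf_counter() - start, server_seconds

client_seconds, server_seconds = client_call()
# client_seconds is roughly network*2 + workload + parsing,
# several times larger than server_seconds alone
```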


Where is the extra time actually spent: in the network or in your code? Capture the TCP traffic on your side (and, if possible, on the server side) with Wireshark and compare the packet timestamps; that will show where the delay occurs.


Source: https://habr.com/ru/post/1676506/

