I am working with a Service Fabric application that I can't quite get to perform as well as I had hoped.
The core scenario is one actor calling another. I record how long the call takes from the calling actor's side, and I also record the time spent inside the receiving actor.
What I see is that the receiving actor reports the actual workload taking just a few milliseconds (20 at most), while the caller logs anything from 50 ms to more than 2 seconds. The delay I can't account for happens before the actual logic runs; once the method returns, the caller receives the response quickly.
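To make the measurement concrete, here is a minimal sketch of how the two timings are taken. The names (IWorkerActor, DoWorkAsync, the actor id, the service URI) are simplified placeholders rather than my actual code:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Client;
using Microsoft.ServiceFabric.Actors.Runtime;

// Hypothetical interface standing in for my real actor contract.
public interface IWorkerActor : IActor
{
    Task<int> DoWorkAsync(string input);
}

// Receiving side: times only the method body. This is the number
// that comes out at a few milliseconds (20 at most).
public class WorkerActor : Actor, IWorkerActor
{
    public WorkerActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId) { }

    public Task<int> DoWorkAsync(string input)
    {
        var sw = Stopwatch.StartNew();
        int result = input.Length; // placeholder for the real workload
        sw.Stop();
        Console.WriteLine($"Actor-side work: {sw.ElapsedMilliseconds} ms");
        return Task.FromResult(result);
    }
}

// Calling side: times the full proxy round trip, which also covers
// name resolution, connection setup, and actor activation. This is
// the number that ranges from 50 ms to over 2 seconds.
public static class Caller
{
    public static async Task<int> CallWorkerAsync(string input)
    {
        IWorkerActor proxy = ActorProxy.Create<IWorkerActor>(
            new ActorId("worker-1"),                      // placeholder id
            new Uri("fabric:/MyApp/WorkerActorService")); // placeholder URI

        var sw = Stopwatch.StartNew();
        int result = await proxy.DoWorkAsync(input);
        sw.Stop();
        Console.WriteLine($"Caller-side round trip: {sw.ElapsedMilliseconds} ms");
        return result;
    }
}
```

The gap I'm asking about is the difference between those two numbers.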
Is this to be expected? It is definitely worse when a brand-new actor instance is activated, but I see it even when calling an actor that I already called moments earlier.
The parameters being passed are quite simple, so I don't suspect deserialization is the problem.
I understand that the actors will be distributed across the cluster, but overhead of this magnitude seems disproportionate.
So my question is: is this "as expected", or does it indicate that we are doing something wrong?
I should add that this is in a quiet test environment, so actors being blocked by other requests should not be a factor.
I can provide additional information on request, but I'm not sure what would be most relevant.