I am working on an application that is split into a thin client and a server part, which communicate over TCP. We often have the server make asynchronous calls (notifications) to the client to report status changes. This prevents the server from wasting time waiting for the client's confirmation. More importantly, it avoids deadlocks.
Such a deadlock can occur as follows. Suppose the server sends a status-change notification synchronously (admittedly a somewhat contrived example). While processing the notification, the client must synchronously request some information from the server. But the server cannot answer, because it is still blocked waiting for the reply to its own call.
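To make the scenario concrete, here is a minimal two-thread sketch of that deadlock. All names and the queue-based "protocol" are illustrative, not taken from the real application; timeouts stand in for the indefinite blocking you would see in practice.

```python
import queue
import threading

# Toy model: each side runs a single message loop and blocks on the
# other's reply. Queues stand in for the TCP connection.
to_client = queue.Queue()   # messages from server to client
to_server = queue.Queue()   # messages from client to server
stalled = []                # records which side got stuck

def server():
    # Send a *synchronous* status-change notification: the server
    # blocks until the client acknowledges it.
    to_client.put(("notify", "status-changed"))
    msg = to_server.get(timeout=1.0)
    if msg[0] != "ack":
        # The client's query arrived instead of the ack; the
        # single-threaded server cannot serve it while it is still
        # waiting for its own call to complete.
        stalled.append("server")

def client():
    msg = to_client.get()   # receive the notification
    if msg[0] == "notify":
        # Handling the notification requires a synchronous query
        # back to the server, so the client now blocks as well.
        to_server.put(("query", "state?"))
        try:
            to_client.get(timeout=1.0)
        except queue.Empty:
            stalled.append("client")   # the reply never comes

t1 = threading.Thread(target=server)
t2 = threading.Thread(target=client)
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(stalled))   # → ['client', 'server']
```

Each side is waiting on the other, so neither call can ever complete; only the timeouts let the sketch terminate.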
Now, this deadlock can be avoided by sending the notification asynchronously, but that creates another problem. If asynchronous calls are made faster than they can be processed, the call queue keeps growing. If this situation persists long enough, the queue eventually fills up completely (is flooded with messages). My question is: what can be done when this happens?
My problem can be summarized as follows. Do I really have to choose between sending notifications without blocking, at the risk of flooding the message queue, and blocking when sending notifications, at the risk of causing a deadlock? Is there a trick to avoid flooding the message queue?
Note: To repeat, the server does not block when sending notifications; they are sent asynchronously.
Note: In my example I used two communicating processes, but the same problem exists between two communicating threads.