I am writing a simple homogeneous cluster application in Akka 2.2.3 and Scala: a particle filtering algorithm in which each node shares data with other cluster members at random times. This is currently a research application, not a business-critical system.
Currently, each node sends a fixed-size message to a randomly selected node every second. This works, but I run into performance issues as the deployment scales (e.g. cloud versus on-premises):
- A node can be overloaded by its own outgoing traffic.
- A node can be overloaded by incoming messages from other cluster members.
- The network itself may become a bottleneck.
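To make the scaling concern concrete, here is a rough back-of-the-envelope model of the fixed-rate scheme (plain Scala, no Akka needed; the node count `n` and message size `msgBytes` are assumed numbers, not from the question). With each node sending one message per second to a uniformly random peer, aggregate traffic grows linearly with `n`, and the number of messages a given node receives in one tick follows Binomial(n, 1/n), so short inbound bursts are expected even though the mean stays at one message per second:

```scala
// Assumed parameters for illustration only.
val n = 100                                 // cluster size (assumption)
val msgBytes = 64 * 1024                    // fixed message size (assumption)

// Total bytes per second crossing the network grows linearly with n.
val aggregateBytesPerSec = n.toLong * msgBytes

// P(a given node receives at least k messages in one one-second tick),
// with inbound count ~ Binomial(n, 1/n).
def pAtLeast(k: Int): Double = {
  val p = 1.0 / n
  // Binomial coefficient computed incrementally to avoid overflow.
  def choose(a: Int, b: Int): Double =
    (0 until b).map(i => (a - i).toDouble / (i + 1)).product
  1.0 - (0 until k).map(i => choose(n, i) * math.pow(p, i) * math.pow(1 - p, n - i)).sum
}
```

For n = 100 this gives roughly a 63% chance of receiving at least one message in a tick, but still about a 2% chance of receiving four or more at once, which is where transient mailbox and network pressure comes from.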
I would like to run the application at various cluster sizes on different networks and get good performance without manual tuning or monitoring. What simple approaches could be used to adjust the size and frequency of messages to mitigate the problems above?
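As an illustration of the kind of self-tuning I have in mind (a hypothetical sketch, not an existing Akka facility): the send interval could be adapted AIMD-style, as in TCP congestion control, backing off multiplicatively when a congestion signal is observed (e.g. a growing mailbox or send timeouts) and speeding up additively otherwise. The `SendPolicy` type and its bounds are my own invention for this sketch:

```scala
// Hypothetical controller for the per-node send interval.
// `congested` would be driven by some observed signal, e.g. mailbox depth.
final case class SendPolicy(intervalMs: Long, minMs: Long = 100L, maxMs: Long = 10000L) {
  def next(congested: Boolean): SendPolicy =
    if (congested)
      copy(intervalMs = math.min(maxMs, intervalMs * 2))  // multiplicative backoff
    else
      copy(intervalMs = math.max(minMs, intervalMs - 50)) // additive speed-up
}
```

An actor could re-schedule its next send with `scheduleOnce` using the current `intervalMs`, feeding each observation back through `next`. Whether this converges well across heterogeneous networks is exactly the open question.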