I don't think Akka and Reactor are an apples-to-apples comparison. Reactor is intentionally minimal, with only a couple of external dependencies. It gives you a basic set of tools for writing event-driven applications but, by design, does not impose a specific model. In fact, it would not take long to implement a Dynamo-style system using Reactor components. All that would be necessary, and most likely all that ever will be, is a cookbook showing how to combine things.
The Dynamo model that Akka uses is a proven system; Basho built a fantastic implementation of it in Riak, and kudos to Akka for their leadership in this regard. If we introduce a clustering system in Reactor, it will probably follow the Dynamo model. But since Reactor is basically just event handlers and pub/sub, users can do whatever kind of remote interaction they want. They can integrate with HTTP, AMQP, Redis, anything. There is no need for special APIs for this kind of thing, because these are just events. You could whip up an AMQP client in about ten minutes and publish data from RabbitMQ into a Reactor application.
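To make the "everything is just events" point concrete, here is a minimal sketch in plain Java. This is not the actual Reactor API; `EventBus`, `on`, and `notifyTopic` are hypothetical names standing in for Reactor's selector-based handler registration, and the commented-out RabbitMQ callback shows where a transport would hand its messages to the bus.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Hypothetical minimal event bus illustrating the pattern; NOT the Reactor API.
class EventBus {
    private final Map<String, List<Consumer<Object>>> handlers = new ConcurrentHashMap<>();

    // Register a handler for a topic (Reactor does this with selectors).
    void on(String topic, Consumer<Object> handler) {
        handlers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    // Publish an event; any transport (AMQP, HTTP, Redis) would just call this.
    void notifyTopic(String topic, Object payload) {
        handlers.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
    }
}

public class AmqpBridgeSketch {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        StringBuilder seen = new StringBuilder();

        // The application reacts to events, unaware of where they came from.
        bus.on("orders", payload -> seen.append("got:").append(payload));

        // A RabbitMQ delivery callback would simply republish onto the bus, e.g.:
        // channel.basicConsume(queue, (tag, delivery) ->
        //     bus.notifyTopic("orders", new String(delivery.getBody())), tag -> {});
        bus.notifyTopic("orders", "order-42");

        System.out.println(seen);  // got:order-42
    }
}
```

The point of the sketch: the bridge to an external system is just another publisher, so no transport-specific API has to exist in the core.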
We could very well, at some point, have different clustering implementations for different purposes. The Dynamo model may work well for some, while others need a simple Redis-based system. Or maybe you could use components already in Reactor together with Java Chronicle to build disk-based clustering, something you could do right now just by wiring up the right Consumers. But these would be external modules added on top of Reactor. The Reactor core itself will probably never ship an opinionated clustering solution, simply because that does not fit the purpose of the core components: to be the foundation for event-driven applications on the JVM.
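As a rough illustration of "just wiring up the right Consumers" for disk persistence: the sketch below is a plain-Java stand-in, not Java Chronicle itself. `DiskJournalConsumer` is a hypothetical name, and a real implementation would swap the file append for Chronicle's memory-mapped queue.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.function.Consumer;

// Sketch: a Consumer that journals every event it receives to disk.
// Java Chronicle would replace the file append; this is only a stand-in.
public class DiskJournalConsumer implements Consumer<String> {
    private final Path journal;

    public DiskJournalConsumer(Path journal) {
        this.journal = journal;
    }

    @Override
    public void accept(String event) {
        try {
            // Append one event per line; CREATE makes the file on first write.
            Files.writeString(journal, event + System.lineSeparator(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("journal", ".log");
        DiskJournalConsumer journal = new DiskJournalConsumer(tmp);
        journal.accept("node-joined");
        journal.accept("node-left");
        System.out.println(Files.readAllLines(tmp));  // [node-joined, node-left]
    }
}
```

Registered as an ordinary event handler, a consumer like this gives you durable replay without the core framework knowing anything about persistence.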
(I am currently working on the TcpClient / TcpServer wiki docs, so hopefully they will be populated in time for the Reactor M2 release, which should happen very soon.)