Discovering Akka actors in a cluster

Recently I've been trying to wrap my head around the concept of Akka and actor systems. While I have a pretty good understanding of the Akka basics by now, I'm still struggling with a few things when it comes to clustering and remote actors.

Let me illustrate the problem with the WebSocket chat example that ships with Play Framework 2.0: there is one actor that holds the WebSockets and keeps the list of currently connected users. The actor represents the chat room both technically and logically. This works fine as long as a single chat room runs on a single server.

Now I'm trying to understand how this example would have to be extended once we are talking about many dynamic chat rooms (rooms can be opened/closed at any time) running on a cluster of servers (with single nodes being added or removed according to current demand). In that case user A could connect to server 1 while user B connects to server 2, and both should still be able to talk in the same chat room. Each server would still have an actor (per chat room?) that holds the WebSocket instances, so it can receive and publish events (messages) to the right users. But logically there should exist only one chat room actor, on either server 1 or server 2, that holds the list of connected users (or similar responsibilities).

How would one go about achieving this, preferably in a clean way and without adding an extra messaging system such as ZeroMQ or RabbitMQ?

Here is what I have come up with so far (a rough sketch follows the list); please let me know whether it makes sense:

  • User A connects to server 1 and an actor is allocated that holds his WebSocket.
  • The actor checks (using a Router? the EventBus? something else?) whether a "chat room actor" for the requested room exists on any of the connected cluster nodes. Since there is none, it somehow requests the creation of a new chat room actor and will send and receive future chat messages via this actor.
  • User B connects to server 2, and an actor is allocated for his WebSocket as well.
  • It likewise checks whether an actor for the requested chat room exists anywhere and finds it on server 1.
  • The chat room actor on server 1 now acts as the hub for this chat room: it sends messages to all "connected" session actors and distributes the incoming ones.
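
To make that flow a bit more concrete, here is a rough Scala sketch of what I imagine; all names are made up, and the two ??? methods are exactly the parts I don't know how to implement across nodes:

```scala
import akka.actor.{Actor, ActorRef}

case class JoinRoom(name: String)
case class Join(session: ActorRef)
case class ChatMessage(room: String, text: String)

// Hypothetical session actor: one per connected WebSocket.
class SessionActor(socket: ActorRef) extends Actor {
  def receive = {
    case JoinRoom(name) =>
      // Router? EventBus? Something else?
      val room = lookupChatroom(name).getOrElse(createChatroomSomewhere(name))
      room ! Join(self)
    case msg: ChatMessage =>
      socket ! msg // push the event to the connected user's WebSocket
  }

  def lookupChatroom(name: String): Option[ActorRef] = ??? // across all cluster nodes
  def createChatroomSomewhere(name: String): ActorRef = ??? // on some suitable node
}
```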

If server 1 goes down, the chat room actor would have to be re-created on / moved to server 2 somehow, although this is not my main concern right now. What I'm most interested in is how this dynamic discovery of actors spread across various, mostly independent machines can be accomplished with Akka's toolset.

I've been looking at the Akka documentation for quite some time now, so maybe I'm missing the obvious here. If so, please enlighten me :-)

+42
akka playframework akka-cluster
3 answers

I am working on a private project that is basically a very extended version of the chat room example, and I also had a rough start getting to grips with akka and the whole "decentralized" mindset. So I can tell you how I "solved" my extended chat room:

I wanted a server that could be deployed multiple times without any additional configuration. I use Redis as the storage for all open user sessions (simple serialization of their ActorRefs) and for all chat rooms.

The server runs the following actors:

  • WebsocketSession: it holds the connection to one user, handles the user's requests, and forwards messages coming from the system.
  • ChatroomManager: this is the central broadcaster, deployed on every instance of the server. If a user wants to send a message to a chat room, the WebsocketSession actor sends all the information to the ChatroomManager actor, which then broadcasts the message to all members of the chat room.

So here is my procedure (a sketch of the broadcast step follows the list):

  • User A connects to server 1, which allocates a new WebsocketSession. This actor inserts its own absolute path into Redis.
  • User A joins chat room X, which also inserts his absolute path (I use it as the unique identifier of a user's session) into Redis (each chat room has a set of "connections").
  • User B connects to server 2 -> Redis.
  • User B joins chat room X -> Redis.
  • User B sends a message to chat room X as follows: User B sends his message over the WebSocket to his session actor, which (after some checks) sends it on to the ChatroomManager actor. This actor actually retrieves the member list of the chat room from Redis (the absolute paths are used with akka's actorFor method) and then sends the message to each member's session actor. These session actors then write to their WebSockets.
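
To illustrate the last step, here is a stripped-down sketch of the broadcast; the Redis key layout and the Jedis usage are simplifications of what I actually do:

```scala
import akka.actor.Actor
import redis.clients.jedis.Jedis
import scala.collection.JavaConverters._

case class Broadcast(room: String, text: String)

// Simplified ChatroomManager: looks up the room's member paths in Redis
// and resolves each one with actorFor (Akka 2.0/2.1-style remote lookup).
class ChatroomManager extends Actor {
  val redis = new Jedis("localhost") // connection handling omitted

  def receive = {
    case Broadcast(room, text) =>
      // each chat room has a set of absolute actor paths in Redis,
      // e.g. "akka://chat@10.0.0.1:2552/user/session42" (key name made up)
      val memberPaths = redis.smembers("chatroom:" + room).asScala
      for (path <- memberPaths)
        context.actorFor(path) ! text // the session actor writes to its WebSocket
  }
}
```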

I do some ActorRef caching in each ChatroomManager actor to gain extra speed. I think this differs from your approach, especially in that these ChatroomManagers handle requests for all chat rooms. But having one actor per chat room is a single point of failure, which I wanted to avoid. Moreover, it would trigger a lot more messages, e.g.:

  • User A and user B are on server 1.
  • Chat room X is hosted on server 2.

If user A wants to talk to user B, both of their messages would have to travel through the chat room actor on server 2.

In addition, I used akka features such as (round-robin) routers to create several instances of the ChatroomManager actor on each system, to handle many requests in parallel.
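
For illustration, creating such a router looks roughly like this (Akka 2.0/2.1-style API; the instance count is arbitrary):

```scala
import akka.actor.{ActorSystem, Props}
import akka.routing.RoundRobinRouter

val system = ActorSystem("chat")

// five ChatroomManager instances behind one round-robin router
val chatroomManagers = system.actorOf(
  Props[ChatroomManager].withRouter(RoundRobinRouter(nrOfInstances = 5)),
  name = "chatroomManagers")
```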

I spent a few days setting up the whole akka remoting infrastructure in combination with serialization and Redis. But now I can create an arbitrary number of instances of the server application which share their ActorRefs via Redis (serialized as absolute paths with ip + port).

Maybe this helps you a little further, and I am open to new questions (please, not about my English ;).

+11

The key to scaling out across multiple machines is to keep volatile state as isolated as possible. Although you could use a distributed cache to coordinate state across all nodes, that would give you both synchronization and bottleneck problems when scaling to a large number of nodes. Ideally there should be a single actor that knows about the messages and the members of a chat room.

The crux of your problem is whether a chat room is represented by a single actor running on a single machine, or indeed whether such a room needs to exist at all. The trick is to route all requests relating to a given chat room using an identifier, such as the name of the chat room: compute the hash of the name and, depending on that number, pick one of your n boxes. That node will know about its currently existing chat rooms and can safely find or create the right chat room actor for you.
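
A minimal sketch of that routing idea, assuming the nodes are known as a plain list of addresses:

```scala
// Deterministic hash-and-mod: every node computes the same owner
// for a given chat room name, so any node can forward requests to it.
def nodeFor(roomName: String, nodes: IndexedSeq[String]): String = {
  val idx = ((roomName.hashCode % nodes.size) + nodes.size) % nodes.size
  nodes(idx)
}

// hypothetical node addresses
val nodes = Vector("akka://chat@host1:2552", "akka://chat@host2:2552")
val owner = nodeFor("scala-rooms", nodes) // always the same node for this room
```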

You can take a look at the following blog posts on clustering and scaling in Akka:

http://blog.softmemes.com/2012/06/16/clustered-akka-building-akka-2-2-today-part-1/

http://blog.softmemes.com/2012/06/16/clustered-akka-building-akka-2-2-today-part-2/

+9

I would use Zookeeper + Norbert to know which hosts are going up and down:

http://www.ibm.com/developerworks/library/j-zookeeper/

Now every node in my chat server farm can know about all the hosts in the logical cluster. They get a callback when a node goes offline (or comes online). Any node can now keep a sorted list of the current cluster members, hash the chat room id, and mod it by the size of the list to get the index of the node in the list that should host any given chat room. We can add 1 and rehash to select a second index (looping until we get a fresh index) to compute a second host that keeps a second copy of the chat room for redundancy. On each of the two chat room hosts there is a chat room actor that simply forwards all chat messages to every WebSocket actor that is a member of the chat room.
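
Sketched in code, picking the two hosts for a chat room could look like this (names are hypothetical; it assumes the cluster has at least two members):

```scala
// Primary host by hash-and-mod, secondary by add-1-and-rehash until
// we land on a different index, as described above.
def hostsFor(chatId: String, members: IndexedSeq[String]): (String, String) = {
  require(members.size >= 2, "need at least two hosts for redundancy")
  def index(h: Int) = ((h % members.size) + members.size) % members.size
  val primary = index(chatId.hashCode)
  var h = chatId.hashCode + 1
  while (index(h) == primary) h += 1 // loop until we get a fresh index
  (members(primary), members(index(h)))
}
```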

Now we can send chat messages via both active chat room actors with a custom Akka router. A client just sends its message once, and the router does the hash-and-mod and sends it to the two remote chat room actors. I would use the Twitter snowflake algorithm to generate unique 64-bit ids for the messages being sent; see the algorithm in the nextId() method of the code at the link below. The datacenterId and workerId can be set from norbert properties to ensure that no clashing ids are created on different servers:

https://github.com/twitter/snowflake/blob/master/src/main/scala/com/twitter/service/snowflake/IdWorker.scala
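
For reference, the bit layout in IdWorker.scala boils down to roughly this (the epoch constant is the one used in the linked code):

```scala
// Snowflake id layout: 41 bits of timestamp since a custom epoch,
// then 5 bits datacenterId, 5 bits workerId, 12 bits per-ms sequence.
object Snowflake {
  val Twepoch = 1288834974657L // custom epoch from Twitter's IdWorker

  def composeId(timestampMs: Long, datacenterId: Long,
                workerId: Long, sequence: Long): Long =
    ((timestampMs - Twepoch) << 22) | (datacenterId << 17) |
      (workerId << 12) | sequence
}
```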

Now two copies of every message arrive at each client endpoint via each of the two active chat room actors. On each WebSocket client actor I would unmask the snowflake ids to learn the datacenterId+workerId number of the sending node and keep track of the highest chat message number seen from each host in the cluster. I would then ignore any message that is not higher than what has already been seen at that client for the given sender node. This deduplicates the pair of messages coming in via the two active chat room actors.
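
The deduplication could be sketched like this; the shift amounts follow the bit layout above:

```scala
// Per-client dedup: unmask the sender bits of each snowflake id and
// drop anything not newer than the last id seen from that sender.
class Deduplicator {
  private var lastSeen = Map.empty[Long, Long] // sender bits -> highest id seen

  def isFresh(id: Long): Boolean = {
    val sender = (id >> 12) & 0x3FF // 10 bits: datacenterId + workerId
    if (id > lastSeen.getOrElse(sender, Long.MinValue)) {
      lastSeen += sender -> id
      true  // first copy: deliver it
    } else
      false // duplicate or stale copy: drop it
  }
}
```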

So far so good: we have resilient messaging in the sense that if any node dies, one surviving copy of each chat room remains and messages keep flowing uninterrupted via that second chat room actor.

Next we need to deal with nodes being removed from, or added back to, the cluster. We get a norbert callback in each node notifying us of the cluster membership change. On this callback we can send an akka message via a custom router stating the new membership list and the current host name. The custom router on each node will see that message and update its state to reflect the new cluster membership, so it can compute the new pair of nodes to send any given chat room's traffic to. This acknowledgement of the new cluster membership will be sent by the router to all nodes, so that every server can track when all servers have caught up with the membership change and are sending messages correctly again.
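
Roughly, such a custom routing actor could look like this (all message and actor names are made up):

```scala
import akka.actor.Actor

case class NewMembership(members: IndexedSeq[String], myHost: String)
case class MembershipAck(host: String)
case class RoomMessage(room: String, text: String)

class ChatMessageRouter extends Actor {
  var members = IndexedSeq.empty[String]

  def receive = {
    case NewMembership(newMembers, myHost) =>
      members = newMembers.sorted // the same sorted view on every node
      // acknowledge to all nodes that this server has caught up
      members.foreach(m => managerOn(m) ! MembershipAck(myHost))
    case msg: RoomMessage =>
      val (primary, secondary) = hostsFor(msg.room, members)
      managerOn(primary) ! msg
      managerOn(secondary) ! msg
  }

  // the primary/secondary selection from the earlier sketch
  def hostsFor(room: String, ms: IndexedSeq[String]): (String, String) = {
    def index(h: Int) = ((h % ms.size) + ms.size) % ms.size
    val p = index(room.hashCode)
    var h = room.hashCode + 1
    while (index(h) == p) h += 1
    (ms(p), ms(index(h)))
  }

  def managerOn(host: String) =
    context.actorFor("akka://chat@" + host + ":2552/user/chatrooms")
}
```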

The surviving chat room may still be active after the membership change. In that case all routers on all nodes keep sending to it as usual, but they also speculatively send each message to the new chat room host. That second chat room may not be up yet, but that is not a problem, since the messages flow via the survivor. If, however, the surviving chat room is no longer one of the active pair after the membership change, all routers on all hosts will at first send to three hosts: the survivor and the two new nodes. The akka death watch mechanism can be used so that all nodes eventually see the shutdown of the surviving chat room and go back to routing the chat traffic via two hosts.
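
A minimal death-watch sketch, with hypothetical message names:

```scala
import akka.actor.{Actor, ActorRef, Terminated}

case class WatchSurvivor(room: String, survivor: ActorRef)
case class SurvivorGone(room: String)

// Watches surviving chat room actors; once one terminates, the routers
// can drop back to sending via the two new hosts only.
class SurvivorWatcher extends Actor {
  var watched = Map.empty[ActorRef, String]

  def receive = {
    case WatchSurvivor(room, survivor) =>
      watched += survivor -> room
      context.watch(survivor) // we get Terminated(survivor) when it stops
    case Terminated(survivor) =>
      watched.get(survivor).foreach(room => context.parent ! SurvivorGone(room))
      watched -= survivor
  }
}
```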

Then we need to migrate the chat room from the surviving server onto one or two new hosts, depending on the circumstances. At some point the surviving chat room actor will receive the new cluster membership message. It will start by sending a copy of the chat room's membership to the new hosts. That message will create a new copy of the chat room actor, with the correct membership, on the new nodes. If the survivor is no longer one of the two nodes that should hold the chat room, it goes into decommissioning mode. In decommissioning mode it only forwards messages to the new primary and secondary nodes, no longer to any chat room members. Akka message forwarding is perfect for this.

The decommissioning chat room listens for the norbert cluster membership acknowledgement messages from each node. Eventually it sees that all nodes in the cluster have acknowledged the new cluster membership. It then knows that it will receive no further messages to forward, and it can kill itself. Akka hotswapping is ideal for implementing the decommissioning behaviour.
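
A sketch of that decommissioning mode using akka's hotswap (context.become); the message names and the plain String payload are simplifications:

```scala
import akka.actor.{Actor, ActorRef}

case class Decommission(primary: ActorRef, secondary: ActorRef)
case object AllNodesAcked

class ChatroomActor extends Actor {
  var members = Set.empty[ActorRef]

  def receive = active

  def active: Receive = {
    case text: String =>
      members.foreach(_ ! text) // normal mode: broadcast to room members
    case Decommission(primary, secondary) =>
      context.become(decommissioning(primary, secondary))
  }

  def decommissioning(primary: ActorRef, secondary: ActorRef): Receive = {
    case text: String =>
      primary forward text   // no longer broadcast; just forward
      secondary forward text // to the new pair of chat room hosts
    case AllNodesAcked =>
      context.stop(self) // every node has acked: no more traffic will come
  }
}
```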

So far so good: we have a fault-tolerant messaging system that does not lose messages on node failure. At the moment of a cluster membership change we get a spike of intra-network traffic as chat rooms are copied to new nodes, plus a residual flurry of intra-node message forwarding until all servers have caught up with which two servers each chat room has moved to. If we want to scale the system up, we can wait for a low point in user traffic and simply turn on a new node. The chat rooms will be redistributed across the new set of nodes automatically.

The above description is based on reading the following paper and translating it into akka concepts:

https://www.dropbox.com/s/iihpq9bjcfver07/VLDB-Paper.pdf

+6
Jul 02 '13 at 21:47