WCF chat functionality: duplex callbacks or polling?

I am adding a chat room to my C# program using WCF. I need to be able to push information from the server to the clients on two events:

  • When a user connects / disconnects, I update the list of connected users and send it to all clients for display in a TextBlock
  • When a user sends a message, I need the server to send this message to all clients

So I am looking for advice on the best way to implement this. I was going to use netTcpBinding with duplex callbacks to the clients, but I ran into the problem that you cannot call a client back once its connection is closed, and I need to use per-call instancing for scalability. I was told in this thread that I should not keep the connections open, as that "significantly limits scalability" - WCF duplex callbacks, how to send a message to all clients?
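For illustration, the duplex contract this implies would look something like the following (just a sketch; the interface and operation names are hypothetical):

```csharp
using System.ServiceModel;

// The callback contract is what the server uses to push user-list changes
// and chat messages back to connected clients.
[ServiceContract(CallbackContract = typeof(IChatCallback))]
public interface IChatService
{
    [OperationContract]
    void Connect(string userName);      // server grabs the callback channel here

    [OperationContract]
    void Disconnect(string userName);

    [OperationContract(IsOneWay = true)]
    void SendMessage(string userName, string text);
}

public interface IChatCallback
{
    [OperationContract(IsOneWay = true)]
    void UserListChanged(string[] connectedUsers);       // refresh the TextBlock

    [OperationContract(IsOneWay = true)]
    void MessageReceived(string fromUser, string text);
}
```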

However, I have been reading the book “Programming WCF Services,” and the author seems to say this is not a problem, because “in between calls, the client holds a reference on a proxy that does not have an actual object at the end of the wire. This means you can dispose of the expensive resources the service instance occupies long before the client closes the proxy.”

  • So which is right: is it fine for clients to keep their proxies open?
  • But even if that is fine, it leads to another problem: if service instances are destroyed between calls, how can they make duplex callbacks to update the clients? On per-call instances, the same author says: “Since the object will be discarded once the method returns, you should not spin off background threads or dispatch asynchronous calls back into the instance.”
  • Would it be better to have the clients poll the service for updates? I would imagine this is much less efficient than duplex callbacks; clients could end up hitting the service 50 times for every call a callback approach would need. But maybe there is no other way? Would it scale? I am expecting a few hundred concurrent users. (A rough sketch of what polling would look like follows this list.)
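The polling alternative would look roughly like this (again only a sketch with a hypothetical contract and types; a real version would need paging and fault handling):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IChatPollingService
{
    [OperationContract]
    ChatUpdate GetUpdatesSince(long lastMessageId);
}

[DataContract]
public class ChatUpdate
{
    [DataMember] public long LastMessageId { get; set; }
    [DataMember] public string[] ConnectedUsers { get; set; }
    [DataMember] public string[] NewMessages { get; set; }
}

// Client side: call Poll() from a timer (say every couple of seconds)
// instead of holding a duplex channel open.
public class ChatPoller
{
    private readonly IChatPollingService _proxy;   // e.g. created via a ChannelFactory
    private long _lastSeen;

    public ChatPoller(IChatPollingService proxy) { _proxy = proxy; }

    public void Poll()
    {
        ChatUpdate update = _proxy.GetUpdatesSince(_lastSeen);
        _lastSeen = update.LastMessageId;
        // refresh the connected-user list and append update.NewMessages to the UI here
    }
}
```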
1 answer

Since I'm the one to blame for saying that server-side callbacks won't scale, I should probably explain a bit more. Let me start with your questions:

  • Not owning the book in question, I can only assume the author is referring either to HTTP-based transports or to plain request-response without callbacks. Callbacks require one of two things: either the server must keep an open TCP connection to the client (which means each connected client ties up resources on the server), or the server must be able to open a connection to a listening port on the client. Since you are using netTcpBinding, your situation is the former. wsDualHttpBinding is an example of the latter, but it brings so many routing and firewall problems that it is unusable over the Internet (I am assuming the public Internet is your target environment here; if not, let us know).

  • Your intuition about why server resources are needed for callbacks is correct. Again, wsDualHttpBinding is a little different, because in that case the server actually calls back to the client over a new connection to deliver the asynchronous response. That basically requires ports to be open on the client side and reachable through any firewalls, which you cannot expect of the average Internet user. There is much more on this here: WSDualHttpBinding for duplex callbacks

  • You could architect this several different ways, and it is understandable that you do not want the overhead (and potential latency) of clients constantly hammering the server for updates. With a few hundred concurrent users you are probably still within the range that one good server can handle with callbacks, but I assume you would like a system that can scale out (or cope with peak times). Here is what I would do (a sketch of the server-side pieces follows this list):

    • Use callback proxies (I know, I told you not to). When a client connects, its new callback proxy is stored in a thread-safe collection, which is periodically checked for liveness (and dead proxies are cleaned up).

    • Instead of the server relaying messages directly from one client to another, the server publishes messages to message queuing middleware. There are many options: MSMQ is popular on Windows, ActiveMQ and RabbitMQ are FOSS (free open source software), and TIBCO EMS is popular in large enterprises (but can be very expensive). What you probably want is a topic, not a queue (more on queues versus topics here).

    • Each server has a thread (or several threads) dedicated to reading messages off the topic; when a message is addressed to a live session on that server, it is delivered to that client through its callback proxy.
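A sketch of those server-side pieces, against a duplex callback contract like the one in the question, could look like this (IMessageTopic is a hypothetical stand-in for whichever MSMQ / ActiveMQ / RabbitMQ / EMS client library you end up using):

```csharp
using System.Collections.Concurrent;
using System.ServiceModel;
using System.Threading;

// Hypothetical abstraction over the chosen messaging product.
public interface IMessageTopic
{
    void Publish(ChatMessage message);
    ChatMessage Take();                              // blocks until the next broadcast message
}

public class ChatMessage
{
    public string FromUser { get; set; }
    public string Text { get; set; }
}

// Thread-safe collection of callback proxies, keyed by user name.
public static class CallbackRegistry
{
    public static readonly ConcurrentDictionary<string, IChatCallback> Proxies =
        new ConcurrentDictionary<string, IChatCallback>();
}

// Per-call service: Connect captures the callback channel, SendMessage publishes
// to the topic instead of calling the other clients directly.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class ChatService : IChatService
{
    private static IMessageTopic _topic;             // configured once at host startup

    public void Connect(string userName)
    {
        IChatCallback callback = OperationContext.Current.GetCallbackChannel<IChatCallback>();
        CallbackRegistry.Proxies[userName] = callback;
    }

    public void Disconnect(string userName)
    {
        IChatCallback removed;
        CallbackRegistry.Proxies.TryRemove(userName, out removed);
    }

    public void SendMessage(string userName, string text)
    {
        _topic.Publish(new ChatMessage { FromUser = userName, Text = text });
    }
}

// Background pump: reads messages off the topic and delivers them through any
// callback proxies that are still alive, cleaning up the dead ones.
public class TopicPump
{
    private readonly IMessageTopic _topic;

    public TopicPump(IMessageTopic topic) { _topic = topic; }

    public void Start()
    {
        new Thread(Run) { IsBackground = true }.Start();
    }

    private void Run()
    {
        while (true)
        {
            ChatMessage msg = _topic.Take();
            foreach (var pair in CallbackRegistry.Proxies)
            {
                var channel = (ICommunicationObject)pair.Value;
                if (channel.State != CommunicationState.Opened)
                {
                    IChatCallback dead;
                    CallbackRegistry.Proxies.TryRemove(pair.Key, out dead);
                    continue;
                }
                try
                {
                    pair.Value.MessageReceived(msg.FromUser, msg.Text);
                }
                catch (CommunicationException)
                {
                    IChatCallback dead;
                    CallbackRegistry.Proxies.TryRemove(pair.Key, out dead);
                }
            }
        }
    }
}
```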

Here is a rough sketch of the architecture:

[Diagram: queue-backed chat architecture]

This architecture should let you scale out simply by adding more servers and load-balancing new connections across them. The message queuing infrastructure would be the only limiting factor, and all of the products I mentioned will handle more than any workload you are likely to see. Since you will be using topics rather than queues, every message is broadcast to every server; at some point you may need to figure out a better way to distribute messages, for example by hash partitioning.
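If you ever reach that point, and assuming you have multiple rooms or conversations to split across, one simple scheme is to hash the room key onto one of N topics so each server only subscribes to the partitions it hosts. A tiny sketch (hypothetical naming):

```csharp
public static class TopicRouter
{
    // Routes each room's traffic to one of a fixed number of topic partitions so
    // that a server only has to subscribe to the partitions it hosts.
    // Note: if separate processes must agree on the partition, replace
    // string.GetHashCode() with a stable hash of your own.
    public static string TopicForRoom(string roomId, int partitionCount)
    {
        int partition = (roomId.GetHashCode() & 0x7fffffff) % partitionCount;
        return "chat.partition." + partition;
    }
}
```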

