Immediate and scheduled business operations - MSMQ / NServiceBus / MS SQL Service Broker / Windows Task Scheduler?

Here's the scenario I'm dealing with:

A typical large-scale CMS application that exposes the following business operations:

  • Users publish content immediately.
  • Users schedule content to be published later.
  • An automated process runs, say, every 10 minutes, picks up the scheduled content that is ready (simply a business-object status) and sends it off for publication.

This is a simplified description of the real system. By publication I mean a complex process involving four main subsystems: building the call arguments, generating the HTML content by invoking another web application (ASP.NET), committing the transactional parts, and notifying users of the publication results. That rules out running the whole process in a single mega-transaction; it would be possible, but not practical in terms of scalability and performance.

There are several options for the subsystems to communicate with each other, for example MSMQ, SQL Service Broker, NServiceBus, or a chain of simple one-way WCF calls (which is what is currently implemented). So I'm looking for a more reliable and scalable solution for this processing, because the system is only going to get busier as the number of users grows and, with it, the amount of content being created. On top of that, customers are asking for mobile versions.

One idea I'm considering is to queue all immediate user requests via MSMQ and have a dedicated WCF service pass them on to the content-generation module (the web application); the Windows scheduled task would feed the same path. What I can't figure out is how these messaging platforms help when the caller shouldn't have to block waiting for the HTML to be generated. That part really bothers me.
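For what it's worth, here is a minimal sketch of the enqueue side of that idea. The queue path and the PublishRequest type are made up for illustration:

```csharp
using System.Messaging;

// Hypothetical request type; the real payload would carry whatever the
// content-generation module needs.
public class PublishRequest
{
    public int ContentId { get; set; }
}

public static class PublishQueue
{
    const string Path = @".\private$\cms.publish";

    public static void Enqueue(PublishRequest request)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path, transactional: true);

        using (var queue = new MessageQueue(Path))
        {
            // Fire and forget: the caller returns immediately, while a
            // dedicated WCF service (or any other worker) reads the queue
            // and drives the HTML generation asynchronously.
            queue.Send(request, MessageQueueTransactionType.Single);
        }
    }
}
```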

I can't figure out on my own which technology would best suit this. Any help, thoughts, or shared experience would be much appreciated.

+4
3 answers

You can use the saga capability in NServiceBus to model the publishing process, including the scheduling part, since sagas can reliably perform actions after a given delay with virtually no overhead on your side.

If you're happy enough to be queuing user requests in the near future anyway, you could easily use NServiceBus as your API rather than cobbling something together out of MSMQ, WCF, and Windows scheduled tasks.
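A minimal sketch of such a saga, assuming a recent (v6+) NServiceBus API; the message and property names are my own, not from the question, but the shape is standard:

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

public class SchedulePublication : ICommand
{
    public Guid ContentId { get; set; }
    public DateTime PublishAt { get; set; }
}

public class PublishContent : ICommand
{
    public Guid ContentId { get; set; }
}

public class PublicationDue { } // timeout state

public class PublishingSagaData : ContainSagaData
{
    public Guid ContentId { get; set; }
}

public class PublishingSaga : Saga<PublishingSagaData>,
    IAmStartedByMessages<SchedulePublication>,
    IHandleTimeouts<PublicationDue>
{
    protected override void ConfigureHowToFindSaga(
        SagaPropertyMapper<PublishingSagaData> mapper)
    {
        mapper.ConfigureMapping<SchedulePublication>(m => m.ContentId)
              .ToSaga(s => s.ContentId);
    }

    public Task Handle(SchedulePublication message, IMessageHandlerContext context)
    {
        Data.ContentId = message.ContentId;
        // The timeout is persisted by the bus, so it survives restarts.
        return RequestTimeout<PublicationDue>(context, message.PublishAt);
    }

    public Task Timeout(PublicationDue state, IMessageHandlerContext context)
    {
        MarkAsComplete();
        // Kick off the actual (multi-step) publication process.
        return context.Send(new PublishContent { ContentId = Data.ContentId });
    }
}
```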

+3

If all the players are SQL Server instances (or are backed by them), then Service Broker has a big advantage: being integrated into the SQL engine, every operation that would otherwise require a distributed transaction between the database and the message store (that is, every enqueue or dequeue of a message) becomes an ordinary local transaction, because the database is the message store.

It also pays off when you consider backup/restore: you get a consistent backup of both data and messages (a database backup contains all the messages), and you need only one high-availability/disaster-recovery solution, since the database's HA/DR story (clustering, mirroring) is also the messaging HA/DR story. Finally, built-in activation removes the need to configure, and more importantly to monitor and fail over, any external activation mechanism. Not to mention that built-in activation has no latency while still adapting to spikes, something externally scheduled activation struggles with (external jobs have to poll, balancing polling frequency against acceptable delay, and still cope with bursts).
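A minimal sketch of that first point, assuming the Service Broker objects already exist; all names (the Content table, the //Cms/... services, contract, and message type) are illustrative, not from the question:

```csharp
using System;
using System.Data.SqlClient;

public static class PublicationEnqueuer
{
    public static void EnqueuePublication(string connectionString, Guid contentId)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                // 1. Change the business object's status.
                using (var update = new SqlCommand(
                    "UPDATE Content SET Status = 'ReadyToPublish' WHERE Id = @id",
                    conn, tx))
                {
                    update.Parameters.AddWithValue("@id", contentId);
                    update.ExecuteNonQuery();
                }

                // 2. Enqueue the publish request; the queue lives in the same
                //    database, so no distributed transaction is involved.
                using (var send = new SqlCommand(@"
                    DECLARE @h UNIQUEIDENTIFIER;
                    BEGIN DIALOG @h
                        FROM SERVICE [//Cms/PublisherClient]
                        TO SERVICE '//Cms/Publisher'
                        ON CONTRACT [//Cms/PublishContract]
                        WITH ENCRYPTION = OFF;
                    SEND ON CONVERSATION @h
                        MESSAGE TYPE [//Cms/PublishRequest] (@body);",
                    conn, tx))
                {
                    send.Parameters.AddWithValue(
                        "@body", "<publish contentId=\"" + contentId + "\" />");
                    send.ExecuteNonQuery();
                }

                // One plain local commit covers both the data change
                // and the message.
                tx.Commit();
            }
        }
    }
}
```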

+3

It sounds like you definitely need to handle long-running business processes.

Your immediate requests would translate into command messages. Once a command has been processed, you can publish event messages that the next step in your process picks up in order to carry the process forward. That way you don't even need scheduling.

I did something like this:

e-mail → content engine → document conversion → workflow engine → indexing → workflow engine

No scheduling was involved. To get something done you send a command; when the command processing completes, the endpoint publishes an event, and whichever endpoint is subscribed to that event receives a copy and knows how to proceed (see the sketch below). I had a simple table structure tracking the data for each process along with the relevant statuses.
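A sketch of that command-then-event chaining, written with NServiceBus-style handlers purely for illustration (the bus we actually used is Shuttle, mentioned below, but the shape is the same); all message names are made up:

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

public class ConvertDocument : ICommand
{
    public Guid DocumentId { get; set; }
}

public class DocumentConverted : IEvent
{
    public Guid DocumentId { get; set; }
}

public class IndexDocument : ICommand
{
    public Guid DocumentId { get; set; }
}

public class ConvertDocumentHandler : IHandleMessages<ConvertDocument>
{
    public async Task Handle(ConvertDocument message, IMessageHandlerContext context)
    {
        // ... perform the conversion, update the process-tracking table ...

        // Publishing the event is what moves the process along: every
        // endpoint subscribed to DocumentConverted receives a copy.
        await context.Publish(new DocumentConverted { DocumentId = message.DocumentId });
    }
}

public class DocumentConvertedHandler : IHandleMessages<DocumentConverted>
{
    public Task Handle(DocumentConverted message, IMessageHandlerContext context)
    {
        // Next step in the workflow: hand the document to indexing.
        return context.Send(new IndexDocument { DocumentId = message.DocumentId });
    }
}
```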

Some parts of your system may need faster processing than others. When you run into that, you can install your endpoint service more than once and have one instance act as a priority endpoint. An example: at our document-conversion endpoint we had to convert every e-mail item. Since we received thousands per day, they would queue up a bit. That was fine as background work; nobody cared exactly when it happened. On the other hand, users of the indexing web application needed certain documents from the website converted immediately. Sending those to the same conversion endpoint meant the ad-hoc requests had to wait behind thousands of other conversion requests, so there was no responsiveness. The solution was simply to install another instance of the converter (identical binaries) configured with its own queues, and to route all conversion requests coming from the web servers to that ad-hoc endpoint. Problem solved :)

As a side note: even if you go with a plain web service or WCF interface, it may be a good idea to put a service bus endpoint behind it, since the endpoint buys you quite a lot (retries, a poison-message queue, and so on).

Since you seem to be in the evaluation phase, you might want to take a look at my FOSS service bus: http://shuttle.codeplex.com/

It's what we used in the system described above. There is also a separate scheduler component should you need it (although we did not use it on that project).

+3

Source: https://habr.com/ru/post/1447969/

