In the simplest example, imagine that you serialize into a queue message the action you want to perform, followed by a serialized version of its arguments:
Q: Delete: User 5
which is great, as your Worker Role can read it and act accordingly. This example may not seem like much of a win, since dequeuing the message can take about as long as the deletion itself, but what if you want to run BI over all your users? Your web page or application would hang, waiting on a synchronous request to finish processing. But if you sent this request to the queue:
Q: RunSome2HourProcess: User *
Now you can immediately return a response saying the operation has been queued and carry on, while the server pulling from the queue processes it at its leisure.
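The fire-and-forget pattern above can be sketched in a few lines of Python. This is a minimal illustration, not Azure code: the stdlib `queue.Queue` stands in for an Azure queue, and the `enqueue`, `worker_step`, and handler names are all hypothetical.

```python
import queue

# In-memory stand-in for an Azure queue.
q = queue.Queue()

def enqueue(action, args):
    """Front end: serialize 'Action: Arguments' and return immediately."""
    q.put(f"{action}: {args}")

def worker_step(handlers):
    """One iteration of the worker loop: read a message and dispatch on the action."""
    message = q.get()
    action, _, args = message.partition(": ")
    return handlers[action](args)

# Hypothetical handlers for the two example messages above.
handlers = {
    "Delete": lambda args: f"deleted {args}",
    "RunSome2HourProcess": lambda args: f"started long process for {args}",
}

enqueue("Delete", "User 5")
enqueue("RunSome2HourProcess", "User *")
print(worker_step(handlers))  # deleted User 5
print(worker_step(handlers))  # started long process for User *
```

The key point is that `enqueue` returns as soon as the message is stored; the two-hour process runs later, on whatever machine happens to pull the message.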
It gets even better in scaling scenarios. What if you split that last message? So instead of handling all the users in one message, you did something like:
Q: RunSome2HourProcess: User A*-M*
Q: RunSome2HourProcess: User N*-Z*
Two Azure Worker Role instances will then share the load and process your data in parallel.
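The split-message idea can be sketched like this. Again a simplified stand-in: two threads play the part of two Worker Role instances, and the user list, `process_range`, and the A–M / N–Z split are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy data set standing in for all your users.
users = ["alice", "bob", "mallory", "nina", "zed"]

def process_range(lo, hi):
    """Process every user whose name starts with a letter in [lo, hi]."""
    return [u for u in users if lo <= u[0].upper() <= hi]

# Two messages, each covering half the alphabet, processed in parallel
# by two workers (threads here, Worker Role instances in Azure).
messages = [("A", "M"), ("N", "Z")]
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(lambda m: process_range(*m), messages))
print(results)  # [['alice', 'bob', 'mallory'], ['nina', 'zed']]
```

Each worker only touches its own slice of the user base, so doubling the workers roughly halves the wall-clock time, assuming the work partitions evenly.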
There are also a number of scenarios around retries, workflows, asynchronous communication between multiple roles, etc., but this should give you some reasons why a queue can be a good fit in the right scenarios.
So, in these examples: 1. We put the action and its identifiers/parameters into the queue. The Worker Role reading from the queue will know how to handle it.
You can use a queue between the service tier and the processing tier. You can use a queue between processing tiers. You can use a queue within a single tier. The sky is the limit.
I usually did one queue per operation. So in the example above, I would have a Run2HourProcess queue and write just the parameters into its messages, and I would create a new queue for any other type of process. But queues are super flexible, so it is really up to the developer.
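A sketch of that one-queue-per-operation layout, with the same in-memory stand-ins as before (the queue names and `enqueue` helper are hypothetical):

```python
import queue

# One named queue per operation type; messages carry only the parameters,
# since the queue name itself identifies the action.
queues = {
    "Run2HourProcess": queue.Queue(),
    "DeleteUser": queue.Queue(),
}

def enqueue(operation, params):
    queues[operation].put(params)

enqueue("Run2HourProcess", "User A*-M*")
enqueue("DeleteUser", "5")

# Each worker drains only the queue for the operation it knows how to run,
# so no dispatch logic is needed inside the message itself.
print(queues["Run2HourProcess"].get())  # User A*-M*
```

The trade-off versus a single shared queue is that workers become simpler (no action parsing) at the cost of managing more queues.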
If it is a long-running process (e.g. the thumbnail-generation example in the samples, or the output of slow data processing), you would persist the result somewhere like the database, and the front end can pick it up from there.
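That hand-off can be sketched as follows. A plain dict stands in for the database or blob store, and the `job_id` scheme, `worker_finish`, and `front_end_check` names are assumptions for illustration:

```python
# Shared storage standing in for a database or blob store.
results_store = {}

def worker_finish(job_id, output):
    """Worker: persist the finished result under the job's id."""
    results_store[job_id] = output

def front_end_check(job_id):
    """Front end: poll the store instead of blocking on the worker."""
    return results_store.get(job_id, "still processing")

print(front_end_check("job-42"))          # still processing
worker_finish("job-42", "thumbnail.png")
print(front_end_check("job-42"))          # thumbnail.png
```

The front end never waits on the worker directly; it just checks the store, which is what lets the original request return immediately.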