RabbitMQ security design: declaring queues from the server (for use by clients)

I have a test application (my first with RabbitMQ) that runs with partially trusted clients (in that I do not want them to be able to create queues on their own), so I am looking at the permission settings for queues and the credentials that clients connect with.

Message exchange is mostly one-way broadcasts from the server to the clients; occasionally the server makes a request to a specific client, whose replies are sent on a reply queue dedicated to that client, which the server listens on.

Currently, I have a receive handler on the server that watches the announcement broadcast from clients:

    agentAnnounceListener.Received += (model, ea) =>
    {
        var body = ea.Body;
        var props = ea.BasicProperties;
        var message = Encoding.UTF8.GetString(body);
        Console.WriteLine("[{0}] from: {1}. body: {2}",
            DateTimeOffset.FromUnixTimeMilliseconds(props.Timestamp.UnixTime).Date,
            props.ReplyTo, message);
        // create return replyTo queue, snipped in next code section
    };

I want to declare the return (replyTo) queue in the above handler:

    var result = channel.QueueDeclare(
        queue: ea.BasicProperties.ReplyTo,
        durable: false,
        exclusive: false,
        autoDelete: false,
        arguments: null);

As an alternative, I could store the received announcements in a database, iterate over that list on a regular timer, and declare a queue for each one on every pass.

In both scenarios, the newly declared queue will later be used by the server to send requests to the client.
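A sketch of the timer-based alternative might look like the following. This is an assumption about the surrounding code: `channel` is the open `IModel` from the snippets above, and `LoadAnnouncedReplyQueues()` is a hypothetical stand-in for reading the stored announcements from the database.

```csharp
using System.Timers;
using RabbitMQ.Client;

// Periodically re-declare a queue for every announced client.
// QueueDeclare is idempotent: re-declaring an existing queue with
// identical settings is a cheap no-op on the broker.
var timer = new Timer(60_000); // every minute
timer.Elapsed += (s, e) =>
{
    foreach (var replyTo in LoadAnnouncedReplyQueues()) // hypothetical DB read
    {
        channel.QueueDeclare(
            queue: replyTo,
            durable: false,
            exclusive: false,
            autoDelete: false,
            arguments: null);
    }
};
timer.Start();
```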

My questions:

1) Is it better to declare the reply queue on the server when I receive a message from the client, or to do it out-of-band (on a timer)? Are there any performance problems with declaring queues that already exist (there may be thousands of endpoints)?

2) If a client misbehaves, is there a way to kick them (in the receive handler I can measure messages per minute and kick if certain criteria are met)? Are there any other filters that can be placed earlier in the pipeline, before the receive handler, to weed out clients that send too many messages?

3) In the example above, the same old messages keep arriving on every run. How can I clear them?

2 answers

Here are some general architecture / reliability ideas for your scenario. The answers to your 3 specific questions are at the end.

General architecture ideas

I'm not sure that the declare-response-queues-on-server approach provides performance/stability benefits; you would have to benchmark that. I think the simplest topology to achieve what you want is the following:

  • Each client, when it connects, declares an anonymous, exclusive, and/or auto-delete queue. If the clients' network connections are so flaky that holding open a direct connection is undesirable, then something similar to the "web application" approach Alex suggested in the other answer, where clients hit an endpoint that declares an exclusive/auto-delete queue on their behalf, and the connection is closed (automatically deleting the queue) when the client has not consumed messages for long enough. This should only be done if you can't tune the RabbitMQ client on your clients to cope with an unreliable network, or if you can prove that you need the queue-creation rate limiting that the web-application layer provides.
  • Each client queue is bound to a broadcast topic exchange, which the server uses to send either broadcast messages (with a wildcard routing key) or specifically addressed messages (with a routing key that matches only one client's queue name).
  • When the server needs a reply from clients, you can either declare the reply queue before sending the "request" message and encode the reply queue name in the message (basically what you are doing now), or you can build semantics into your clients in which they stop consuming from their broadcast queue for a fixed amount of time before attempting an exclusive (mutex) consume again, publish their replies to their own queue, and trust that the server will consume those replies within the allotted time, after which the server closes its consume and normal broadcast semantics resume. That second approach is much more complicated and probably not worth it.
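The client side of this topology could be sketched roughly as below. The exchange name, routing-key scheme, and `clientId` variable are all illustrative assumptions, not part of the original question.

```csharp
using RabbitMQ.Client;

// Broadcast topic exchange the server publishes to (name is illustrative).
channel.ExchangeDeclare(exchange: "broadcast", type: "topic", durable: true);

// Anonymous (server-named), exclusive, auto-delete queue: it disappears
// automatically when this client's connection goes away.
var queueName = channel.QueueDeclare(
    queue: "",            // empty name lets the broker generate one
    durable: false,
    exclusive: true,
    autoDelete: true,
    arguments: null).QueueName;

// Receive both true broadcasts and messages addressed to this client only.
channel.QueueBind(queueName, "broadcast", routingKey: "all.#");
channel.QueueBind(queueName, "broadcast", routingKey: $"client.{clientId}");
```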

Preventing RabbitMQ client misbehavior

Things that can reduce load on the server and help prevent clients from DoSing your server with RMQ operations include:

  • Setting appropriately low max-length limits on all queues, so the number of messages stored on the server can never exceed a certain multiple of the number of clients.
  • Setting TTLs (expirations) on queues and/or messages, to ensure that stale messages do not accumulate.
  • Rate-limiting specific RabbitMQ operations is quite tricky, but you can rate-limit at the TCP level (using e.g. HAProxy or another router/proxy stack) so that your clients cannot send too much data or open too many connections at once. In my experience (just one data point; when in doubt, benchmark!) RabbitMQ cares less about the number of messages ingested per unit time than about the amount of data and the maximum possible size of each message. Lots of small messages are usually fine; a few huge ones can cause latency spikes. Otherwise, byte-rate limits at the TCP level will probably let you scale such a system very far before you have to re-evaluate.
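The first two bullets can be expressed with RabbitMQ's standard optional queue arguments. A sketch, with illustrative numbers and queue name:

```csharp
using System.Collections.Generic;
using RabbitMQ.Client;

// Cap queue growth with the standard x-max-length and x-message-ttl
// arguments, so stale messages cannot pile up on the broker.
var args = new Dictionary<string, object>
{
    { "x-max-length", 1000 },     // keep at most 1000 messages
    { "x-message-ttl", 60_000 },  // expire messages older than 60 s
};
channel.QueueDeclare(
    queue: "client42_to_server",  // illustrative name
    durable: false,
    exclusive: false,
    autoDelete: false,
    arguments: args);
```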

Specific answers

In light of the above, my answers to your specific questions:

Q: Should you create response queues on the server in response to received messages?

A: Yes, probably . If you are worried about the queue-creation rate that results from this, you can rate-limit it per server instance. It looks like you're using Node, so you should be able to use one of the existing rate-limiting solutions for that platform to have a single queue-creation rate limiter per server instance, which, unless you have many thousands of servers (not clients), should let you reach a very large scale before re-evaluating.
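(The question's snippets are actually C#, so a per-instance limiter there might look like the minimal fixed-window sketch below. This is not a production rate limiter, just the shape of the idea.)

```csharp
using System;

// Allow at most N queue declares per second, per server process.
class DeclareRateLimiter
{
    private readonly int _maxPerSecond;
    private readonly object _gate = new object();
    private long _windowStart;
    private int _count;

    public DeclareRateLimiter(int maxPerSecond) => _maxPerSecond = maxPerSecond;

    public bool TryAcquire()
    {
        lock (_gate)
        {
            long now = Environment.TickCount64 / 1000; // 1-second windows
            if (now != _windowStart) { _windowStart = now; _count = 0; }
            return ++_count <= _maxPerSecond;
        }
    }
}

// Usage: if (limiter.TryAcquire()) channel.QueueDeclare(...); else defer.
```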

Q: Are there performance implications of declaring queues based on client actions? Or of re-declaring queues?

A: Benchmark and see! Re-declares are probably OK; if you rate-limit properly you may not need to worry about this at all. In my experience, floods of queue-declare events can cause latency to rise a bit, but do not break the server. But that is only my experience! Every scenario/deployment is different, so there is no substitute for benchmarking. In this case, you would run a publisher/consumer pair with a steady stream of messages, tracking e.g. publish/confirm latency or message-receive latency, RabbitMQ resource usage, etc. While a number of publish/consume pairs are running, declare lots of queues in high parallel and see what happens to your metrics. Also in my experience, re-declaring queues (which is idempotent) does not cause significant load spikes; the rate at which new connections/channels are established matters more. You can also rate-limit queue creation per server (see my answer to the first question), so if you implement that correctly I think you won't have to worry about this for a long time. Whether RabbitMQ's performance suffers as a function of the number of queues that exist (as opposed to the declaration rate) would be another thing to benchmark.

Q: Can you kick clients based on misbehavior? Message rates?

A: Yes, although it is a little tricky to set up , it can be done in at least a somewhat elegant way. You have two options:

Option one: what you proposed: monitor message rates on your server, as you are doing, and kick clients from there. This has coordination problems if you have more than one server, and it requires you to write code that lives in your message-receive loop and cannot act until RabbitMQ has already delivered the messages to your server's consumers. Those are significant disadvantages.

Option two: use max-length limits and dead letter exchanges to build a kick-bad-clients agent. Length limits on RabbitMQ queues tell the queue system "if there are more than X messages in this queue, drop them or send them to the dead letter exchange (if one is configured)". Dead letter exchanges let you route messages that exceed the length limit (or match other conditions) to a specific queue/exchange. Here is how you can combine the two to detect clients that publish messages too fast (faster than your server can consume them) and kick them:

  • Each client declares a main $clientID_to_server queue with a max-length of some number, say X , that should never accumulate in the queue unless the client is "outrunning" the server. That queue has a dead letter topic exchange of ratelimit or some other well-known name.
  • Each client also declares/owns a $clientID_overwhelm queue with a max length of 1. That queue is bound to the ratelimit exchange with the routing key $clientID_to_server . This means that when messages are published to $clientID_to_server at too high a rate for the server to keep up, the overflow will be routed to $clientID_overwhelm , but only one message will be kept around (so you don't fill up RabbitMQ, and only ever store X+1 messages per client).
  • You run a simple agent/service that detects (e.g. via the RabbitMQ Management API) all connected client IDs and consumes (using just one connection) from all of the *_overwhelm queues. Whenever it receives a message on that connection, it gets the client ID from the message's routing key, and then kicks that client (either by doing something out-of-band in your application; by deleting that client's $clientID_to_server and $clientID_overwhelm queues, thus causing errors the next time the client tries to do anything; or by closing that client's connection to RabbitMQ via the /connections endpoint in the RabbitMQ Management API — this last is pretty intrusive and should only be done if you really need it). This service should be fairly easy to write, since it does not need to coordinate state with any part of your system other than RabbitMQ. You will, however, lose some messages from misbehaving clients with this solution: if you need to keep all of them, remove the max-length limit on the overwhelm queue (and risk filling up RabbitMQ).
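The declarations for this topology could be sketched as follows ("client42" names are illustrative; `x-max-length` and `x-dead-letter-exchange` are the standard RabbitMQ queue arguments — when a queue with a dead letter exchange configured overflows its max-length, the dropped messages are republished to that exchange with their original routing key):

```csharp
using System.Collections.Generic;
using RabbitMQ.Client;

channel.ExchangeDeclare(exchange: "ratelimit", type: "topic", durable: true);

// Main queue: holds at most X messages; overflow is dead-lettered
// to the "ratelimit" exchange, keyed by this queue's name.
const int X = 100;
channel.QueueDeclare(
    queue: "client42_to_server",
    durable: false, exclusive: false, autoDelete: false,
    arguments: new Dictionary<string, object>
    {
        { "x-max-length", X },
        { "x-dead-letter-exchange", "ratelimit" },
    });

// Overwhelm queue: keeps a single overflow message as a
// "this client is spamming" signal for the kicker agent.
channel.QueueDeclare(
    queue: "client42_overwhelm",
    durable: false, exclusive: false, autoDelete: false,
    arguments: new Dictionary<string, object> { { "x-max-length", 1 } });
channel.QueueBind("client42_overwhelm", "ratelimit",
    routingKey: "client42_to_server");
```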

Using this approach, you can detect spammy clients as they occur according to RabbitMQ, not just as they occur according to your server. You could extend it by adding a per-message TTL to messages sent by clients, triggering the dead-letter behavior when messages sit in a queue for more than a certain amount of time — that would change the pseudo-rate-limiting from "when the server's consumer falls behind by message count" to "when the server's consumer falls behind by message delivery timestamp".

Q: Why are messages redelivered on every run, and how can I get rid of them?

A: Use acks or noack (but probably acks). Receiving a message in "receive mode" just gets it to your consumer; it does not pop it from the queue. It is like a database transaction: to finally pop it, you have to acknowledge it after you receive it. Alternatively, you could start your consumer in "noack" mode, which will make receives behave the way you expected. Be warned, though: noack mode comes with a big tradeoff. Since RabbitMQ delivers messages to your consumer out-of-band (basically: even if your server is locked up or asleep, if it has issued a consume , rabbit pushes messages at it), if you consume in noack mode those messages are permanently removed from RabbitMQ the moment it pushes them to the server, so if the server crashes or shuts down before draining its "local queue" of pending messages, those messages are lost forever. Be careful with this if it is important that you not lose messages.
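Applied to the question's handler, the explicit-ack variant might look like this (queue name is illustrative; `autoAck: false` is the ack mode, and the older-client `byte[]` body from the question is assumed):

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

// With autoAck: false, RabbitMQ keeps each message until we BasicAck it;
// unacked messages are redelivered if the consumer dies.
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
    var message = Encoding.UTF8.GetString(ea.Body);
    Console.WriteLine("processing: {0}", message);

    // Ack only after the message has been fully handled, so a crash
    // mid-processing leads to redelivery rather than message loss.
    channel.BasicAck(deliveryTag: ea.DeliveryTag, multiple: false);
};
channel.BasicConsume(queue: "announce", autoAck: false, consumer: consumer);
```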


I think preventing clients from creating queues just complicates the design without providing any significant security benefit. You allow clients to create messages anyway, and in RabbitMQ it is not so easy to stop clients from flooding your server with messages.

If you want to rate-limit your clients, RabbitMQ may not be the best choice. It applies backpressure automatically when the server starts struggling to process all the messages, but you cannot set a strict per-client message rate limit on the server with an out-of-the-box solution. Besides, clients are usually allowed to create queues.

Approach 1 - Web Application

Maybe you should try a plain web application instead:

  • Clients authenticate with your server
  • To announce, clients send a POST request to a specific endpoint, e.g. /api/announce , perhaps supplying credentials that allow them to do so
  • To receive incoming messages: GET /api/messages
  • To acknowledge a processed message: POST /api/acknowledge

When a client acknowledges a message, you delete it from the database.

With this design you can write your own logic for rate limiting or banning misbehaving clients, and you have full control over your server.
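From the client's side, the flow above could be sketched with HttpClient. The base URL, endpoint paths, JSON shapes, and `token` are all assumptions, not a real API:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

using var http = new HttpClient { BaseAddress = new Uri("https://example.test/") };
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", token); // hypothetical credential

// Announce presence.
await http.PostAsync("api/announce",
    new StringContent("{\"clientId\":\"client42\"}", Encoding.UTF8, "application/json"));

// Poll for incoming messages.
var messagesJson = await http.GetStringAsync("api/messages");

// Acknowledge one processed message by a (hypothetical) id.
await http.PostAsync("api/acknowledge",
    new StringContent("{\"messageId\":123}", Encoding.UTF8, "application/json"));
```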

Approach 2 - RabbitMQ Management API

If you still want to use RabbitMQ, you can achieve what you want with the RabbitMQ Management API.

You would need to write an application that polls the RabbitMQ Management API on a timer and:

Gets all current connections and checks the message rate for each of them.

If the message rate exceeds a threshold, closes the connection or revokes the user's permissions using the /api/permissions/vhost/user endpoint.
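A sketch of such a poller against the Management API (default port 15672; the 100 msg/s threshold, credentials, and the exact fields read from `message_stats` are illustrative assumptions):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

using var http = new HttpClient { BaseAddress = new Uri("http://localhost:15672/") };
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
    "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes("guest:guest")));

// List all connections and their stats.
var json = await http.GetStringAsync("api/connections");
foreach (var conn in JsonDocument.Parse(json).RootElement.EnumerateArray())
{
    // Publish rate in messages/second, if the broker reports it.
    double rate = conn.TryGetProperty("message_stats", out var stats)
               && stats.TryGetProperty("publish_details", out var pd)
        ? pd.GetProperty("rate").GetDouble() : 0.0;

    if (rate > 100.0) // illustrative threshold
    {
        var name = conn.GetProperty("name").GetString();
        await http.DeleteAsync("api/connections/" + Uri.EscapeDataString(name));
    }
}
```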

In my opinion, the web application can be simpler if you don't need the full feature set of queues, such as work queues or complex routing, which you get out of the box with RabbitMQ.


Source: https://habr.com/ru/post/1274684/

