Is there a good trick for the server to handle more requests if I don't need to send any data?

I want to process a large number (> 100 thousand/sec) of POST requests from JavaScript clients with some kind of server application. Only a small fraction of this data will be stored, but I have to process all of it, so I cannot spend all of my server's capacity just on handling requests. All processing should happen in a single server instance, otherwise I would need a database to synchronize between servers, which would be slower by orders of magnitude.

However, I do not need to send any data back to the clients, and they don't even expect any. So far my plan is to create several proxy server instances that buffer the requests and forward them to the main server in large batches.

For example, say I need to process 200k requests/sec and each server can handle 40k. I can split the load between 5 of them. Each of them would then buffer requests and forward them to the main server in batches of 100. That works out to 2k requests/sec on the main server (although each message is 100 times larger, which probably means about 100-200 kB). I could even send them to the main server over UDP to reduce the resources needed (then I only need one socket on the main server, right?).
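A minimal sketch of the batching idea, assuming each proxy simply accumulates request payloads and flushes every 100 (the `forward_batch` callback and class name are illustrative, not part of any real framework):

```python
# Sketch of a batching proxy: accumulate incoming request payloads and
# forward them to the main server in batches of BATCH_SIZE.
# forward_batch is a stand-in for the real network send.

BATCH_SIZE = 100

class BatchingProxy:
    def __init__(self, forward_batch):
        self.forward_batch = forward_batch  # callable taking a list of payloads
        self.buffer = []

    def handle_request(self, payload):
        self.buffer.append(payload)
        if len(self.buffer) >= BATCH_SIZE:
            self.flush()

    def flush(self):
        if self.buffer:
            self.forward_batch(self.buffer)
            self.buffer = []

# Simulate 250 incoming requests: 2 full batches forwarded, 50 left buffered.
batches = []
proxy = BatchingProxy(batches.append)
for i in range(250):
    proxy.handle_request(b"event-%d" % i)
print(len(batches), len(proxy.buffer))  # 2 100-element batches, 50 pending
```

A real proxy would also flush on a timer so a slow trickle of requests doesn't sit in the buffer indefinitely.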

I just can't think of any other way to speed this up. Especially since, as I said, I don't need to send anything back. I have full control over the JavaScript clients, but unfortunately JavaScript cannot send data over UDP, which would probably be the ideal solution for me (I don't care if 0.1% of the data is lost).

Any ideas?


Edit in response to the answers so far.

The problem is not that the server is slow at processing events from the queue or at putting events onto the queue. In fact, I plan to use the disruptor pattern ( http://code.google.com/p/disruptor/ ), which has been shown to handle up to 6 million requests per second.
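For illustration, here is a much-simplified single-producer/single-consumer ring buffer in the spirit of the Disruptor. The real library is lock-free Java with sequence barriers and cache-line padding; this sketch only shows the two ideas it rests on, a pre-allocated ring and monotonically increasing sequence counters:

```python
class RingBuffer:
    """Toy single-producer/single-consumer ring, Disruptor-style."""

    def __init__(self, size):
        # Power-of-two size lets us replace modulo with a bit mask.
        assert size & (size - 1) == 0
        self.mask = size - 1
        self.slots = [None] * size  # pre-allocated, never resized
        self.write_seq = 0          # next sequence to publish
        self.read_seq = 0           # next sequence to consume

    def publish(self, event):
        # The real Disruptor waits on a sequence barrier here;
        # this sketch just refuses to overwrite unread slots.
        if self.write_seq - self.read_seq > self.mask:
            raise BufferError("ring full")
        self.slots[self.write_seq & self.mask] = event
        self.write_seq += 1

    def consume(self):
        if self.read_seq == self.write_seq:
            return None  # nothing published yet
        event = self.slots[self.read_seq & self.mask]
        self.read_seq += 1
        return event

ring = RingBuffer(8)
for i in range(5):
    ring.publish(i)
consumed = [ring.consume() for _ in range(5)]
print(consumed)  # [0, 1, 2, 3, 4]
```

The Disruptor's speed comes from the lock-free sequencing and memory layout, none of which this Python sketch reproduces; it is only meant to show the shape of the pattern.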

The only problem I could potentially have is keeping 100, 200, or 300 thousand sockets open simultaneously, which none of the mainstream servers can handle. I know some custom solutions are possible ( http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-3 ), but I wonder whether there is a way to take advantage of the fact that I don't have to reply to the clients.

(For example, some way to embed part of the data in the initial TCP packet and then handle the TCP packets as if they were UDP. Or some other kind of magic. ;))
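On the "one socket" point above: receiving the proxies' batches over UDP really does need only a single socket on the main server, because there is no per-client connection state. A minimal sketch over loopback (port and payload are illustrative; batch framing inside the datagram is up to you):

```python
import socket

# One UDP socket is enough on the main server: every proxy just sends
# datagrams to the same bound port, no per-connection state is kept.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))   # ephemeral port for the demo
addr = recv_sock.getsockname()

# A proxy sending one batched datagram to the main server.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"batch-of-100-requests", addr)

data, _ = recv_sock.recvfrom(65535)  # large enough for any UDP payload
print(data)  # b'batch-of-100-requests'
send_sock.close()
recv_sock.close()
```

Note the 100-200 kB batches mentioned earlier would exceed the practical UDP payload limit (roughly 64 kB, and much less if you want to avoid IP fragmentation), so batches would need to be split or compressed.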

+4
4 answers

Write a single, fast function (possibly in C) that receives all the requests behind a very fast front-end server (e.g. nginx). The only job of this function is to store the requests in a very fast queue (for example redis, if you have enough RAM).

In another process (or on another server), drain the queue and do the real work, processing the requests one by one.
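The split this answer describes can be sketched with the standard-library `queue` standing in for redis (in production the queue would live in redis so the worker can run on another machine; all names here are illustrative):

```python
import queue
import threading

# Stand-in for the fast queue (redis in the answer): the web-facing
# function only enqueues and returns; a separate worker drains the
# queue and does the real processing.
work_queue = queue.Queue()
processed = []

def enqueue_request(payload):
    # The front-end function's only job: push and return immediately.
    work_queue.put(payload)

def worker():
    while True:
        payload = work_queue.get()
        if payload is None:        # shutdown sentinel for the demo
            break
        processed.append(payload.upper())  # placeholder for real work
        work_queue.task_done()

t = threading.Thread(target=worker)
t.start()
for req in ["a", "b", "c"]:
    enqueue_request(req)
enqueue_request(None)
t.join()
print(processed)  # ['A', 'B', 'C']
```

The point is the decoupling: the enqueue side stays fast and constant-time no matter how slow the processing side is, and the queue absorbs bursts.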

+1

If you have control over the clients, as you say, then your proxy server doesn't even have to be an HTTP server, because you can assume that all requests are valid.

You could implement it as a non-HTTP server that simply sends back a 200, reads the client's request until it disconnects, and then queues the request for processing.
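A sketch of that connection handler, assuming a fixed canned 200 response and a demo over `socketpair` standing in for an accepted connection (handler name and demo payload are illustrative):

```python
import socket

# Canned response sent unconditionally -- the client never needs data back.
CANNED = b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"

def handle_connection(conn):
    # Reply immediately, then keep reading request bytes until the
    # client disconnects; the bytes are handed off for processing.
    conn.sendall(CANNED)
    chunks = []
    while True:
        data = conn.recv(4096)
        if not data:          # client closed its side
            break
        chunks.append(data)
    conn.close()
    return b"".join(chunks)

# Demo: a socket pair stands in for an accepted TCP connection.
client, server = socket.socketpair()
client.sendall(b"POST /track fake-body")
client.shutdown(socket.SHUT_WR)        # client is done sending
received = handle_connection(server)
reply = client.recv(1024)
print(reply.startswith(b"HTTP/1.1 200"), received)
```

Answering before reading is only safe here because the answer carries no information; a real HTTP server could not do this.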

+1

I think you are describing a Message Queue implementation. You will also need something to push these requests onto whatever queue you use (RabbitMQ is good; there are many alternatives).

You will also need another worker that can do whatever processing you actually want on the requests. You haven't made that very clear, so I can't say exactly what would be right for you. Essentially, the idea is that incoming requests are thrown onto the queue as quickly as possible by your web server, and the web server then goes back to serving more requests. When the system has spare resources, it uses them to work through the queue; when it is busy, the queue simply keeps growing.

Not sure which platform you're on, but you might want to look at something like Lighttpd for serving the POSTs. If same-origin restrictions don't bother you, you could run Lighttpd on a subdomain of your application (e.g. post.myapp.com). Otherwise, you can put a proper load balancer in front of the web servers (so that all requests go to www.myapp.com and the load balancer decides whether to forward them to a web server or to the queue processor).

Hope that helps.

0

Consider using MongoDB to store your requests; its fire-and-forget mechanism (unacknowledged writes) can help your servers respond faster.

0

Source: https://habr.com/ru/post/1392372/
