Abort a request programmatically

At work we occasionally have requests that take so long to return that by the time they finish, the frontend (nginx) has already dropped the connection, so the user never sees the result (whether good or bad).

The worst part is that the load balancer (haproxy) also drops the connection and then assumes the server is free to handle another request, so while the server is still busy with the old request a new one arrives and competes with it for resources.

Ideally, the instances should only process one request at a time to make the most of the connection pool to the ZEO database, so two requests running at once make the instance even slower; and then one of our monitoring systems rightly restarts Plone altogether because the dummy probe it sends times out.

So, given some piece of logic (perhaps reusing Products.LongRequestLogger, which we already use) that identifies the thread handling a request, is there a way to tell that thread to abort the request?
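
To make it concrete, the sort of mechanism I have in mind would be something along these lines — a purely hypothetical sketch, not an existing Zope/Plone API (abort_request_thread and RequestAborted are names I made up): a watchdog, similar in spirit to Products.LongRequestLogger's monitor thread, finds the ident of a thread that has been working on a request for too long and asks CPython to raise an exception in it.

```python
# Purely hypothetical sketch -- not an existing Zope/Plone API.
import ctypes


class RequestAborted(Exception):
    """Raised inside the worker thread to make it abandon the current request."""


def abort_request_thread(thread_ident):
    """Raise RequestAborted asynchronously in the thread with the given ident.

    Caveat: CPython only delivers the exception between bytecodes, so a thread
    blocked in C code (e.g. waiting on the ZEO socket) will not see it until
    it returns to Python.
    """
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(thread_ident), ctypes.py_object(RequestAborted))
    if res == 0:
        raise ValueError("no such thread: %r" % thread_ident)
    elif res > 1:
        # We hit more than one thread state: undo and bail out.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(thread_ident), None)
        raise SystemError("PyThreadState_SetAsyncExc failed")
```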

1 answer

IMHO it is a bad idea to abort a request manually. You would somehow be interfering with conflict resolution, which IMHO is not good behaviour.

I have several large sites running with around 200 authors who publish/modify 1000–3000 objects per day. Usually the load is spread over the day, so even a longer request finishes within a reasonable time.

In the evenings, for example, long-running requests (30–60s) do succeed; there is no reason to abort them.

In Plone we have some classic long-running requests, such as renaming/moving a large tree, changing permissions, or copying a large number of objects. There the conflicts usually happen somewhere in the catalog and abort the transaction after 3 retries.
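
For context, the retry behaviour I mean is roughly this — a simplified sketch of what Zope's publisher does around each request, not its actual code (handle_request is just a placeholder):

```python
import transaction
from ZODB.POSException import ConflictError

MAX_ATTEMPTS = 3  # simplified; Zope retries ConflictErrors a few times before giving up


def publish_with_retries(handle_request):
    """Run a request, retrying the whole transaction on ConflictError."""
    for _attempt in range(MAX_ATTEMPTS):
        try:
            handle_request()
            transaction.commit()
            return
        except ConflictError:
            transaction.abort()
    # All attempts exhausted: the user ends up with the ConflictError page.
    raise ConflictError()
```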

By aborting a long request you simply take functionality away from Plone. You might instead consider adding a condition to the rename/move/copy actions, so they are no longer offered once a container holds more than, say, 1000 objects.
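
As a rough illustration, such a guard can be as simple as a size check — the helper name and the 1000-object threshold below are just made up for the example:

```python
# Hypothetical guard, usable from an action's condition or a browser view.
MAX_SYNC_OBJECTS = 1000


def bulk_action_allowed(container):
    """Only offer rename/move/copy when the container is small enough to handle synchronously."""
    return len(container.objectIds()) <= MAX_SYNC_OBJECTS
```

Wired into the action itself, the condition boils down to something like `python: len(context.objectIds()) <= 1000`.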

What I have tried / done so far:

  • Make long requests shorter (haha, easy to say, hard to achieve :-)) → for example, check this batch copy/move patch: it no longer recatalogs everything on rename and move; instead it updates only the indexes that are needed. We got a lot out of this (see the sketch after this list).

  • Queues: I have used redis, for example, to queue and handle known long-running actions asynchronously. Of course you need to know in advance which requests are potentially long, but I think you already know that in your environment. You can notify the user by email or a flash message once the request has completed (a bare-bones sketch follows after this list).

  • Keep the catalog as small as possible and delegate everything you can to Solr/Elasticsearch (removing SearchableText alone gives you a lot ...).

  • Hardware: I know it sounds silly, but it is often a quick win. Try to fit at least all the catalog objects in RAM, and invest a few $ in fast CPUs/SSDs (I/O in general). It is not the way I like, but it happens, and in 2016 it can buy you some time to solve the long-request problem properly.
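
To illustrate the copy/move point above: the idea is to touch only the indexes that actually change on a rename/move instead of recataloging the whole subtree. A minimal sketch — the list of indexes is my assumption, not taken from the actual patch:

```python
# Assumption for illustration: only these indexes change on a rename/move.
AFFECTED_INDEXES = ["getId", "path", "allowedRolesAndUsers"]


def reindex_after_move(obj):
    # Partial reindex instead of a full obj.reindexObject() over the subtree.
    obj.reindexObject(idxs=AFFECTED_INDEXES)
```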

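And for the queueing point, a bare-bones version of the idea with redis-py — queue name, job format and the notification step are made up for illustration; in production you would rather use a proper task queue:

```python
import json

import redis

r = redis.Redis()
QUEUE = "plone-long-jobs"  # illustrative queue name


def enqueue_long_action(action, payload):
    """Producer: called from the browser view instead of doing the work inline."""
    r.lpush(QUEUE, json.dumps({"action": action, "payload": payload}))


def worker_loop():
    """Consumer: a separate process that works through the queued actions."""
    while True:
        _key, raw = r.brpop(QUEUE)
        job = json.loads(raw)
        # ... perform job["action"] with job["payload"],
        # then notify the user by e-mail / flash message ...
```
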
Future:

  • You probably saw Jim Fulton's "ZODB" talk at PloneConf 2016. If conflict resolution could be handled on the ZEO client, where you have the actual object and not just its pickled state, conflict resolution might get a lot better (see the toy example below).
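
To make the state-vs-object distinction concrete: today, application-level conflict resolution goes through `_p_resolveConflict`, which only ever sees pickled states (and, with ZEO, runs on the server, where the class may not even be importable). A toy example:

```python
from persistent import Persistent


class Counter(Persistent):
    """Toy example: concurrent increments are merged instead of raising ConflictError."""

    def __init__(self):
        self.value = 0

    def hit(self):
        self.value += 1

    def _p_resolveConflict(self, old_state, saved_state, new_state):
        # Only plain state dictionaries are available here -- never the live object.
        resolved = dict(new_state)
        resolved["value"] = (saved_state["value"] + new_state["value"]
                             - old_state["value"])
        return resolved
```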

Ehhh ... I originally meant this to be just a comment, but I ran over the character limit ;-)

Source: https://habr.com/ru/post/1012300/

