Scheduling method calls quickly in Python

In one part of my project I need a task scheduling system that lets me delay the execution of a method by a few seconds. This system has thousands of "clients", so using a threading.Timer for every delay is a bad idea: I would quickly hit the OS thread limit. I implemented a system that uses only one thread to manage the timing.

The main idea is to keep a queue of tasks sorted by time (time + func + args + kwargs) and use a single threading.Timer to schedule/cancel the execution of the head of this queue. This scheme works, but the performance does not satisfy me: ~2000 clients scheduling dummy tasks every 10 seconds drive the process to 40% CPU. Looking at the profiler output, I see that all the time is spent constructing new threading.Timer objects, starting them, and, in particular, creating new threads.

I believe there is a better way. Now I am thinking of rewriting LightTimer so that there is one execution thread, controlled by a threading.Event, and several timing threads that will set() the event. For instance:

  • I schedule a task to be called in 10 seconds. The task is added to the queue. Timing thread #1 starts time.sleep(10) before event.set().
  • Then I schedule a task to be called in 11 seconds. The task is added to the queue. Nothing happens with the timing thread; it will see the new task after it wakes up.
  • Then I schedule a task to be called in 5 seconds. The task is added to the queue. Timing thread #2 starts time.sleep(5), because thread #1 is already sleeping for a longer interval.

I hope you got the idea. What do you think about this? Is there a better way? Maybe I can use some Linux system features for an optimal solution?
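A minimal sketch of this plan, so the idea is concrete (LightTimer is the name from the post; the internals shown here are my guess at how it would look):

```python
import heapq
import itertools
import threading
import time

class LightTimer:
    """One executor thread blocked on an Event, plus short-lived
    timing threads that set() the event when a deadline arrives."""

    def __init__(self):
        self._heap = []                     # (deadline, seq, func, args)
        self._seq = itertools.count()       # tie-breaker for equal deadlines
        self._lock = threading.Lock()
        self._event = threading.Event()
        self._next_wakeup = float("inf")    # earliest pending timing thread
        threading.Thread(target=self._executor, daemon=True).start()

    def schedule(self, delay, func, *args):
        deadline = time.time() + delay
        with self._lock:
            heapq.heappush(self._heap, (deadline, next(self._seq), func, args))
            if deadline < self._next_wakeup:
                # no timing thread fires early enough: start a new one
                self._next_wakeup = deadline
                threading.Thread(target=self._timing, args=(delay,),
                                 daemon=True).start()

    def _timing(self, delay):
        time.sleep(delay)
        self._event.set()

    def _executor(self):
        while True:
            self._event.wait()
            self._event.clear()
            with self._lock:
                due = []
                now = time.time()
                while self._heap and self._heap[0][0] <= now:
                    due.append(heapq.heappop(self._heap))
                if self._heap:
                    # re-arm a timing thread for the next pending task
                    self._next_wakeup = self._heap[0][0]
                    threading.Thread(target=self._timing,
                                     args=(max(self._heap[0][0] - now, 0),),
                                     daemon=True).start()
                else:
                    self._next_wakeup = float("inf")
            for _, _, func, args in due:
                func(*args)      # run outside the lock
```

Note that this still creates a short-lived thread per distinct "new earliest" deadline, which is exactly the cost the answers below avoid.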

+3
3 answers

Store each task with an absolute deadline, computed as time.time() plus the requested delay, and keep the tasks in a priority queue; the heapq module makes pushing a new task and popping the earliest one cheap. When the queue is empty, there is simply nothing to wait for.

Then use a single worker thread guarded by a threading.Condition. The worker looks at the earliest deadline and calls wait() with a timeout equal to the time remaining until it (or with no timeout when the queue is empty), and executes whatever is due when it wakes. Whenever a task is scheduled that becomes the new earliest one, call notify() so the worker wakes up, re-reads the head of the queue, and recomputes its timeout.
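A minimal sketch of this design, assuming a heap of (deadline, func, args) tasks and a threading.Condition (the class and method names here are illustrative, not from the post):

```python
import heapq
import threading
import time

class Scheduler:
    """Single worker thread; a heap of tasks ordered by absolute deadline.

    The worker waits on a Condition with a timeout equal to the time
    until the earliest deadline; schedule() notifies it so it can
    re-evaluate when a new task becomes the earliest one.
    """

    def __init__(self):
        self._tasks = []          # heap of (deadline, seq, func, args)
        self._seq = 0             # tie-breaker so funcs are never compared
        self._cond = threading.Condition()
        threading.Thread(target=self._run, daemon=True).start()

    def schedule(self, delay, func, *args):
        with self._cond:
            heapq.heappush(self._tasks,
                           (time.time() + delay, self._seq, func, args))
            self._seq += 1
            self._cond.notify()   # wake the worker to recompute its wait

    def _run(self):
        while True:
            with self._cond:
                while not self._tasks or self._tasks[0][0] > time.time():
                    if self._tasks:
                        # sleep only until the earliest deadline (or a notify)
                        self._cond.wait(self._tasks[0][0] - time.time())
                    else:
                        self._cond.wait()          # nothing pending
                _, _, func, args = heapq.heappop(self._tasks)
            func(*args)           # run outside the lock
```

No threads are ever created after startup: scheduling a task is a heap push plus a notify(), regardless of how many clients there are.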

+2

Have you looked at the sched module in the Python standard library? Running a sched.scheduler on a single dedicated thread (where each "delayed task", when it comes due, just puts the actual work on a queue for worker threads to pick up) should do what you want.
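A small example of the sched module on a dedicated thread (a sketch of the suggestion above, not code from the answer):

```python
import sched
import threading
import time

scheduler = sched.scheduler(time.time, time.sleep)
results = []

# enqueue two tasks out of order; sched keeps them sorted by deadline
scheduler.enter(0.2, 1, results.append, argument=("second",))
scheduler.enter(0.1, 1, results.append, argument=("first",))

# run the queue on a dedicated thread; run() returns when the queue empties
t = threading.Thread(target=scheduler.run)
t.start()
t.join()
```

One caveat: with time.sleep as the delayfunc, a task added from another thread while run() is sleeping will not interrupt the sleep, so a long-running service would want an interruptible delayfunc (e.g. a threading.Event's wait, with set() called on new submissions).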

+2

" "; .

Instead of one thread per timer, keep every pending timer in a single deadline-ordered structure and run one loop that sleeps only until the nearest deadline; one such thread can serve thousands of clients without creating any others.

You can also combine the event-driven and multi-process models by running several processes on a machine and keeping the event-driven logic inside each of them: for example, if a single process can handle 2000 clients, you can still run many such processes (provided the total resources are sufficient) and multiply the throughput, especially on modern multi-core hardware.
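The shard-per-process idea can be sketched with multiprocessing (all names here are illustrative; the worker body stands in for "run an event-driven scheduler for this shard"):

```python
import multiprocessing as mp

def handle_shard(shard, out):
    # in the real system each process would run its own single-threaded,
    # event-driven scheduler for its shard of clients; here we just
    # record which process handled which client
    for client_id in shard:
        out.put((mp.current_process().name, client_id))

def run_sharded(clients, num_workers=4):
    """Split clients round-robin across num_workers processes."""
    ctx = mp.get_context("fork")          # POSIX-only; use "spawn" elsewhere
    out = ctx.Queue()
    shards = [clients[i::num_workers] for i in range(num_workers)]
    procs = [ctx.Process(target=handle_shard, args=(shard, out))
             for shard in shards]
    for p in procs:
        p.start()
    # drain the queue before join() to avoid blocking on buffered items
    handled = [out.get() for _ in clients]
    for p in procs:
        p.join()
    return handled
```

Each process stays single-threaded internally, so the per-timer cost stays low while total throughput scales with the number of cores.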

0

Source: https://habr.com/ru/post/1748844/

