How would you implement a basic event loop?

If you have worked with GUI toolkits, you know that there is an event loop / main loop that is executed after everything else is set up, and that keeps the application alive and responsive to different events. For example, for Qt, you would do this in main():

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);
        // init code
        return app.exec();
    }

In this case, app.exec() is the application's main loop.

The obvious way to implement this type of loop would be:

    void exec()
    {
        while (1)
        {
            process_events();
            // create a thread for each new event (possibly?)
        }
    }

But this pegs the CPU at 100% and is practically useless. Now, how can I implement such an event loop that is responsive without burning CPU time at all?

Answers are welcome in Python and/or C++. Thanks.

Footnote: for the sake of learning, I will implement my own signals/slots, and I will use those to generate custom events (e.g. go_forward_event(steps) ). But if you know how I can use system events manually, I would also like to hear about that.

+47
c++ python event-loop blocking
Mar 18 '09 at 14:12
4 answers

I often wondered the same thing!

The core of a GUI main loop looks like this, in pseudo-code:

    void App::exec()
    {
        for (;;)
        {
            vector<Waitable> waitables;
            waitables.push_back(m_networkSocket);
            waitables.push_back(m_xConnection);
            waitables.push_back(m_globalTimer);
            Waitable* whatHappened = System::waitOnAll(waitables);
            switch (whatHappened)
            {
            case &m_networkSocket:
                readAndDispatchNetworkEvent();
                break;
            case &m_xConnection:
                readAndDispatchGuiEvent();
                break;
            case &m_globalTimer:
                readAndDispatchTimerEvent();
                break;
            }
        }
    }

What is a Waitable? Well, it depends on the system. On UNIX it is called a "file descriptor", and "waitOnAll" is the ::select system call. The so-called vector<Waitable> is an ::fd_set on UNIX, and whatHappened is actually queried via FD_ISSET . The actual waitable handles are acquired in various ways; for example, m_xConnection can be taken from ::XConnectionNumber() . X11 also provides a high-level, portable API for this, ::XNextEvent() , but if you used that, you would not be able to wait on several event sources simultaneously.
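As a concrete illustration, here is a minimal Python sketch of the same dispatch structure using select.select. The two socketpair() connections and the names net_r, gui_r, and handlers are hypothetical stand-ins for the pseudocode's m_networkSocket and m_xConnection, not part of any real toolkit:

```python
import select
import socket

# Two connected socket pairs stand in for the answer's event sources.
net_r, net_w = socket.socketpair()
gui_r, gui_w = socket.socketpair()

# Maps each waitable to its dispatch routine (the switch in the pseudocode).
handlers = {
    net_r: lambda: ("network", net_r.recv(1024)),
    gui_r: lambda: ("gui", gui_r.recv(1024)),
}

gui_w.send(b"expose")  # simulate an incoming GUI event

# One iteration of the loop: block until any source is readable,
# then dispatch to the handler for whichever source woke us up.
ready, _, _ = select.select(list(handlers), [], [])
dispatched = [handlers[s]() for s in ready]
```

A real loop would wrap the last two lines in `for (;;)`; a single iteration is shown so the example terminates.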

How does the blocking work? "waitOnAll" is a system call that tells the OS to put your process on a "sleep list". This means you are not given any CPU time until an event occurs on one of the things you are waiting on. This, in turn, means your process is idle, consuming 0% CPU. When an event occurs, your process reacts to it briefly and then returns to its wait state. GUI applications spend almost all of their time idling like this.
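A small Python experiment makes that blocking behavior visible (the names r, w, and waited are illustrative): with nothing pending, the call sleeps in the kernel for its whole timeout without spinning; with an event pending, the same call returns immediately:

```python
import select
import socket
import time

r, w = socket.socketpair()

# Nothing is readable yet, so select() sleeps for the full timeout;
# the process consumes essentially no CPU while it waits.
start = time.monotonic()
ready_before, _, _ = select.select([r], [], [], 0.2)
waited = time.monotonic() - start

# Once an event is pending, the same call returns at once.
w.send(b"x")
ready_after, _, _ = select.select([r], [], [], 0.2)
```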

What happens to all the CPU cycles while you sleep? It depends. Sometimes another process will have a use for them. If not, your OS will busy-loop the CPU, or put it into a temporary low-power mode, etc.

Please ask for more details!

+61
Mar 18 '09 at 14:29

Python:

You can look at the implementation of the Twisted reactor , which is probably the best implementation of an event loop in Python. Reactors in Twisted are implementations of an interface, and you can specify the type of reactor to run: select, epoll, kqueue (all based on the C API using those system calls); there are also reactors based on the Qt and GTK toolkits.

A simple implementation would be to use select:

    # Echo server that accepts multiple client connections without forking threads.
    import select
    import socket
    import sys

    host = ''
    port = 50000
    backlog = 5
    size = 1024

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))
    server.listen(backlog)
    input = [server, sys.stdin]
    running = 1

    # The event loop
    while running:
        inputready, outputready, exceptready = select.select(input, [], [])
        for s in inputready:
            if s == server:
                # handle the server socket
                client, address = server.accept()
                input.append(client)
            elif s == sys.stdin:
                # handle standard input
                junk = sys.stdin.readline()
                running = 0
            else:
                # handle all other sockets
                data = s.recv(size)
                if data:
                    s.send(data)
                else:
                    s.close()
                    input.remove(s)

    server.close()
+21
Mar 19 '09 at 1:45

Generally, I would do this with a counting semaphore :

  • The semaphore starts at zero.
  • The event loop waits on the semaphore.
  • An event arrives, and the semaphore is incremented.
  • The event handler unblocks, decrements the semaphore, and processes the event.
  • When all events have been processed, the semaphore is back at zero and the event loop blocks again.
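The steps above can be sketched in Python with threading.Semaphore. Everything here (post_event, event_loop, the "quit" sentinel) is a hypothetical illustration of the pattern, not a real API:

```python
import threading
from collections import deque

event_queue = deque()            # shared event list
sem = threading.Semaphore(0)     # counting semaphore, starts at zero
handled = []

def post_event(ev):
    event_queue.append(ev)
    sem.release()                # an event arrived: increment the semaphore

def event_loop():
    while True:
        sem.acquire()            # blocks while the count is zero,
                                 # and decrements it on wake-up
        ev = event_queue.popleft()
        if ev == "quit":
            break
        handled.append(ev)       # process the event

t = threading.Thread(target=event_loop)
t.start()
post_event("go_forward_event(3)")  # the question's hypothetical custom event
post_event("quit")
t.join()
```

Note that sem.acquire() both unblocks the handler and decrements the count, covering two of the bullet points in one call.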

If you don't want to get that complicated, you can just add a sleep() call to your while loop, with a trivially small sleep time. That will cause your message-processing thread to yield its CPU time to other threads. The CPU will no longer be pegged at 100%, but it is still pretty wasteful.
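For comparison, a sketch of that sleep-based polling approach in Python (polling_loop and its stop condition are invented for the demo):

```python
import time
from collections import deque

pending = deque()
processed = []

def polling_loop(max_idle_polls=5):
    """Sleep-based polling: avoids pegging the CPU, but still wasteful."""
    idle = 0
    while idle < max_idle_polls:
        if pending:
            processed.append(pending.popleft())  # process one event
            idle = 0
        else:
            # No events: yield the CPU briefly. The thread still wakes up
            # for nothing on every pass, which is the waste noted above.
            time.sleep(0.01)
            idle += 1

pending.extend(["event-1", "event-2"])
polling_loop()
```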

+11
Mar 18 '09 at 14:23

I would use the simple, lightweight message library ZeroMQ ( http://www.zeromq.org/ ). It is an open-source library (LGPL). It is a very small library; on my server, the whole project builds in about 60 seconds.

ZeroMQ will greatly simplify your event-driven code, and it is also the most efficient solution in terms of performance. Communicating between threads using ZeroMQ is much faster than using semaphores or local UNIX sockets. ZeroMQ is also 100% portable, whereas all the other solutions would tie your code to a specific operating system.

+9
Mar 18 '09 at 16:37


