I have a Python application (a GUI built with PyQt4) that the user runs in multiple instances. The application performs some lengthy tasks (from several hours to several days), so I'm ready to add a monitoring application that will do things such as:
- find all running processes of the monitored application
- get the status of the operations being performed (completed tasks, percentage, error messages, ...)
- possibly send commands to the applications telling them to pause, resume, stop, ...
One tool that would fit the job is RPyC; the only problem is that it seems to work only over TCP sockets, like most of the RPC libraries I found. But that means a number of unnecessary sockets listening on localhost, plus the need for some kind of port-allocation mechanism so that two processes never try to listen on the same port. And then the monitor needs the list of ports, which has to be written down somewhere, or it has to scan for processes listening on TCP ports and try to figure out whether they are instances of the right application or not. Sounds like a mess.
The nicest way to manage the interaction I can think of at the moment is to have one Unix socket per instance, say '/var/run/myapp/myapp-&lt;pid&gt;.sock', and then write a module that does all the dirty work and exposes methods such as listMyApps() and getMyApp(pid): the first returns a list of pids, the second an object that can be used to communicate with that instance.
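A minimal sketch of the discovery half of that module, assuming the per-pid socket naming above (the stale-socket cleanup is just an idea, not existing code):

```python
import os
import re
import socket

SOCKET_DIR = '/var/run/myapp'
SOCKET_RE = re.compile(r'^myapp-(\d+)\.sock$')

def listMyApps():
    """Return the pids of all running instances, derived from their sockets."""
    pids = []
    for name in os.listdir(SOCKET_DIR):
        match = SOCKET_RE.match(name)
        if not match:
            continue
        pid = int(match.group(1))
        try:
            os.kill(pid, 0)  # signal 0: existence check only, nothing is sent
        except OSError:
            # Process is gone; remove the socket it left behind.
            os.unlink(os.path.join(SOCKET_DIR, name))
        else:
            pids.append(pid)
    return pids

def getMyApp(pid):
    """Return a socket connected to the given instance."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(os.path.join(SOCKET_DIR, 'myapp-%d.sock' % pid))
    return sock
```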
Now I'm looking for the best way to achieve this. Is there really nothing already out there that does RPC over Unix sockets? That sounds a little strange to me, but I couldn't find anything that fits.
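I did notice that the standard library's multiprocessing.connection speaks AF_UNIX, so a crude request/response protocol needs no third-party dependency at all. A rough sketch (the command names and the get_status callback are made up):

```python
import os
from multiprocessing.connection import Listener, Client

SOCKET_PATH = '/var/run/myapp/myapp-%d.sock' % os.getpid()

# --- in each application instance (run in a background thread,
#     so it does not block the PyQt4 event loop) ---
def serve(get_status):
    listener = Listener(SOCKET_PATH, family='AF_UNIX')
    while True:
        conn = listener.accept()
        request = conn.recv()            # any picklable object, e.g. ('status',)
        if request[0] == 'status':
            conn.send(get_status())      # reply with a picklable status object
        conn.close()

# --- in the monitor ---
def query(socket_path):
    conn = Client(socket_path, family='AF_UNIX')
    conn.send(('status',))
    reply = conn.recv()
    conn.close()
    return reply
```

But this is obviously not a full RPC layer (no method dispatch, no error propagation), which is why I'd still prefer an existing library.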
Any suggestions?