Python - ensure that only one instance of the script runs at a time

I am writing a Python 2.7 script.
This script runs every night on Linux and starts several processes.

I would like to ensure that this script does not run several times in parallel (basically trying to mimic the Singleton pattern, but at the application level).

Code example

    def main():
        # Before doing anything, I'd like to know whether this
        # script is already activated and alive.
        # If so, error out.

        # do something
        pass

    if __name__ == "__main__":
        main()

Proposed solution

A naive solution would be to create some kind of lock file that acts like a mutex.
The first thing we do is check whether this file exists. If it does, another instance of the script has already created it, and we should error out. When the script finishes, we delete this file.
I assume that this solution will work as long as file system operations are atomic.

Implementation

    import os
    import sys

    lock_file_path = ".lock_script"

    def lock_mutex():
        if os.path.exists(lock_file_path):
            print "Error: script was already activated."
            sys.exit(-1)
        else:
            open(lock_file_path, 'w').close()

    def unlock_mutex():
        assert os.path.exists(lock_file_path)
        os.remove(lock_file_path)

    def main():
        try:
            lock_mutex()
            # do something
            unlock_mutex()
        except:
            unlock_mutex()

    if __name__ == "__main__":
        main()

Problem

How can I make lock_mutex() and unlock_mutex() atomic?

2 answers

I use supervisor ( http://supervisord.org/ ) to run things under Linux. It launches Django, Celeryd, etc., and ensures that they are restarted if they exit unexpectedly.

But it is also possible to set options so that a command is neither started nor restarted automatically: autostart=false, autorestart=false, startsecs=0. I use this for cron jobs.

In cron I put the command "supervisorctl start myscript", which does nothing if myscript is already running under supervisor, and starts it otherwise.
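For reference, a minimal sketch of what such a setup could look like; the program name myscript, the script path /opt/scripts/myscript.py, and the 2 a.m. schedule are all hypothetical:

    ; supervisord program section: registered, but never started or restarted automatically
    [program:myscript]
    command=python /opt/scripts/myscript.py  ; hypothetical path to the nightly script
    autostart=false
    autorestart=false
    startsecs=0

and the corresponding crontab entry:

    # Ask supervisor to start the job every night at 02:00.
    # If it is already running, "supervisorctl start" is a no-op.
    0 2 * * * supervisorctl start myscript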

It works fine, regardless of the language the script is written in.


Since you are using Linux, you can use flock:

    import os
    import fcntl
    import time

    def main():
        # Acquire the program lock; exit if another instance holds it.
        if not prog_lock_acq('singleton.lock'):
            print("another instance is running")
            exit(1)

        print("program is running - press Ctrl+C to stop")
        while True:
            time.sleep(10)

    def prog_lock_acq(lpath):
        fd = None
        try:
            fd = os.open(lpath, os.O_CREAT)
            fcntl.flock(fd, fcntl.LOCK_NB | fcntl.LOCK_EX)
            return True
        except (OSError, IOError):
            if fd:
                os.close(fd)
            return False

    if __name__ == '__main__':
        main()

It does not matter that we leave the file open after prog_lock_acq returns, because the OS closes it automatically when the process exits. In addition, if you do not specify the LOCK_NB flag, the flock call will block until the process currently holding the lock completes. Depending on your use case, this may be useful.
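For illustration, a minimal sketch of that blocking variant, using a hypothetical prog_lock_wait helper: without LOCK_NB, the second instance simply waits until the first one exits and the lock is released.

    import os
    import fcntl

    def prog_lock_wait(lpath):
        # Open (and create if needed) the lock file, then block until the
        # exclusive lock over it can be acquired.
        fd = os.open(lpath, os.O_CREAT)
        fcntl.flock(fd, fcntl.LOCK_EX)  # blocks while another instance holds the lock
        return fd  # keep the descriptor open for the lifetime of the process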

Please note that we do not delete the file on exit. That does not matter: it is the lock, not the existence of the file, that indicates a live process. So even if you kill your process with kill -9, the lock is still released.

However, there is a caveat: if you unlink the lock file while the process is running, the next instance of the process will create a new file, acquire a lock on it without being blocked, and run just fine, breaking our singleton design. You could do something clever with a directory to prevent unlinking, but I'm not sure how robust that would be.

