Is it good practice to use mkdir as a file-based lock on Linux?

I would like to quickly implement some locking in a Linux Perl program; the lock will be shared between several processes.

So I used mkdir as the atomic operation: it returns 1 (true) if it created the directory and 0 (false) if the directory already exists. I remove the directory immediately after the critical section.

Now I have been told that this is not good practice at all (regardless of language). I think it works fine, but I would like to hear your opinion.

edit: to show an example, my code looked something like this:

    while (!mkdir "lock_dir") { sleep 1 }   # wait some time, then retry
    # ... critical section ...
    rmdir "lock_dir";
2 answers

IMHO this is very bad practice. What if the Perl script that created the lock directory is somehow killed during the critical section? Another Perl script waiting for the lock directory to be removed will wait forever, because it will never be deleted by the script that originally created it. For safe locking, use flock() on a lock file (see perldoc -f flock).
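
A minimal sketch of what that can look like in Perl (the lock file name "myapp.lock" is just an illustrative choice):

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    # Open (or create) the lock file and block until we hold an exclusive lock.
    open my $fh, '>>', 'myapp.lock' or die "Cannot open lock file: $!";
    flock($fh, LOCK_EX) or die "Cannot lock: $!";

    # ... critical section ...

    # Closing the handle releases the lock; it is also released if the process dies.
    close $fh;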


This works fine until an unexpected failure (for example, a program crash or a power failure) happens while the directory exists.

After that, the program will never run again, because the lock is held forever (assuming the directory is on a persistent filesystem).

Normally I would use flock with LOCK_EX.

Open the file for reading and writing, creating it if it does not exist. Then take an exclusive lock; if that fails (and you used LOCK_NB), another process already holds the lock.

Once you have the lock, you need to keep the file open.

The advantage of this approach is that if the process dies unexpectedly (for example, it crashes, is killed, or the machine fails), the lock is released automatically.
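
A sketch of that approach, assuming a lock file called "lockfile" (the file name and the handling of the failure case are illustrative):

    use strict;
    use warnings;
    use Fcntl qw(:flock O_RDWR O_CREAT);

    # Open for reading and writing, creating the file if it does not exist.
    sysopen(my $fh, 'lockfile', O_RDWR | O_CREAT) or die "Cannot open lockfile: $!";

    # Try to take an exclusive lock without blocking.
    if (flock($fh, LOCK_EX | LOCK_NB)) {
        # We hold the lock; keep $fh open for as long as we need it.
        # ... critical section ...
        close $fh;   # releases the lock (also released automatically if we die)
    } else {
        # Another process already holds the lock.
        warn "Could not acquire the lock";
    }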

