The space occupied by a file cannot be reclaimed until every link to it disappears, and an open file descriptor counts as a link. So any process that still has the file open will keep it on disk even after it has been removed from the directory: an active tail -f following the file, for example.
If such files need to be deleted in order to free disk space (for example, because they are very large or there are many of them), the processes holding those descriptors will prevent the space from being released and can eventually fill the disk.
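To see the mechanism concretely, here is a minimal sketch in POSIX C (my illustration, not part of the original answer; the path /tmp/demo-big-file is just a placeholder): it writes some data, unlinks the file while the descriptor is still open, and fstat shows the blocks are still allocated until close.

```
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    const char *path = "/tmp/demo-big-file";  /* placeholder path */
    char buf[4096];
    struct stat st;

    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }

    /* Write some data so the file actually occupies blocks. */
    memset(buf, 'x', sizeof buf);
    for (int i = 0; i < 1024; i++)            /* ~4 MiB */
        if (write(fd, buf, sizeof buf) < 0) { perror("write"); break; }

    /* Remove the directory entry; the inode survives because fd is open. */
    unlink(path);

    /* fstat still works on the descriptor and still reports the blocks. */
    if (fstat(fd, &st) == 0)
        printf("after unlink: %lld blocks still allocated\n",
               (long long)st.st_blocks);

    close(fd);  /* only now can the kernel free the space */
    return 0;
}
```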
Edit in response to a comment on another answer:
The symptoms you are reporting are exactly what Adam and I would expect to see in the situation we describe: df reports that 56G of the disk is in use, while du sees only 10G in the directory. The missing 46G is taken up by files that have been deleted from the directory but cannot be physically removed from disk, because some processes still hold them open.
It is easy to experiment with this yourself: find a file system where you can play safely and create a huge file. Then write a C program that opens the file and goes into an endless loop (a minimal sketch of such a program follows the list below). Now do the following:
- Run the program
- Check the df output
- rm the file
- Check the df output again
- Stop your program
- Check the df output once more
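A minimal version of the program from step 1 might look like this (a sketch; the default filename bigfile is just a placeholder for the huge file you created):

```
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* Open the big file (pass its path as the first argument)... */
    const char *path = (argc > 1) ? argv[1] : "bigfile";  /* placeholder */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    printf("holding %s open (pid %d); press Ctrl-C to stop\n",
           path, (int)getpid());

    /* ...and loop forever so the descriptor stays open. */
    for (;;)
        pause();   /* sleep until a signal arrives */
}
```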
You will see that the output of df does not change after the rm, but does change after the program stops (thereby removing the last link to the file).
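If you want to watch the free space from inside a program rather than by eyeballing df, here is a small sketch using POSIX statvfs (again my illustration, not from the original answer); f_bavail times f_frsize is roughly the "available" figure that df prints:

```
#include <stdio.h>
#include <sys/statvfs.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : ".";
    struct statvfs vfs;

    if (statvfs(path, &vfs) != 0) { perror("statvfs"); return 1; }

    /* Available space as df computes it: fragment size * free blocks
       available to unprivileged users. */
    unsigned long long avail =
        (unsigned long long)vfs.f_frsize * vfs.f_bavail;
    printf("%s: %llu bytes available\n", path, avail);
    return 0;
}
```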
If you need even more evidence that this is what is happening, you can get information from the /proc file system, if you have it. In particular, find the PID of one of the tail -f processes (or of any other process you suspect), and look in the /proc/<pid>/fd directory to see all the files it has open.
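On Linux you could also enumerate those descriptors programmatically. A rough sketch (assuming a Linux-style /proc, which, as noted below, I cannot verify here) using opendir and readlink; deleted-but-open files typically show a "(deleted)" suffix in the link target:

```
#include <stdio.h>
#include <dirent.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char dirpath[64], linkpath[128], target[4096];

    if (argc < 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }
    snprintf(dirpath, sizeof dirpath, "/proc/%s/fd", argv[1]);

    DIR *dir = opendir(dirpath);
    if (!dir) { perror("opendir"); return 1; }

    /* Each entry is a symlink named after the fd; readlink reveals
       the file it points to. */
    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        if (ent->d_name[0] == '.')
            continue;
        snprintf(linkpath, sizeof linkpath, "%s/%s", dirpath, ent->d_name);
        ssize_t n = readlink(linkpath, target, sizeof target - 1);
        if (n >= 0) {
            target[n] = '\0';
            printf("fd %s -> %s\n", ent->d_name, target);
        }
    }
    closedir(dir);
    return 0;
}
```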
(I do not have *nix at home, so I cannot verify that you will see /proc/<pid>/fd in this situation.)