Check the open FD limit for this process on Linux

I recently had a Linux process that "leaked" file descriptors: it opened them and did not properly close some of them.

If I could monitor this, I would be able to tell, in advance, that the process was about to reach its limit.

Is there a good Bash or Python way to check the FD utilization rate for a given process on an Ubuntu Linux system?

EDIT:

Now I know how to check how many file descriptors are open; what I need to know is how many file descriptors are allowed for a process. Some systems (e.g. Amazon EC2) do not have the /proc/pid/limits file.

Thanks,

Udi

+48
linux scripting operating-system limit file-descriptor
Aug 31 '09 at 9:39
6 answers

Count the entries in /proc/<pid>/fd/. The hard and soft limits that apply to the process can be found in /proc/<pid>/limits.
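
Putting the two together, here is a minimal bash sketch of the utilization check (1234 is a hypothetical pid; "Max open files" is the relevant row in the limits file):

 pid=1234   # hypothetical pid to monitor
 used=$(ls /proc/$pid/fd 2> /dev/null | wc -l)
 soft=$(awk '/Max open files/ { print $4 }' /proc/$pid/limits)
 echo "pid $pid uses $used of $soft file descriptors"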

+91
Aug 31 '09 at 12:34

The only interfaces the Linux kernel provides for obtaining resource limits are getrlimit() and /proc/PID/limits. getrlimit() can only get the resource limits of the calling process. /proc/PID/limits lets you get the resource limits of any process with the same user ID, and is available on RHEL 5.2, RHEL 4.7, Ubuntu 9.04, and any distribution with a 2.6.24 or later kernel.

If you need to support older Linux systems, you will have to make the process itself call getrlimit(). The easiest way to do that, of course, is to modify the program, or a library it uses. If you run the program yourself, you could use LD_PRELOAD to load your own code into it. If none of those are possible, you can attach to the process with gdb and have it execute the call inside the process. You could also do the same thing yourself using ptrace() to attach to the process, insert the call into its memory, and so on; however, this is very complicated to get right and is not recommended.
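
For illustration, here is a rough sketch of the gdb variant as a batch script; it assumes a 64-bit process with glibc symbols available, and hard-codes RLIMIT_NOFILE's value on Linux (7):

 pid=1234   # hypothetical pid
 gdb -q -batch -p "$pid" \
     -ex 'set $rl = (unsigned long long *) malloc(16)' \
     -ex 'call (int) getrlimit(7, $rl)' \
     -ex 'printf "soft=%llu hard=%llu\n", $rl[0], $rl[1]' \
     -ex 'call (void) free($rl)'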

With appropriate privileges, other ways to do this would involve scanning kernel memory, loading a kernel module, or otherwise modifying the kernel, but I assume those are out of the question.

+31
Sep 06 '09 at 0:45

You could write a script that periodically calls lsof -p {PID} on the given pid.
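
For example, a simple polling loop like the following (a sketch; tail skips lsof's header line, and note that lsof also lists non-FD entries such as cwd, txt, and mem mappings, so the count is approximate):

 pid=1234   # hypothetical pid
 while sleep 5; do
     echo "$(date +%T): $(lsof -p $pid 2> /dev/null | tail -n +2 | wc -l) open files"
 done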

+2
Aug 31 '09 at 9:41

You asked for bash/python methods. Apart from munging through /proc/$pid/fd and the like, ulimit would be the bash approach. For Python, you can use the resource module.

 import resource
 print(resource.getrlimit(resource.RLIMIT_NOFILE))

 $ python test.py
 (1024, 65536)

resource.getrlimit corresponds to the getrlimit call in a C program. The result is a tuple of the current (soft) and maximum (hard) values for the requested resource. In the example above, the current (soft) limit is 1024. These values are the typical defaults on Linux systems.
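
For the bash side mentioned above, ulimit reports the same soft/hard pair, though only for the current shell and the processes it starts:

 $ ulimit -Sn
 1024
 $ ulimit -Hn
 65536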

+2
Sep 05 '09 at 13:02

To see the top 20 processes by open file handle count:

 for x in `ps -eF | awk '{ print $2 }'`; do
     echo `ls /proc/$x/fd 2> /dev/null | wc -l` $x `cat /proc/$x/cmdline 2> /dev/null`
 done | sort -n -r | head -n 20

The output gives the file handle count, pid, and cmdline for each process.

Example output:

 701 1216 /sbin/rsyslogd-n-c5
 169 11835 postgres: spaceuser spaceschema [local] idle
 164 13621 postgres: spaceuser spaceschema [local] idle
 161 13622 postgres: spaceuser spaceschema [local] idle
 161 13618 postgres: spaceuser spaceschema [local] idle
+1
Jun 21 '13 at 14:56

On CentOS 6 and below (anything using GCC 3), you may find that raising the kernel limits does not fix the problem. This is because of the FD_SETSIZE value that is set at compile time and used by GCC. To fix this, you will need to increase the value and then recompile the process.
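
If you want to see which FD_SETSIZE your toolchain compiles in, one way (a sketch; assumes glibc headers and the standalone cpp preprocessor are installed) is to expand the macro directly:

 $ printf '#include <sys/select.h>\nFD_SETSIZE\n' | cpp -P | grep . | tail -n 1
 1024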

In addition, you may find that you are losing file descriptors due to known issues in libpthread, if you use that library. That library was integrated into GCC in GCC 4 / CentOS 7 / RHEL 7, and this seems to fix the threading issues.

0
Mar 29 '17 at 20:05


