
Linux: Find Out How Many File Descriptors Are Being Used

While administering a box, you may want to find out what a process is doing and how many file descriptors (fd) it has open. You will be surprised to find that a process opens all sorts of files:
=> Actual log file

=> /dev files

=> UNIX Sockets

=> Network sockets

=> Library files /lib /lib64

=> Executables and other programs, etc.
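
You can see this mix for yourself by resolving every descriptor a process holds open. A minimal sketch using /proc (Linux-specific; $$ is the current shell's PID, used here only as an example):

```shell
# Resolve each open descriptor of a process to its target and
# tally the results: regular files, /dev nodes, sockets, pipes, ...
pid=$$    # the current shell; substitute any PID you can read
for fd in /proc/"$pid"/fd/*; do
    readlink "$fd"
done | sort | uniq -c | sort -rn
```

On a typical shell this prints entries such as /dev/pts/0 or socket:[12345], matching the categories above.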

In this quick post, I will explain how to count how many file descriptors are currently in use on your Linux server.

Step # 1 Find Out PID

To find out the PID of the mysqld process, enter:
# ps aux | grep mysqld
# pidof mysqld
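
Note that pidof can print several PIDs when a service runs multiple processes. A small sketch that loops over all of them (mysqld is just the example name from above):

```shell
# Iterate over every PID reported for the named process.
# If no such process is running, the loop body never executes.
for pid in $(pidof mysqld); do
    echo "mysqld is running with PID $pid"
done
```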


Step # 2 List Files Opened By PID # 28290

Use the lsof command or the /proc/$PID/ file system to display open fds (file descriptors). Run:
# lsof -p 28290
# lsof -a -p 28290

# cd /proc/28290/fd
# ls -l | less

To count the open files, enter:
# ls -1 | wc -l

(Plain ls -1 is used rather than ls -l, whose "total" header line would inflate the count by one.)
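
The two steps above can be wrapped in a tiny shell function (a sketch; the name count_fds is my own, not a standard tool):

```shell
# Count open descriptors for a PID by counting entries in /proc/PID/fd.
# Plain `ls` is used because `ls -l` adds a "total" header line.
count_fds() {
    ls /proc/"$1"/fd 2>/dev/null | wc -l
}

count_fds $$   # e.g. the current shell
```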

Tip: Count All Open File Handles

To count the number of open file handles of any sort, type the following command:
# lsof | wc -l

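Building on this, you can rank processes by how many entries lsof reports for each. A sketch (keep in mind that lsof also lists memory-mapped libraries and counts handles shared across forked processes more than once, so these figures are upper bounds):

```shell
# Tally lsof lines per PID (field 2), then show the top consumers.
lsof | awk 'NR > 1 { count[$2]++ } END { for (p in count) print count[p], p }' \
    | sort -rn | head
```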

List File Descriptors in Kernel Memory

Type the following command:
# sysctl fs.file-nr
Sample outputs:

fs.file-nr = 1020	0	70000


  1. 1020 : The number of allocated file handles.
  2. 0 : The number of allocated but unused file handles.
  3. 70000 : The system-wide maximum number of file handles.
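
The same three fields can be read directly from /proc in a script. A sketch:

```shell
# fs.file-nr is backed by /proc/sys/fs/file-nr: three whitespace-separated
# numbers (allocated, unused, maximum). `read` splits them for us.
read allocated unused max < /proc/sys/fs/file-nr
echo "file handles in use: $((allocated - unused)) of $max"
```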

You can use the following command to view or set the system-wide maximum number of file handles:
# sysctl fs.file-max
Sample outputs:

fs.file-max = 70000
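
If you need to raise the limit, something along these lines works on most distributions (requires root; the value 100000 is only an example, pick one suited to your workload):

```shell
# Raise the ceiling for the running kernel (takes effect immediately).
sysctl -w fs.file-max=100000

# Persist the setting across reboots.
echo 'fs.file-max = 100000' >> /etc/sysctl.conf
```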

See how to set the system-wide maximum number of file handles under Linux for more information.

More about /proc/PID/ & the procfs File System

/proc (or procfs) is a pseudo-file system generated dynamically by the kernel. It is used to access kernel and process information. procfs is also used by Solaris, BSD, AIX and other UNIX-like operating systems. Now you know how many file descriptors are being used by a process. You will find more interesting stuff in the /proc/$PID/ directory:

  • /proc/PID/cmdline : process arguments
  • /proc/PID/cwd : process current working directory (symlink)
  • /proc/PID/exe : path to actual process executable file (symlink)
  • /proc/PID/environ : environment used by process
  • /proc/PID/root : the root path as seen by the process. For most processes this will be a link to / unless the process is running in a chroot jail.
  • /proc/PID/status : basic information about a process including its run state and memory usage.
  • /proc/PID/task : hard links to any tasks that have been started by this (the parent) process.
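
A quick way to peek at several of these entries at once for the current shell (a sketch; cmdline is NUL-separated on disk, hence the tr):

```shell
pid=$$                                     # the current shell, as an example
tr '\0' ' ' < /proc/"$pid"/cmdline; echo   # process arguments
readlink /proc/"$pid"/cwd                  # current working directory
readlink /proc/"$pid"/exe                  # path to the executable
grep '^State' /proc/"$pid"/status          # run state from the status file
```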

See also: /proc related FAQ/Tips

/proc is an essential file system for sysadmin work. Browse through our previous articles to get more information about the /proc file system.


14 comments

  • raj August 21, 2007, 8:35 pm

    hey thanks for quick n dirty procfs tutorial :)

  • Bogdan March 28, 2008, 4:00 pm

    thx, that was really useful! Quick question: does anybody have an idea why the value displayed by ‘lsof -p {procid} | wc -l’ is different from that of ‘ls -l /proc/{procid}/fd | wc -l’? lsof is usually higher…

    • Matt August 25, 2011, 4:22 am

      Because lsof counts the files in /proc/PID/fd and adds the opened shared libs from /proc/PID/maps

  • Jimi February 3, 2009, 7:04 pm

    Thanks man, this was really useful.

  • Julien February 4, 2009, 10:12 pm

    Thank you. I had a daemon that stopped functioning correctly six hours after I started it. Turns out, I had a file descriptor leak. It had never even crossed my mind until I put 2 and 2 together. This confirmed it. It’s all fixed now, thanks to you!

  • kunal February 18, 2009, 8:56 am

    Can’t we use lsof | wc -l for the same?

  • apoc February 22, 2010, 8:26 am

    you can combine lsof with pidof:

    lsof -p `pidof ruby`

  • Matt June 7, 2010, 3:06 pm

    I have files in /proc/PID/fd which should be in a different PID’s fd folder! File descriptor leak! Closed files from another process are duplicating themselves (a random number of duplicates, 5-75 times) into another process’s fd folder! Has anybody seen this before? Solaris 10.

  • SparK August 11, 2010, 7:52 am

    Yeah you can combine them all!
    below one gives all type of descriptor opened by “mysqld” process:

    lsof -p `pidof mysqld` |wc -l

    There is a way to set system level “max open file descriptors” and view current count of “open file descriptors” at system-level;
    I’m unable to recollect :(
    does anyone know it?

    • avajadi August 26, 2010, 2:51 pm

      cat /proc/sys/fs/file-nr shows number of open files in the first column, max allowed open files in third column and 0 in the second column.

      It doesn’t update the first column on a single file handle basis, but seems to increase and decrease in steps

  • Adev February 16, 2013, 8:36 am

    I have a new unused server here where I’m trying to install/use nginx for php for the first time.

    Strange error for unused server?
    Firstly, it seems strange to me that I would get “Too many open files” for a new unused server. ulimit -Hn/-Sn showed 4096/1024, which seemed adequate, while nginx was using only 9/10 according to: ls -l /proc/PID/fd | wc -l

    Anyhow, I followed the instructions and now I get this error:
    2013/02/15 16:30:39 [alert] 4785#0: 1024 worker_connections are not enough
    2013/02/15 16:30:39 [error] 4785#0: *1021 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:, server: localhost, request: “GET /info.php HTTP/1.0”, upstream: “”, host: “”

    I’ve tried increasing the worker_connections to large numbers e.g. 19999 to no avail.

    Any tips?

  • Vincent December 10, 2013, 1:43 pm

    Just a quick remark: if it’s just for counting, like in

    lsof | wc -l

    then it is much faster not to resolve user IDs and network addresses:

    lsof -n -l | wc -l

  • sorin March 6, 2014, 10:01 pm

    I am quite interested about the subject but I am wondering which is the proper way to monitor these. I am considering adding a metric so I need a current value and a maximum one.

    Kinda strange, but I do think that “lsof -n -l | wc -l” is measuring too much: I get ~145,000 on one server and ~10,000 on another one, and these are not under heavy load. After dumping the output I observed it was measuring TCP connections too, which I would prefer not to count here.

  • Kael April 9, 2014, 4:33 pm


    That’s because it is. Apparently, if a process opens a file handle, then forks, each of the processes will have the same file handle to the opened file, but it only counts as one file handle with respect to file handle limits, so lsof ends up double-counting it.

    Try this:
    lsof | grep / | sort -k9 -u | wc -l
    It will do the same thing, but sort on the name column and discard duplicates. It means protocols, pipes, and sockets each count for one, and any duplicate file handles to files are eliminated. It still won’t be accurate, but it will give you a lower bound on how many file handles are being used.

