Linux: Find Out How Many File Descriptors Are Being Used

While administering a box, you may want to find out what a process is doing and how many file descriptors (fds) it has open. You may be surprised to find that a process opens all sorts of files:
=> Actual log files
=> /dev device files
=> UNIX sockets
=> Network sockets
=> Library files (/lib, /lib64)
=> Executables and other programs

In this quick post, I will explain how to count how many file descriptors are currently in use on your Linux server.

Step # 1: Find Out the PID

To find the PID of the mysqld process, enter:
# ps aux | grep mysqld
# pidof mysqld


Step # 2: List Files Opened by PID # 28290

Use the lsof command or the /proc/$PID/ file system to display open fds (file descriptors). Run:
# lsof -p 28290
# lsof -a -p 28290

# cd /proc/28290/fd
# ls -l | less

To count the open files, enter:
# ls -l | wc -l
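The two steps above can be rolled into one small sketch. This is a minimal example, not a production script; it defaults to the current shell's PID ($$) so it runs anywhere, and you can pass any PID as the first argument:

```shell
#!/bin/sh
# Count open file descriptors for a PID via /proc.
# Defaults to the current shell ($$) for demonstration;
# pass any PID as the first argument instead.
pid="${1:-$$}"

# Each entry in /proc/PID/fd is a symlink to an open file,
# socket, pipe, or device, so counting entries counts open fds.
count=$(ls "/proc/$pid/fd" 2>/dev/null | wc -l)
echo "PID $pid has $count open file descriptors"
```

Note that `ls -l | wc -l` as shown above counts one extra line (the `total` header printed by `ls -l`); a plain `ls | wc -l` avoids that.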

Tip: Count All Open File Handles

To count the number of open file handles of any sort, type the following command:
# lsof | wc -l
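If lsof is not installed, a rough equivalent can be put together from /proc alone. Be aware of the caveats: without root you only see your own processes, and descriptors shared after fork() are counted once per process, so the number is only an approximation:

```shell
#!/bin/sh
# Rough system-wide fd count from /proc alone (no lsof needed).
# Without root you will only see fds of your own processes, so
# this undercounts compared to "lsof | wc -l" run as root.
total=0
for fddir in /proc/[0-9]*/fd; do
    n=$(ls "$fddir" 2>/dev/null | wc -l)
    total=$((total + n))
done
echo "Approximate open fds: $total"
```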


List File Descriptors in Kernel Memory

Type the following command:
# sysctl fs.file-nr
Sample outputs:

fs.file-nr = 1020	0	70000


  1. 1020 : The number of allocated file handles.
  2. 0 : The number of allocated but unused file handles.
  3. 70000 : The system-wide maximum number of file handles.
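The three fields can be pulled apart in a script by reading /proc/sys/fs/file-nr directly (the same numbers sysctl prints), e.g. to compute how many handles are actually in use:

```shell
#!/bin/sh
# Split the three fs.file-nr fields into named variables.
# Reading /proc/sys/fs/file-nr avoids parsing sysctl's output.
read allocated unused maximum < /proc/sys/fs/file-nr
echo "allocated=$allocated unused=$unused max=$maximum"
echo "in use: $((allocated - unused))"
```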

You can use the following to find out or set the system-wide maximum number of file handles:
# sysctl fs.file-max
Sample outputs:

fs.file-max = 70000
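To raise the limit you would write a new value with sysctl -w. The snippet below only reads the current value and shows the write as a comment, since changing it requires root; 100000 is just an example value:

```shell
#!/bin/sh
# Read the current system-wide limit straight from /proc
# (the same value "sysctl fs.file-max" reports).
cat /proc/sys/fs/file-max

# To raise it on the running kernel (root required):
#   sysctl -w fs.file-max=100000
# To make the change permanent, add "fs.file-max = 100000" to
# /etc/sysctl.conf and reload it with "sysctl -p".
```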

See how to set the system-wide maximum number of file handles under Linux for more information.

More about /proc/PID/ & the procfs File System

/proc (or procfs) is a pseudo-file system that is generated dynamically by the kernel at run time. It is used to access kernel information. procfs is also used by Solaris, BSD, AIX, and other UNIX-like operating systems. Now you know how many file descriptors are being used by a process. You will find more interesting stuff in the /proc/$PID/ directory:

  • /proc/PID/cmdline : process arguments
  • /proc/PID/cwd : process current working directory (symlink)
  • /proc/PID/exe : path to actual process executable file (symlink)
  • /proc/PID/environ : environment used by process
  • /proc/PID/root : the root path as seen by the process. For most processes this will be a link to / unless the process is running in a chroot jail.
  • /proc/PID/status : basic information about a process including its run state and memory usage.
  • /proc/PID/task : one subdirectory per thread (task) started by this process.
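You can explore these entries safely via /proc/self, which always refers to the process doing the looking (here, the shell running the script):

```shell
#!/bin/sh
# Inspect a few /proc/PID entries using /proc/self (this process).
echo "cmdline: $(tr '\0' ' ' < /proc/self/cmdline)"  # args are NUL-separated
echo "cwd:     $(readlink /proc/self/cwd)"           # symlink to the CWD
echo "exe:     $(readlink /proc/self/exe)"           # symlink to the binary
grep '^Name:' /proc/self/status                      # run state & memory live here too
ls /proc/self/task                                   # one directory per thread
```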

See also: /proc related FAQ/Tips

/proc is an essential file system for sysadmin work. Browse through our previous article to get more information about the /proc file system.


15 comments… add one
  • raj Aug 21, 2007 @ 20:35

    hey thanks for quick n dirty procfs tutorial 🙂

  • Bogdan Mar 28, 2008 @ 16:00

    thx, that was really useful! Quick question: does anybody have an idea why the value displayed by ‘lsof -p {procid} | wc -l’ differs from that of ‘ls -l /proc/{procid}/fd | wc -l’? lsof is usually higher…?

    • Matt Aug 25, 2011 @ 4:22

      Because lsof counts files in /proc//fd and adds the opened shared libs from /proc//maps

  • Jimi Feb 3, 2009 @ 19:04

    Thanks man, this was really useful.

  • Julien Feb 4, 2009 @ 22:12

    Thank you. I had a daemon that stopped functioning correctly six hours after I started it. Turns out, I had a file descriptor leak. It had never even crossed my mind until I put 2 and 2 together. This confirmed it. It’s all fixed now, thanks to you!

  • kunal Feb 18, 2009 @ 8:56

    Can’t we use lsof | wc -l for the same

  • apoc Feb 22, 2010 @ 8:26

    you can combine lsof with pidof:

    lsof -p `pidof ruby`

  • Matt Jun 7, 2010 @ 15:06

    I have files in /proc//fd which should be in a different PID’s fd folder! File descriptor leak! Closed files from another process are duplicating themselves (a random number of duplicates, 5-75 times) into another process’s fd folder! Has anybody seen this before? Solaris 10.

  • SparK Aug 11, 2010 @ 7:52

    Yeah you can combine them all!
    the one below gives all types of descriptors opened by the “mysqld” process:

    lsof -p `pidof mysqld` |wc -l

    There is a way to set system level “max open file descriptors” and view current count of “open file descriptors” at system-level;
    I’m unable to recollect 🙁
    does anyone know it?

    • avajadi Aug 26, 2010 @ 14:51

      cat /proc/sys/fs/file-nr shows number of open files in the first column, max allowed open files in third column and 0 in the second column.

      It doesn’t update the first column on a single file handle basis, but seems to increase and decrease in steps

  • Adev Feb 16, 2013 @ 8:36

    I have a new unused server here where I’m trying to install/use nginx for php for the first time.

    Strange error for unused server?
    Firstly, it seems strange to me that I would get “Too many open files” for a new unused server. ulimit -Hn/Sn showed 4096/1024, which seemed adequate while nginx was using only 9/10 according to: ls -l /proc//fd | wc -l

    Anyhow, I followed the instructions and now I get this error:
    2013/02/15 16:30:39 [alert] 4785#0: 1024 worker_connections are not enough
    2013/02/15 16:30:39 [error] 4785#0: *1021 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:, server: localhost, request: “GET /info.php HTTP/1.0”, upstream: “”, host: “”

    I’ve tried increasing the worker_connections to large numbers e.g. 19999 to no avail.

    Any tips?

  • Vincent Dec 10, 2013 @ 13:43

    Just a quick remark: if it’s just for counting, like in

    lsof | wc -l

    then it is much faster not to resolve user ids and network addresses:

    lsof -n -l | wc -l

  • sorin Mar 6, 2014 @ 22:01

    I am quite interested about the subject but I am wondering which is the proper way to monitor these. I am considering adding a metric so I need a current value and a maximum one.

    Kinda strange, but I do think that “lsof -n -l | wc -l” is measuring too much: I get ~145,000 on one server and ~10,000 on another, and these are not under heavy load. After dumping the output I observed it was measuring TCP connections too, which I would prefer not to count here.

  • Kael Apr 9, 2014 @ 16:33


    That’s because it is. Apparently, if a process opens a file handle, then forks, each of the processes will have the same file handle to the opened file, but it only counts as one file handle with respect to file handle limits, so lsof ends up double-counting it.

    Try this:
    lsof | grep / | sort -k9 -u | wc -l
    It will do the same thing, but sort on the name column, and discard duplicates. It means each of protocols, pipes, and sockets counts for one, and any duplicate file handles to files are eliminated. It still won’t be accurate, but it will give you a lower bound on how many file handles are being used.

  • DavetheMan Apr 28, 2016 @ 18:02

    Thanks so much! This page is so useful to find lsof for a particular process!
