Linux: Find Out How Many File Descriptors Are Being Used

August 21, 2007 · 14 comments · LAST UPDATED July 1, 2011


While administering a box, you may want to find out what a process is doing and how many file descriptors (fd) it is using. You will be surprised to find that a process opens all sorts of files:
=> Actual log file

=> /dev files

=> UNIX Sockets

=> Network sockets

=> Library files /lib /lib64

=> Executables and other programs etc

In this quick post, I will explain how to count how many file descriptors are currently in use on your Linux server system.

Step # 1: Find Out the PID

To find out the PID of the mysqld process, enter:
# ps aux | grep mysqld
# pidof mysqld
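The same lookup can be done with pgrep, which avoids matching the grep process itself. A minimal sketch, using a background sleep as a stand-in for mysqld:

```shell
#!/bin/sh
# Start a dummy long-running process to look up (stand-in for mysqld).
sleep 300 &
target=$!

# pgrep -x matches the process name exactly; -n picks the newest match.
pid=$(pgrep -n -x sleep)
echo "sleep is running as PID $pid"

kill "$target"
```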


Step # 2: List Files Opened by PID # 28290

Use the lsof command or the /proc/$PID/ file system to display open fds (file descriptors). Run:
# lsof -p 28290
# lsof -a -p 28290

# cd /proc/28290/fd
# ls -l | less

To count the open files, enter:
# ls -l | wc -l
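The same count can be taken for any process you can read by listing its /proc entries directly. A small sketch that counts the current shell's own descriptors:

```shell
#!/bin/sh
# Each entry in /proc/PID/fd is a symlink named after one open descriptor,
# so counting the entries counts the open fds. $$ is this shell's PID.
fd_count=$(ls /proc/$$/fd | wc -l)
echo "PID $$ has $fd_count open file descriptors"
```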

Tip: Count All Open File Handles

To count the number of open file handles of any sort, type the following command:
# lsof | wc -l
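If lsof is not installed, a rough equivalent is to sum the fd entries of every process under /proc. A sketch (run as root to see all processes; as a normal user you only see your own):

```shell
#!/bin/sh
total=0
for d in /proc/[0-9]*/fd; do
    # Processes we are not allowed to inspect are silently skipped.
    n=$(ls "$d" 2>/dev/null | wc -l)
    total=$((total + n))
done
echo "approximate open fds system-wide: $total"
```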


List File Descriptors in Kernel Memory

Type the following command:
# sysctl fs.file-nr
Sample outputs:

fs.file-nr = 1020	0	70000


  1. 1020: The number of allocated file handles.
  2. 0: The number of unused-but-allocated file handles.
  3. 70000: The system-wide maximum number of file handles.
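The same three fields can be read straight from /proc and turned into a quick usage check. A minimal sketch:

```shell
#!/bin/sh
# fs.file-nr exposes three fields: allocated, unused, and the maximum.
read allocated unused max < /proc/sys/fs/file-nr
in_use=$((allocated - unused))
echo "file handles in use: $in_use of $max (allocated: $allocated)"
```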

You can use the following command to find out the system-wide maximum number of file handles:
# sysctl fs.file-max
Sample outputs:

fs.file-max = 70000

See how to set the system-wide maximum number of file handles under Linux for more information.
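To actually raise the limit, fs.file-max can be written with sysctl -w. A sketch (requires root; 100000 is just an example value, tune it to your workload):

```shell
# Set the new maximum for the running kernel (root only):
sysctl -w fs.file-max=100000

# Persist it across reboots by adding it to /etc/sysctl.conf, then reload:
echo 'fs.file-max = 100000' >> /etc/sysctl.conf
sysctl -p
```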

More About /proc/PID/ & the procfs File System

/proc (or procfs) is a pseudo-file system that is generated dynamically by the kernel at runtime. It is used to access kernel information. procfs is also used by Solaris, BSD, AIX, and other UNIX-like operating systems. Now you know how many file descriptors are being used by a process. You will find more interesting stuff in the /proc/$PID/ directory:

  • /proc/PID/cmdline : process arguments
  • /proc/PID/cwd : process current working directory (symlink)
  • /proc/PID/exe : path to actual process executable file (symlink)
  • /proc/PID/environ : environment used by process
  • /proc/PID/root : the root path as seen by the process. For most processes this will be a link to / unless the process is running in a chroot jail.
  • /proc/PID/status : basic information about a process including its run state and memory usage.
  • /proc/PID/task : hard links to any tasks that have been started by this (the parent) process.
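A few of these entries can be read from a script; for example, inspecting the current shell ($$). A sketch:

```shell
#!/bin/sh
# cmdline is NUL-separated, so translate NULs to spaces for display.
cmdline=$(tr '\0' ' ' < /proc/$$/cmdline)
# cwd and exe are symlinks; readlink resolves them.
cwd=$(readlink /proc/$$/cwd)
# status is plain "Key: value" text, easy to grep.
state=$(grep '^State:' /proc/$$/status)
echo "cmdline: $cmdline"
echo "cwd:     $cwd"
echo "$state"
```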

See also: /proc related FAQ/Tips

/proc is an essential file system for sysadmin work. Browse through our previous articles to get more information about the /proc file system:


{ 14 comments… read them below or add one }

1 raj August 21, 2007 at 8:35 pm

hey thanks for quick n dirty procfs tutorial :)


2 Bogdan March 28, 2008 at 4:00 pm

thx, that was really useful! Quick question: anybody has an idea why the value displayed by ‘lsof -p {procid} | wc -l’ is different from that of ‘ls -l /proc/{procid}/fd | wc -l’? lsof is usually higher…


3 Matt August 25, 2011 at 4:22 am

Because lsof counts the files in /proc/PID/fd and adds the opened shared libs from /proc/PID/maps.


4 Jimi February 3, 2009 at 7:04 pm

Thanks man, this was really useful.


5 Julien February 4, 2009 at 10:12 pm

Thank you. I had a daemon that stopped functioning correctly six hours after I started it. Turns out, I had a file descriptor leak. It had never even crossed my mind until I put 2 and 2 together. This confirmed it. It’s all fixed now, thanks to you!


6 kunal February 18, 2009 at 8:56 am

Can’t we use lsof | wc -l for the same


7 apoc February 22, 2010 at 8:26 am

you can combine lsof with pidof:

lsof -p `pidof ruby`


8 Matt June 7, 2010 at 3:06 pm

I have files in /proc/PID/fd which should be in a different PID’s fd folder! File descriptor leak! Closed files from another process are duplicating themselves (duplicate content, a random 5-75 times) into another process’s fd folder! Has anybody seen this before? Solaris 10.


9 SparK August 11, 2010 at 7:52 am

Yeah you can combine them all!
below one gives all type of descriptor opened by “mysqld” process:

lsof -p `pidof mysqld` |wc -l

There is a way to set system level “max open file descriptors” and view current count of “open file descriptors” at system-level;
I’m unable to recollect :(
does anyone know it?


10 avajadi August 26, 2010 at 2:51 pm

cat /proc/sys/fs/file-nr shows number of open files in the first column, max allowed open files in third column and 0 in the second column.

It doesn’t update the first column on a single file handle basis, but seems to increase and decrease in steps


11 Adev February 16, 2013 at 8:36 am

I have a new unused server here where I’m trying to install/use nginx for php for the first time.

Strange error for unused server?
Firstly, it seems strange to me that I would get “Too many open files” for a new unused server. ulimit -Hn/Sn showed 4096/1024, which seemed adequate while nginx was using only 9/10 according to: ls -l /proc/PID/fd | wc -l

Anyhow, I followed the instructions and now I get this error:
2013/02/15 16:30:39 [alert] 4785#0: 1024 worker_connections are not enough
2013/02/15 16:30:39 [error] 4785#0: *1021 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:, server: localhost, request: “GET /info.php HTTP/1.0″, upstream: “”, host: “”

I’ve tried increasing the worker_connections to large numbers e.g. 19999 to no avail.

Any tips?


12 Vincent December 10, 2013 at 1:43 pm

Just a quick remark: if it’s just for counting, like in

lsof | wc -l

then it is much faster not to resolve user ids and network addresses:

lsof -n -l | wc -l


13 sorin March 6, 2014 at 10:01 pm

I am quite interested about the subject but I am wondering which is the proper way to monitor these. I am considering adding a metric so I need a current value and a maximum one.

Kinda strange, but I do think that “lsof -n -l | wc -l” is measuring too much; I get ~145,000 on one server and ~10,000 on another one, and these are not under heavy load. After dumping the output I observed it measuring TCP connections too, which I would prefer not to count here.


14 Kael April 9, 2014 at 4:33 pm


That’s because it is. Apparently, if a process opens a file handle, then forks, each of the processes will have the same file handle to the opened file, but it only counts as one file handle with respect to file handle limits, so lsof ends up double-counting it.

Try this:
lsof | grep / | sort -k9 -u | wc -l
It will do the same thing, but sort on the name column and discard duplicates. That means protocols, pipes, and sockets each count as one, and any duplicate file handles to files are eliminated. It still won’t be accurate, but it will give you a lower bound on how many file handles are being used.

