Nginx: 24: Too Many Open Files Error And Solution

I'm getting the following error in my nginx server error log file:

2010/04/16 13:24:16 [crit] 21974#0: *3188937 open() "/usr/local/nginx/html/50x.html" failed (24: Too many open files), client: 88.x.y.z, server:, request: "GET /file/images/background.jpg HTTP/1.1", upstream: "", host: ""

How do I fix this problem under CentOS / RHEL / Fedora Linux or UNIX like operating systems?

Linux and UNIX-like operating systems set soft and hard limits on the number of file handles a process may open. You can use the ulimit command to view those limits. First, switch to the nginx user:
su - nginx
To see the hard and soft values, issue the command as follows:
ulimit -Hn
ulimit -Sn
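To see how close a process is to its soft limit, you can also compare its open descriptor count against ulimit. This is a sketch for the current shell; substituting an nginx worker PID in place of self under /proc lets you diagnose the error above for nginx itself:

```shell
# Count file descriptors currently open by this shell (/proc/self);
# replace "self" with an nginx worker PID to inspect nginx instead.
ls /proc/self/fd | wc -l
# The soft limit that count is measured against:
ulimit -Sn
```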

Increase Open FD Limit at Linux OS Level

Your operating system sets limits on how many files the nginx server can open. You can fix this problem by raising the system-wide open file limits under Linux. Edit the file /etc/sysctl.conf, enter:
# vi /etc/sysctl.conf
Append / modify the following line:
fs.file-max = 70000
Save and close the file. Edit /etc/security/limits.conf, enter:
# vi /etc/security/limits.conf
Set soft and hard limit for all users or nginx user as follows:

nginx       soft    nofile   10000
nginx       hard    nofile   30000

Save and close the file. Finally, reload the changes with sysctl command:
# sysctl -p
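To confirm the new kernel-wide limit took effect, you can read it back from /proc along with the kernel's running count of allocated file handles (a sketch; the numbers shown are whatever your system reports):

```shell
# System-wide maximum number of file handles (should match fs.file-max):
cat /proc/sys/fs/file-max
# Allocated handles, unused handles, and the maximum, in one line:
cat /proc/sys/fs/file-nr
```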

nginx worker_rlimit_nofile Option (Increase Open FD Limit at Nginx Level)

Nginx also provides the worker_rlimit_nofile directive, which changes the limit on the maximum number of open file descriptors (RLIMIT_NOFILE) for worker processes, allowing you to raise the limit at the nginx level without restarting the main process. Edit the nginx.conf file, enter:
# vi /usr/local/nginx/conf/nginx.conf
Append / edit as follows:

# set open fd limit to 30000
worker_rlimit_nofile 30000;

Save and close the file. Test the configuration, reload the nginx web server, and verify the new limits:
# /usr/local/nginx/sbin/nginx -t && /usr/local/nginx/sbin/nginx -s reload
# su - nginx
$ ulimit -Hn
$ ulimit -Sn
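You can also verify the effective limit of any running process directly from /proc, which avoids su entirely. This is a sketch: PID defaults to the current shell here, and pidof nginx is one way (an assumption about your setup) to find a worker PID:

```shell
# Show the effective "Max open files" limit of a process.
# Defaults to the current shell; set PID to an nginx worker PID,
# e.g. PID=$(pidof -s nginx), to confirm worker_rlimit_nofile applied.
PID=${PID:-self}
grep 'Max open files' "/proc/$PID/limits"
```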


15 comments
  • Bryan Pieper May 6, 2010 @ 13:18

    On ubuntu with pam.d, you also need to add:

    session required

    to the /etc/pam.d/common-session to allow the new limits to take effect. Otherwise, the default will remain 1024.
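    The module name in the line above appears to have been stripped during page extraction; assuming the commenter meant the standard pam_limits module, the full entry would read:

```
session required pam_limits.so
```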

    • Solaris May 8, 2010 @ 22:37

      This is for ubuntu server ? Only if you use pam ?

  • Plain White May 6, 2010 @ 14:07

    In addition to Ubuntu, make sure that the default PAM configuration file (/etc/pam.d/system-auth for Red Hat Enterprise Linux, /etc/pam.d/common-session for SUSE Linux Enterprise Server) has the following entry too:
    session required

  • Pawel77 Jun 25, 2010 @ 16:04

    You might also want to increase rlimit_files for php-fpm if you use one!

    vi /usr/local/php-fpm/etc/php-fpm.conf

  • Ryan Pendergast Sep 19, 2011 @ 15:44

    None of this worked for me on ubuntu 10.10. What I had to do was modify /etc/default/nginx and put in ULIMIT="-n 4096".

    This is because limits.conf is only for PAM, and PAM does not apply to init.d scripts.

    See for more info.

    Note: if you run php-fpm, you’ll also want to look into:
    sed -i -e "s/;rlimit_files = .*$/rlimit_files = 4096/g" /etc/php5/fpm/pool.d/www.conf
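    That sed command just sets one key in the php-fpm pool configuration; assuming the Ubuntu php5-fpm layout mentioned above, the resulting line in the pool file is:

```
; /etc/php5/fpm/pool.d/www.conf
rlimit_files = 4096
```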

  • kevin Nov 28, 2012 @ 6:30

    worker_rlimit_nofile in nginx is wrong. I want it explained in detail, please…

  • pdflog Jan 1, 2013 @ 18:00

    # sysctl -p
    error: “Operation not permitted” setting key “fs.file-max”

    Please help

    • Sam Apr 23, 2015 @ 6:36

      @pdflog – You can’t change that value because you are on a shared hosting VPS. You need a fully virtualized VPS to change kernel values.

  • youreright Feb 16, 2013 @ 8:42

    I have a new unused server here where I’m trying to install/use nginx for php for the first time.

    Strange error for unused server?
    Firstly, it seems strange to me that I would get “Too many open files” for a new unused server. ulimit -Hn/Sn showed 4096/1024, which seemed adequate while nginx was using only 9/10 according to: ls -l /proc//fd | wc -l

    Anyhow, I followed the instructions and now I get this error:
    2013/02/15 16:30:39 [alert] 4785#0: 1024 worker_connections are not enough
    2013/02/15 16:30:39 [error] 4785#0: *1021 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:, server: localhost, request: “GET /info.php HTTP/1.0″, upstream: “”, host: “″

    I’ve tried increasing the worker_connections to large numbers e.g. 19999 to no avail.

    Any tips?

  • Vimal Jul 20, 2015 @ 8:34

    Hi Friends,

    I have a dedicated server where 5-10 websites are currently running, serving static files such as js, images, css, etc. There are a large number of static image files on this server, and the server caches images and files. It has more than 5000 real-time users at a time, and nginx is installed on it. I have already increased fs.file-max = 70000 to 100000 and then to 150000, but the error comes back again and again after some time or days. The errors are: ” [crit] 9159#0: ngx_slab_alloc() failed: no memory in cache keys zone “STATIC” ” and *17444213 open() “/var/www/html/cache/0/95/51ba297e64b7adb8bea664393bb11950” failed (23: Too many open files in system), client ….. Nothing is fixed after increasing the limit.

    What to do? Please help.
    Thanks, Vimal Kumar

  • Joe Wicentowski Oct 28, 2015 @ 14:49

    Thanks very much! This article was very helpful. I posted my experience at The key steps that differed from the directions here were:

    1. Instead of using su to run ulimit on the nginx account, use ps aux | grep nginx to locate nginx’s process IDs. Then query each process’s file handle limits using cat /proc/pid/limits.
    2. While the directions suggested that nginx -s reload was enough to get nginx to recognize the new settings, not all of nginx’s processes received the new setting. Upon closer inspection of /proc/pid/limits, the first worker process still had the original S1024/H4096 limit on file handles. Even nginx -s quit didn’t shut nginx down. The solution was to kill nginx with the kill pid. After restarting nginx, all of the nginx-user owned processes had the new file limit of S10000/H30000 handles.

  • ismail Dec 25, 2015 @ 22:19

    After updating file-max number to 4000000 I am not able to ssh server now.
    # vim /etc/sysctl.conf
    fs.file-max = 4000000
    I have tried ssh with -v option & what I get is only
    debug1: Exit status 254
    Note: This is the only change I have made.

    • 🐧 Vivek Gite Dec 25, 2015 @ 22:50

      I doubt it is related to fs.file-max settings. Login via console and check server logs.

  • eteo Jun 12, 2016 @ 6:44

    I performed all of the above changes (set to 999999) but I still get the error:

    open () /home/…. failed (24: too many open files).

    I checked open files with lsof | wc -l, and it shows 12000.

    Why?

  • hman Aug 25, 2017 @ 23:28

    You blessed soul, thank you!!!
