
Nginx: 24: Too Many Open Files Error And Solution

I'm getting the following error in my nginx server error log file:

2010/04/16 13:24:16 [crit] 21974#0: *3188937 open() "/usr/local/nginx/html/50x.html" failed (24: Too many open files), client: 88.x.y.z, server: example.com, request: "GET /file/images/background.jpg HTTP/1.1", upstream: "", host: "example.com"

How do I fix this problem under CentOS / RHEL / Fedora Linux or UNIX like operating systems?

Linux / UNIX sets soft and hard limits on the number of file handles a process can open. You can use the ulimit command to view those limits. First, switch to the nginx user:
su - nginx
To see the hard and soft values, issue the command as follows:
ulimit -Hn
ulimit -Sn
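Note that ulimit reports the limits of the current shell, not of an already-running process. To see the limits that actually apply to a running process, read /proc/PID/limits. A minimal sketch (Linux only), using the current shell's own PID ($$) as a stand-in for an nginx worker PID:

```shell
# Print the current shell's hard and soft open-file limits
ulimit -Hn
ulimit -Sn

# Read a running process's limits directly from procfs (Linux only).
# $$ is this shell's PID; substitute an nginx worker PID in practice.
grep 'Max open files' /proc/$$/limits
```

The /proc check is what matters when debugging a live server, because a daemon started at boot may have inherited different limits than your login shell.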

Increase Open FD Limit at Linux OS Level

Your operating system sets limits on how many files the nginx server can open. You can fix this problem by increasing the system-wide open file limits under Linux. Edit the file /etc/sysctl.conf, enter:
# vi /etc/sysctl.conf
Append / modify the following line:
fs.file-max = 70000
Save and close the file. Edit /etc/security/limits.conf, enter:
# vi /etc/security/limits.conf
Set the soft and hard limits for the nginx user as follows:

nginx       soft    nofile   10000
nginx       hard    nofile  30000

Save and close the file. Finally, apply the changes with the sysctl command:
# sysctl -p
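To confirm the new limits took effect, read the kernel value back and re-check ulimit in a fresh login session. A minimal sketch (note that /etc/security/limits.conf is applied by PAM at login time, so an existing shell will not see the change):

```shell
# Kernel-wide limit, read back two ways
sysctl fs.file-max
cat /proc/sys/fs/file-max

# Per-user limits for the nginx account; a fresh login session is
# required because limits.conf is applied by PAM at login time
su - nginx -c 'ulimit -Hn; ulimit -Sn'
```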

nginx worker_rlimit_nofile Option (Increase Open FD Limit at Nginx Level)

Nginx also provides the worker_rlimit_nofile directive, which raises this limit at the process level on the fly if the inherited limit is not enough. It sets the maximum number of file descriptors that nginx worker processes can open. Edit the nginx.conf file, enter:
# vi /usr/local/nginx/conf/nginx.conf
Append / edit as follows:

# set open fd limit to 30000
worker_rlimit_nofile 30000;

Save and close the file. Test the configuration and reload the nginx web server, enter:
# /usr/local/nginx/sbin/nginx -t && /usr/local/nginx/sbin/nginx -s reload
# su - nginx
$ ulimit -Hn
$ ulimit -Sn

Sample outputs:

30000
10000
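Note that worker_rlimit_nofile interacts with worker_connections: each worker needs descriptors for its client connections, plus extra for upstream connections, log files, and static files, so keep worker_rlimit_nofile comfortably above worker_connections. A minimal sketch (values are illustrative, not a recommendation):

```nginx
worker_processes  4;
# per-worker cap on open file descriptors
worker_rlimit_nofile 30000;

events {
    # must fit within worker_rlimit_nofile, with headroom left
    # for upstream connections, log files, and static files
    worker_connections 10000;
}
```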

14 comments
  • Bryan Pieper May 6, 2010, 1:18 pm

    On ubuntu with pam.d, you also need to add:

    session required pam_limits.so

    to the /etc/pam.d/common-session to allow the new limits to take effect. Otherwise, the default will remain 1024.

    • Solaris May 8, 2010, 10:37 pm

      This is for Ubuntu server? Only if you use PAM?

  • Plain White May 6, 2010, 2:07 pm

    In addition to Ubuntu, make sure that the default PAM configuration file (/etc/pam.d/system-auth for Red Hat Enterprise Linux, /etc/pam.d/common-session for SUSE Linux Enterprise Server) has the following entry too:
    session required pam_limits.so

  • Pawel77 June 25, 2010, 4:04 pm

    You might also want to increase rlimit_files for php-fpm if you use one!

    vi /usr/local/php-fpm/etc/php-fpm.conf

  • Ryan Pendergast September 19, 2011, 3:44 pm

    None of this worked for me on ubuntu 10.10. What I had to do was modify /etc/default/nginx and put in ULIMIT="-n 4096".

    This is because limits.conf is only for PAM, and PAM does not apply to init.d scripts.

    See http://ubuntuforums.org/showthread.php?t=824966 for more info.

    Note: if you run php-fpm, you’ll also want to look into:
    sed -i -e "s/;rlimit_files = .*$/rlimit_files = 4096/g" /etc/php5/fpm/pool.d/www.conf

  • kevin November 28, 2012, 6:30 am

    worker_rlimit_nofile in nginx is wrong. I want it explained in detail, please…

  • pdflog January 1, 2013, 6:00 pm

    # sysctl -p
    error: “Operation not permitted” setting key “fs.file-max”

    Please help

    • Sam April 23, 2015, 6:36 am

      @pdflog – You can’t change that value because you are on a shared hosting VPS. You need a fully virtualized VPS to change kernel values.

  • youreright February 16, 2013, 8:42 am

    I have a new unused server here where I’m trying to install/use nginx for php for the first time.

    Strange error for unused server?
    Firstly, it seems strange to me that I would get "Too many open files" for a new unused server. ulimit -Hn/Sn showed 4096/1024, which seemed adequate while nginx was using only 9/10 according to: ls -l /proc//fd | wc -l

    Anyhow, I followed the instructions and now I get this error:
    2013/02/15 16:30:39 [alert] 4785#0: 1024 worker_connections are not enough
    2013/02/15 16:30:39 [error] 4785#0: *1021 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:, server: localhost, request: "GET /info.php HTTP/1.0", upstream: "", host: ""

    I’ve tried increasing the worker_connections to large numbers e.g. 19999 to no avail.

    Any tips?

  • Vimal July 20, 2015, 8:34 am

    Hi Friends,

    I have a dedicated server with more than 5-10 websites currently running on it, which serve static files such as JS, images, CSS, etc. There are a large number of static files cached on this server, it handles more than 5000 real-time users at a time, and nginx is installed on it. I have already increased fs.file-max from 70000 to 100000 and then to 150000, but the error keeps coming back after some time or days. The errors are: "[crit] 9159#0: ngx_slab_alloc() failed: no memory in cache keys zone "STATIC"" and *17444213 open() "/var/www/html/cache/0/95/51ba297e64b7adb8bea664393bb11950" failed (23: Too many open files in system), client ….. Nothing changed after increasing the limit.

    What to do? Please help.
    Thanks, Vimal Kumar

  • Joe Wicentowski October 28, 2015, 2:49 pm

    Thanks very much! This article was very helpful. I posted my experience at https://gist.github.com/joewiz/4c39c9d061cf608cb62b. The key steps that differed from the directions here were:

    1. Instead of using su to run ulimit on the nginx account, use ps aux | grep nginx to locate nginx’s process IDs. Then query each process’s file handle limits using cat /proc/pid/limits.
    2. While the directions suggested that nginx -s reload was enough to get nginx to recognize the new settings, not all of nginx's processes received the new setting. Upon closer inspection of /proc/pid/limits, the first worker process still had the original S1024/H4096 limit on file handles. Even nginx -s quit didn't shut nginx down. The solution was to kill nginx with the kill command. After restarting nginx, all of the nginx-user owned processes had the new file limit of S10000/H30000 handles.
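The per-process check described in step 1 above can be sketched as a small loop (assumes pgrep is available; the loop simply does nothing if nginx is not running):

```shell
# For every nginx process, print its open-file limits as the kernel
# actually sees them, rather than trusting ulimit in an unrelated shell
for pid in $(pgrep nginx); do
    echo "== PID $pid =="
    grep 'Max open files' "/proc/$pid/limits"
done
```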

  • ismail December 25, 2015, 10:19 pm

    After updating the file-max number to 4000000 I am not able to ssh into the server now.
    # vim /etc/sysctl.conf
    fs.file-max = 4000000
    I have tried ssh with -v option & what I get is only
    debug1: Exit status 254
    Note: This is the only change I have made.

    • Vivek Gite December 25, 2015, 10:50 pm

      I doubt it is related to fs.file-max settings. Login via console and check server logs.

  • eteo June 12, 2016, 6:44 am

    I performed all the above changes (set to 999999) but I still get the error:

    open() /home/…. failed (24: Too many open files).

    I checked open files with lsof | wc -l; it shows 12000.

    Why?
