Nginx: 24: Too Many Open Files Error And Solution

May 6, 2010 · LAST UPDATED May 6, 2010

I'm getting the following error in my nginx server error log file:

2010/04/16 13:24:16 [crit] 21974#0: *3188937 open() "/usr/local/nginx/html/50x.html" failed (24: Too many open files), client: 88.x.y.z, server: example.com, request: "GET /file/images/background.jpg HTTP/1.1", upstream: "http://10.8.4.227:81//file/images/background.jpg", host: "example.com"

How do I fix this problem under CentOS / RHEL / Fedora Linux or UNIX like operating systems?

Linux and UNIX-like systems set soft and hard limits on the number of file handles and open files per process. You can use the ulimit command to view those limits. First, switch to the nginx user:
su - nginx
To see the hard and soft values, issue the commands as follows:
ulimit -Hn
ulimit -Sn
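Note that the shell's own limits can differ from those of an already-running daemon. One way to inspect the limits of the live nginx master process is to read them from /proc (a sketch; the pgrep pattern assumes the master process is named nginx, so adjust it to your setup):

```shell
# Find the oldest (master) nginx process and read its open-files limit
# straight from the kernel's per-process view:
pid=$(pgrep -o nginx)
grep 'Max open files' "/proc/${pid}/limits"
```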

Increase Open FD Limit at Linux OS Level

Your operating system sets limits on how many files the nginx server can open. You can easily fix this problem by setting or increasing the system open file limits under Linux. Edit the file /etc/sysctl.conf, enter:
# vi /etc/sysctl.conf
Append / modify the following line:
fs.file-max = 70000
Save and close the file. Edit /etc/security/limits.conf, enter:
# vi /etc/security/limits.conf
Set the soft and hard limits for the nginx user (or for all users) as follows:

nginx       soft    nofile   10000
nginx       hard    nofile   30000

Save and close the file. Finally, reload the changes with the sysctl command:
# sysctl -p
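To confirm that the kernel-wide limit actually took effect, you can query it back in either of two equivalent ways (the 70000 shown matches the example value above):

```shell
# Both commands read the current kernel-wide file handle limit:
sysctl fs.file-max
cat /proc/sys/fs/file-max
```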

nginx worker_rlimit_nofile Option (Increase Open FD Limit at Nginx Level)

Nginx also comes with the worker_rlimit_nofile directive, which changes the limit on the maximum number of open file descriptors for worker processes. This lets you raise the limit at the process level without restarting the main process. To set the maximum number of file descriptors that can be opened by nginx worker processes, edit the nginx.conf file, enter:
# vi /usr/local/nginx/conf/nginx.conf
Append / edit as follows:

# set open fd limit to 30000
worker_rlimit_nofile 30000;

Save and close the file. Test the configuration and reload the nginx web server, then verify the new limits as the nginx user:
# /usr/local/nginx/sbin/nginx -t && /usr/local/nginx/sbin/nginx -s reload
# su - nginx
$ ulimit -Hn
$ ulimit -Sn

Sample outputs:

30000
10000
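As a rough sizing sketch (the factor of two is an assumption for proxying setups, where each client connection may also open an upstream connection), keep worker_rlimit_nofile comfortably above twice worker_connections:

```shell
# Hypothetical sizing helper: derive a worker_rlimit_nofile value from a
# planned worker_connections setting, with headroom for logs and sockets.
worker_connections=10240
worker_rlimit_nofile=$(( worker_connections * 2 + 512 ))
echo "worker_rlimit_nofile ${worker_rlimit_nofile};"
# → worker_rlimit_nofile 20992;
```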


Comments

1 Bryan Pieper May 6, 2010 at 1:18 pm

On ubuntu with pam.d, you also need to add:

session required pam_limits.so

to the /etc/pam.d/common-session to allow the new limits to take effect. Otherwise, the default will remain 1024.

Reply

2 Solaris May 8, 2010 at 10:37 pm

Is this for Ubuntu server? Only if you use PAM?

Reply

3 Plain White May 6, 2010 at 2:07 pm

In addition to Ubuntu, make sure that the default PAM configuration file (/etc/pam.d/system-auth for Red Hat Enterprise Linux, /etc/pam.d/common-session for SUSE Linux Enterprise Server) has the following entry too:
session required pam_limits.so

Reply

4 Pawel77 June 25, 2010 at 4:04 pm

You might also want to increase rlimit_files for php-fpm if you use one!

vi /usr/local/php-fpm/etc/php-fpm.conf
rlimit_files = 30000

Reply

5 Ryan Pendergast September 19, 2011 at 3:44 pm

None of this worked for me on ubuntu 10.10. What I had to do was modify /etc/default/nginx and put in ULIMIT="-n 4096".

This is because limits.conf is only for PAM, and PAM does not apply to init.d scripts.

See http://ubuntuforums.org/showthread.php?t=824966 for more info.

Note: if you run php-fpm, you’ll also want to look into:
sed -i -e "s/;rlimit_files = .*$/rlimit_files = 4096/g" /etc/php5/fpm/pool.d/www.conf

Reply

6 kevin November 28, 2012 at 6:30 am

worker_rlimit_nofile in nginx is wrong. I want it explained in detail, please…

Reply

7 pdflog January 1, 2013 at 6:00 pm

# sysctl -p
error: “Operation not permitted” setting key “fs.file-max”

Please help

Reply

8 youreright February 16, 2013 at 8:42 am

I have a new unused server here where I’m trying to install/use nginx for php for the first time.

Strange error for unused server?
==
Firstly, it seems strange to me that I would get "Too many open files" for a new unused server. ulimit -Hn/Sn showed 4096/1024, which seemed adequate while nginx was using only 9/10 according to: ls -l /proc//fd | wc -l

Anyhow, I followed the instructions and now I get this error:
==
2013/02/15 16:30:39 [alert] 4785#0: 1024 worker_connections are not enough
2013/02/15 16:30:39 [error] 4785#0: *1021 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /info.php HTTP/1.0", upstream: "http://127.0.0.1:80/info.php", host: "127.0.0.1"

Tried:
==
I’ve tried increasing the worker_connections to large numbers e.g. 19999 to no avail.

Any tips?

Reply
