Linux Increase The Maximum Number Of Open Files / File Descriptors (FD)

April 18, 2006 · 55 comments · Last updated May 6, 2010


How do I increase the maximum number of open files under CentOS Linux? How do I open more file descriptors under Linux?

The ulimit command provides control over the resources available to the shell and to processes started by it, on systems that allow such control. The system-wide maximum number of open files is displayed with the following command (log in as the root user).

Command To List Number Of Open File Descriptors

Use the following command to display the system-wide maximum number of open file handles:
cat /proc/sys/fs/file-max
Output:

75000

This is the maximum number of file handles the whole system can have open at once, not a per-user value; per-user limits are controlled with ulimit. To see the current hard and soft values, issue the commands as follows:
# ulimit -Hn
# ulimit -Sn

To see the hard and soft values for the httpd or oracle user, issue the command as follows:
# su - username
In this example, su to the oracle user, enter:
# su - oracle
$ ulimit -Hn
$ ulimit -Sn
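
As a sketch, both values can also be checked in one step without an interactive login (the second form requires root, and `oracle` is only an example account name):

```shell
# Hard and soft open-file limits for the current shell session
ulimit -Hn
ulimit -Sn

# The same check for another account, non-interactively
# (requires root; 'oracle' is only an example username):
# su - oracle -c 'ulimit -Hn; ulimit -Sn'
```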

System-wide File Descriptors (FD) Limits

The number of concurrently open file descriptors allowed throughout the system can be changed via the /etc/sysctl.conf file under Linux operating systems.

The Maximum Number Of Open Files Was Reached, How Do I Fix This Problem?

Many applications such as the Oracle database or the Apache web server need this limit to be quite a bit higher. You can increase the maximum number of open files by setting a new value in the kernel variable /proc/sys/fs/file-max as follows (log in as root):
# sysctl -w fs.file-max=100000
The above command forces the limit to 100000 files. To make the setting survive a reboot, edit the /etc/sysctl.conf file and add the following line:
# vi /etc/sysctl.conf
Append a config directive as follows:
fs.file-max = 100000
Save and close the file. The new limit takes effect after a reboot, or apply it immediately with the following command:
# sysctl -p
Verify your settings with command:
# cat /proc/sys/fs/file-max
OR
# sysctl fs.file-max
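
For reference, /proc/sys/fs/file-nr shows how many file handles are currently in use against that maximum; a minimal sketch:

```shell
# file-nr holds three fields: allocated handles, allocated-but-unused
# handles, and the system-wide maximum (the same value as file-max)
read allocated unused maximum < /proc/sys/fs/file-nr
echo "file handles in use: $((allocated - unused)) of $maximum"
```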

User Level FD Limits

The above procedure sets the system-wide file descriptor (FD) limit. However, you can restrict the httpd user (or any other user) to specific limits by editing the /etc/security/limits.conf file:
# vi /etc/security/limits.conf
Set httpd user soft and hard limits as follows:
httpd soft nofile 4096
httpd hard nofile 10240

Save and close the file. To see limits, enter:
# su - httpd
$ ulimit -Hn
$ ulimit -Sn
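
Note that limits.conf only affects new login sessions; for a process that is already running (an httpd worker, say), you can read its effective limit from /proc. A sketch, shown here against the current shell:

```shell
# Effective open-file limit of a running process; substitute the PID of
# the process you care about for $$ (the current shell's own PID)
grep 'Max open files' /proc/$$/limits
```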


Comments

1 Tachyon October 21, 2006 at 2:35 am

Increasing the file handles is a good tip, but 5000 is very low these days. 200000 is more realistic for any modern system.
Also, there’s no need to logout, just edit the /etc/sysctl.conf and then type ‘sysctl -p’ as root.

Thanks,
Tachyon


2 Maroon Ibrahim June 23, 2007 at 12:25 pm

Does this command work for Debian and does it affect SQUID file descriptor too?
Best regards


3 nixCraft August 15, 2007 at 6:07 pm

Maroon,

Yes it works on Debian and all other Linux systems/distros.


4 Sathish A August 28, 2007 at 11:40 pm

How do I increase it on a Red Hat Linux server? How do I find the location of the sysctl.conf file, or find out in which file the limit has been set?

thanks in advance


5 baka.tom September 11, 2007 at 11:34 pm

I tried this on CentOS (which, by the way, I’ve decided is the worst Linux distribution ever), and it doesn’t seem to work. ulimit -n still says 1024, even after logout, even after reboot.


6 bourne September 13, 2007 at 2:10 pm

/etc/sysctl.conf is good for the system-wide amount, but don’t forget that users also need different limits. See /etc/security/limits.conf (Debian, Red Hat, SuSE all have it, probably most others as well) to assign specific limits on a per-group, per-user, and default basis.


7 jason September 13, 2007 at 8:55 pm

I am running “Red Hat Enterprise Linux ES release 4 (Nahant Update 5)” and followed the instructions above. Like “baka.tom”, I was unable to see the change reflected by typing “ulimit -n”. I don’t know if this is a problem, but it certainly reduces the credibility of this article (unless I screwed up, of course).


8 nixCraft September 14, 2007 at 4:32 am

baka.tom / jason,

The FAQ has been updated for latest kernel. It should work now. Let me know if you have any more problems.

bourne, thanks for pointing out user level or group level filelimit option.

I appreciate all feedback.


9 jackson December 1, 2007 at 1:54 am

Red Hat configuration requires the following line to be added to /etc/pam.d/login for /etc/security/limits.conf to work:

session required pam_limits.so


10 Arstan April 23, 2008 at 2:23 am

I’m trying to set 8192 on Ubuntu 7.10. Adding

* soft nofile 8192
* hard nofile 8192

doesn’t work, but when I change * to a username (let’s say root), it applies.

So how do I change it system-wide?


11 shankar June 19, 2009 at 3:58 pm

You could use the following command to check whether the change is reflected:

# ulimit -n -H

That gives the hard value.


12 anonymous July 4, 2009 at 1:33 pm

Funny, file-max has little to do with the number of open file descriptors :/


13 joseph bloe July 4, 2009 at 1:34 pm

Funny, this has little to do with the number of file descriptors….. it merely reflects the number of open files one may have.

open file != file descriptor :/


14 joseph bloe July 4, 2009 at 1:36 pm

unfortunately, open file != file descriptor. These are two distinct and separate things.

somehow only confusion has been added here.


15 hywl51 October 28, 2009 at 5:44 am

“Use the following command command to display maximum number of open file descriptors:
cat /proc/sys/fs/file-max
Output:

75000

75000 files normal user can have open in single login session. ”

I think 75000 should mean the whole system can support 75000 open files at most , not for per user login.


16 Adam HP February 25, 2010 at 9:18 am

To clear up any confusion for increasing the limit on Red Hat 5.X systems:

# echo "fs.file-max=70000" >> /etc/sysctl.conf
# sysctl -p
# echo "* hard nofile 65536" >> /etc/security/limits.conf
# echo "session required pam_limits.so" >> /etc/pam.d/login
# ulimit -n -H
65536

In summary, set your max file descriptors to a number higher than your hard security ‘nofile’ limit to leave room for the OS to run.


17 Ramesh March 22, 2010 at 2:41 am

Can anyone explain all the attributes in ulimit -a and how it impacts the performance of a system?


18 Thamizhananl P April 29, 2010 at 9:11 am

In Red Hat Enterprise Linux (RHEL4 and RHEL5), after setting the nofile limit we need to make the modification below.
In the /etc/security/limits.conf file add:
root soft nofile 5000
root hard nofile 6000

Edit /etc/pam.d/system-auth and add this entry:
session required /lib/security/$ISA/pam_limits.so

It worked perfectly for me.
After this change, open a new terminal and issue ulimit -a.
There you can see the updated file descriptor value for the root user.


19 Jeoffrey July 9, 2010 at 1:16 am

This is very good. Thanks for the post =)


20 jfd3 August 17, 2010 at 8:14 pm

What is the difference between a hard limit and a soft limit?
Thanks,


21 Francisco September 8, 2010 at 9:15 pm

Is there any equivalent for Mac OS X (Darwin)?


22 Dirk September 11, 2010 at 10:02 am

Hi Francisco,

when I issue “ulimit -n -H” on Mac OS X 10.6, it says “unlimited”. So I guess you don’t have to worry about it.

Dirk


23 JP June 14, 2013 at 4:37 pm

We actually found that to be untrue, or at least not what we think it is. We ran a test using Node.js and there was a limit somewhere in the 240 range. Once we set the ulimit higher we were able to increase the connections.


24 Joel December 14, 2010 at 10:15 am

I’m using Linux (Debian Lenny) on a server. I would like to keep my ulimit -n settings.
The values in /etc/security/limits.conf (soft and hard limits) and in /etc/sysctl.conf have been increased.
/etc/pam.d/login contains the "session required pam_limits.so" line.
I’ve also put the "ulimit -n 50000" command in .bashrc
… and after logout/login and/or ssh, ulimit -n still returns 1024! What tricky settings also need to be changed? These incoherent and over-complicated version-dependent settings really make Linux unusable. I’d rather write code than waste my time on Linux configuration files.


25 Joel December 14, 2010 at 10:18 am

In the end, it suddenly worked, without changing anything more. How much time is needed before it’s taken into account? Strange and unreliable…


26 jee January 2, 2011 at 4:19 pm

I have a problem with "too many open files". I have changed all the parameters, but the problem still exists.

My system runs a web application that uses a dongle; I guess the problem may be caused by the dongle.

Can you help me? Thank you!


27 Terry Antonio January 25, 2011 at 12:22 am

Good Day Mate
I was going to leave you in my will for this, but the mortgage payments might be too high.
Anyway, long story short: I run FreeRADIUS, and on a new server I built it constantly reports "no db handles". If you check the FreeRADIUS forum this is a common problem and is usually met with the curt reply "figure out what’s using all the descriptors".
I had it in my mind that I had encountered this problem many years ago and that it was a matter of increasing the system handles, but I just could not track it down.
From your article I found my MySQL handles were set at 1024; I increased the soft limit to 5000 and the hard limit to 10000 and all is well in paradise again. Your blood’s worth bottling!
Cheers Terry


28 mahuja March 21, 2011 at 10:29 am

>What is the difference between a hard limit and a soft limit?

A user can decrease his own hard limit. Only root can increase his own hard limit.
A user can decrease his soft limit, or increase it up to the hard limit; this is the effective limit.

Say the default is Hard 1000 and soft 500 – that means you can only open 500 files unless you explicitly ask for more by increasing it to 1000. But you can’t get 1001 without root.

>In the end, it suddenly worked, without changing anything more. How much time is needed before it’s taken into account? Strange and unreliable…

The one piece of information you’re missing:

New security settings, also including group memberships, only apply for sessions started after the change was done. Except commands that affect the current session only. A reboot will always clear out all the pre-change sessions that are still running.
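
The soft/hard behaviour described above can be seen in a throwaway subshell (a sketch; it assumes the session’s hard limit is at least 1000):

```shell
# Run in a subshell so the login shell's limits are left untouched
(
  ulimit -Sn 500   # lowering the soft limit is always allowed
  ulimit -Sn       # prints 500
  ulimit -Sn 1000  # raising it again is fine, up to the hard limit
  ulimit -Sn       # prints 1000
)
```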


29 Marco Smith March 23, 2011 at 8:13 am

Hi, Can someone please tell me why the amount of open files increase so rapidly when changing from Single user mode to multi user mode?


30 nixCraft March 23, 2011 at 9:27 am

More users + More process + More background process == More open files


31 Marco Smith March 23, 2011 at 9:32 am

And what if you run multi-user mode from a console, i.e. without X Windows? For instance, what process or processes run that make the number increase SO high… like from 160 to 900 open files?


32 Thamizhannal March 23, 2011 at 10:01 am

Excluding operating system processes, if any application runs in multi-user mode and does not close the files it has opened, it would create this issue. Please make sure that any external application (your own or a deployed one) running in multi-user mode has closed all the files it opened.


33 cgrinds April 25, 2011 at 9:35 pm

If you have cPanel, check /etc/profile.d/limits.sh. Although you shouldn’t change it here it’s possibly the root cause of your changes not sticking.


34 pep May 12, 2011 at 10:51 am

Could you please let me know: when we change the ulimit for root using "ulimit -n", in which configuration file is that reflected?


35 Mukesh September 6, 2011 at 7:44 am

Thanks, it worked for me on Red Hat Linux 5.


36 khupcom September 21, 2011 at 3:19 am

Not working on my centos box.
# su - nginx
# ulimit -Hn
1024
# ulimit -Sn
1024


37 Soham December 8, 2011 at 10:30 am

Try in /etc/security/limits.conf file and at the end of the file add this.

* - nofile

Logout and login again and check ulimit -n


38 Leonardo September 27, 2011 at 5:10 pm

Hi,

Worked in RH 5.

It’s very important to know that if you have set max-files for user xx, you must start the application as user xx… if you start it as another user, the changes do not take effect.


39 Nick October 26, 2011 at 1:05 am

You could also try

ulimit -n xxxxx now

where xxxxx is the number, e.g. 16384. Worked for me on CentOS 5.7.


40 Killjoy November 23, 2011 at 7:13 pm

Nice one, worked fine for me too on Debian 6.


41 Colin August 10, 2012 at 7:22 pm

That did it! This should be added to the main article, as just editing the limits.conf did not take immediate effect for me on Debian 6.


42 kamalakar November 1, 2011 at 12:55 pm

I am trying to increase the ulimit on Ubuntu;
even after restarting, the changes are not reflected.


43 mrcool December 26, 2011 at 7:40 am

You must edit the config file. If you increase the file descriptor value through the terminal, the change is only temporary; if you want to permanently increase your ulimit you have to set it in the config file.


44 Baysie May 1, 2012 at 8:20 am

Modifying the /etc/security/limits.conf file didn’t seem to work for me at first – but then I realised that I needed to specify the domain for the user(s) as our systems use Active Directory authentication.

For anyone else who uses Active Directory authentication, you should use something like:

@DOMAIN+username          soft      nofile         10000
@DOMAIN+username          hard      nofile         10000


45 nixCraft May 1, 2012 at 9:28 am

Interesting. I never used AD based auth.

Appreciate your comment.


46 pau August 23, 2012 at 8:10 am

Even if you already have this in /etc/pam.d/login, you may also need to add the following to /etc/pam.d/common-session:

session required pam_limits.so


47 Rahul.Patil August 29, 2012 at 9:03 am

Hi,
how many maximum number of process(nproc) user can run in linux ?


48 Derek Shaw October 17, 2012 at 5:35 am

Using Ubuntu 64-bit, 10.04 desktop:
this fix does not work for the root user using the "wildcard" format.
It does work for all other users.

The complete solution for this config (as pointed out by Arstan, on April 23, 2008) is as follows
(copied from my /etc/security/limits.conf)
#added for samba testparm error
# rlimit_max: rlimit_max (1024) below minimum Windows limit (16384)
* hard nofile 32000
* soft nofile 16384
root hard nofile 32000
root soft nofile 16384

I also edited /etc/pam.d/common-session, but this had no effect on the root user.

I haven’t bothered to find out if smbd is run by another user (that is, whether the wildcard entries are really needed). In any event, the warning is quite misleading. According to a bug report/discussion on the samba.org website (from none other than Jeremy Allison), the message should really be along the lines that “Samba has increased your open file descriptors to meet the requirements of windows” http://lists.samba.org/archive/samba/2010-January/153320.html
https://bugzilla.samba.org/show_bug.cgi?id=7898

Cheers!
d.


49 subhankar sengupta October 27, 2012 at 10:02 pm

Considering you have edited file-max value in sysctl.conf and /etc/security/limits.conf correctly; then:
edit /etc/pam.d/login, adding the line:
session required /lib/security/pam_limits.so

and then do
#ulimit -n unlimited

Note that you may need to log out and back in again before the changes take effect


50 Donny April 29, 2013 at 8:05 pm

File descriptor soft limits in /etc/security/limits.conf don’t work with csh:
# Maximum open files

mqm hard nofile 10240
mqm soft nofile 10240

I had to add the line below in the .cshrc file:
limit descriptors 10240


51 LarrH September 6, 2013 at 1:01 am

It should be noted that, if you set limits in /etc/security/limits.conf for a specific user, that user MAY have to log off and then back on in order to ‘see’ the changes using the ‘ulimit -Hn’ or ‘ulimit -Sn’ command.

Running processes, for the particular user, may have to be stopped and restarted as well.


52 Garrett N September 6, 2013 at 2:45 pm

This article was very helpful. Thank you for taking the time to write it.
Cheers!!


53 dragun0v April 27, 2014 at 4:03 am

I have been running into an issue “too many open files”. I know that I can modify the max open files for that process/user but I have been trying to figure out a way to identify the files being opened in real time meaning if a process is opening files – just list the new files being opened.

To do that I attached strace to the process and observed what new files were being opened. I noticed that the number of open files is increasing, but strace isn’t reporting any open/read/etc. system calls. Strace is just stuck at futex(0x2ba8050739d0, FUTEX_WAIT, 16798, NULL). Can you suggest a way to figure out which files are being opened by the process in real time?

Also, is this the only way to identify the reason for why a process/user is opening files/sockets?


54 Jayhoonova July 17, 2014 at 2:18 pm

The limit is 65536, but at around 8000 open files I get a "too many open files" error…
:( Please help
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
fs.file-max = 65536


55 Steven August 5, 2014 at 11:29 am

Note that on RHEL/CentOS 7 the limits.conf file is no longer used by applications with a systemd startup script. For example, MySQL will give you errors like this regardless of the limits you set in sysctl.conf or limits.conf:

2014-08-05 13:24:40 1721 [Warning] Buffered warning: Changed limits: max_open_files: 1024 (requested 5000)
2014-08-05 13:24:40 1721 [Warning] Buffered warning: Changed limits: table_cache: 431 (requested 2000)

You now have to edit the service unit file directly, for example /usr/lib/systemd/system/mysqld.service,

and add LimitNOFILE=60000 to the [Service] section. I lost 2h on that.
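
A safer variant of the same fix (a sketch, reusing the mysqld example above): rather than editing the packaged unit file, which a package update can overwrite, put the override in a drop-in file such as /etc/systemd/system/mysqld.service.d/override.conf containing:

```ini
[Service]
LimitNOFILE=60000
```

then run systemctl daemon-reload followed by systemctl restart mysqld.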



