
Linux Increase The Maximum Number Of Open Files / File Descriptors (FD)

How do I increase the maximum number of open files under CentOS Linux? How do I open more file descriptors under Linux?

The ulimit command provides control over the resources available to the shell and to processes started by it, on systems that allow such control. The maximum number of open file descriptors can be displayed with the following command (log in as the root user).

Command To List Number Of Open File Descriptors

Use the following command to display the system-wide maximum number of open file descriptors:
cat /proc/sys/fs/file-max
Output:

75000

This means the kernel will allocate at most 75000 file handles system-wide, across all users and processes; it is not a per-user limit. To see the per-shell hard and soft values, issue the commands as follows:
# ulimit -Hn
# ulimit -Sn

To see the hard and soft values for the httpd or oracle user, issue the commands as follows:
# su - username
In this example, su to the oracle user, enter:
# su - oracle
$ ulimit -Hn
$ ulimit -Sn
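The distinction between the two values matters in practice: a process can lower its own soft limit freely and raise it again up to, but not beyond, the hard limit. A minimal sketch, run in a subshell so the current shell's limits are untouched (512 is an arbitrary example value, assumed to be below your hard limit):

```shell
# Lower the soft nofile limit in a subshell, then raise it back to the
# hard limit. Nothing outside the ( ... ) is affected.
(
  ulimit -Sn 512                 # lower the soft limit
  echo "soft limit is now $(ulimit -Sn)"
  ulimit -Sn "$(ulimit -Hn)"     # raise it back up to the hard limit
  echo "soft limit raised to $(ulimit -Sn)"
)
```

Trying to raise the soft limit past the hard limit, or to raise the hard limit as a non-root user, fails with an error.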

System-wide File Descriptors (FD) Limits

The number of concurrently open file descriptors allowed throughout the system can be changed via the /etc/sysctl.conf file under Linux operating systems.

The Maximum Number Of Open Files Was Reached, How Do I Fix This Problem?

Many applications such as the Oracle database or the Apache web server need this limit set quite a bit higher. You can increase the maximum number of open files by setting a new value in the kernel variable /proc/sys/fs/file-max as follows (log in as root):
# sysctl -w fs.file-max=100000
The above command raises the limit to 100000 files. To make the setting persist across reboots, edit the /etc/sysctl.conf file:
# vi /etc/sysctl.conf
Append a config directive as follows:
fs.file-max = 100000
Save and close the file. Users need to log out and log back in again for the changes to take effect, or just type the following command to reload the settings immediately:
# sysctl -p
Verify your settings with the command:
# cat /proc/sys/fs/file-max
OR
# sysctl fs.file-max
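Both commands read the same kernel variable: a sysctl key in dotted notation maps onto a path under /proc/sys, with the dots replaced by slashes. A quick sketch of that mapping (the ${var//pattern/repl} expansion assumes bash):

```shell
# Map a sysctl key to its /proc path by swapping dots for slashes.
key="fs.file-max"
path="/proc/sys/${key//./\/}"
echo "$path"      # prints /proc/sys/fs/file-max
cat "$path"       # same number that `sysctl fs.file-max` reports
```

The same rule works for any sysctl key, e.g. net.ipv4.ip_forward lives at /proc/sys/net/ipv4/ip_forward.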

User Level FD Limits

The above procedure sets system-wide file descriptor (FD) limits. However, you can restrict the httpd user (or any other user) to specific limits by editing the /etc/security/limits.conf file, enter:
# vi /etc/security/limits.conf
Set the httpd user's soft and hard limits as follows:
httpd soft nofile 4096
httpd hard nofile 10240
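For reference, each limits.conf line has four fields: domain (a user name, @group, or the * wildcard), type (soft or hard), item (here nofile), and value. As a sketch, equivalent group-wide entries, assuming a hypothetical apache group, would look like:

```
@apache soft nofile 4096
@apache hard nofile 10240
```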

Save and close the file. To see limits, enter:
# su - httpd
$ ulimit -Hn
$ ulimit -Sn
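When a process does hit its nofile limit, the first step is usually to see how many descriptors it actually has open. On Linux, each open descriptor appears as an entry under /proc/<pid>/fd, so counting those entries gives a live figure. A small sketch, inspecting the current shell via $$ (substitute any PID you want to check):

```shell
# Count the file descriptors currently open by a process.
# $$ is the current shell; replace it with any PID you can read.
pid=$$
count=$(ls /proc/"$pid"/fd | wc -l)
echo "process $pid has $count open file descriptors"
```

Comparing this count against `ulimit -Sn` for the owning user shows how close the process is to its limit.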



  • Tachyon October 21, 2006, 2:35 am

    Increasing the file handles is a good tip, but 5000 is very low these days. 200000 is more realistic for any modern system.
    Also, there’s no need to logout, just edit the /etc/sysctl.conf and then type ‘sysctl -p’ as root.

    Thanks,
    Tachyon

  • Maroon Ibrahim June 23, 2007, 12:25 pm

    Does this command work for Debian and does it affect SQUID file descriptor too?
    Best Regards?

  • nixCraft August 15, 2007, 6:07 pm

    Maroon,

    Yes it works on Debian and all other Linux systems/distros.

  • Sathish A August 28, 2007, 11:40 pm

    How do I increase it on a Red Hat Linux server? How do I find the location of the sysctl.conf file, or find in which file the limit has been set?

    thanks in advance

  • baka.tom September 11, 2007, 11:34 pm

    i tried this on a CentOS (which by the way, i’ve decided the worst linux distribution ever), and it doesn’t seem to work. ulimit -n still says 1024, even after logout, even after reboot.

  • bourne September 13, 2007, 2:10 pm

    /etc/sysctl.conf is good for the system-wide amount, but don’t forget that users also need different limits. See /etc/security/limits.conf (Debian, Redhat, SuSE all have it, probably most others as well) to assign specific limits on per-group, per-user, and default basises.

  • jason September 13, 2007, 8:55 pm

    I am running “Red Hat Enterprise Linux ES release 4 (Nahant Update 5)” and followed the instructions above. Like “baka.tom”, I was unable to see the change reflected by typing “ulimit -n”. I don’t know if this is a problem, but it certainly reduces the credibility of this article (unless I screwed up, of course).

  • nixCraft September 14, 2007, 4:32 am

    baka.tom / jason,

    The FAQ has been updated for latest kernel. It should work now. Let me know if you have any more problems.

    bourne, thanks for pointing out user level or group level filelimit option.

    I appreciate all feedback.

  • jackson December 1, 2007, 1:54 am

    Red Hat configuration requires the following line to be added for /etc/security/limits to work.

    in /etc/pam.d/login
    session required pam_limits.so

  • Arstan April 23, 2008, 2:23 am

    I’m trying to make 8192 on Ubuntu 7.10, adding

    * soft nofile 8192
    * hard nofile 8192

    doesn’t work, but when I change * to a username (let’s say root) it applies.

    So how to change it system wide?

    • Tozz August 28, 2014, 11:21 am

      In Debian (and thus Ubuntu) the wildcard does _not_ work for root, only for regular users. If you want to change the limits for root you have to create additional lines specifically for root.

  • shankar June 19, 2009, 3:58 pm

    You could use the following command to check if the given change is reflected:

    #ulimit -n -H

    that gives the hard value…

  • anonymous July 4, 2009, 1:33 pm

    Funny, file-max has little to do with the number of open file descriptors :/

  • joseph bloe July 4, 2009, 1:34 pm

    Funny, this has little to do with the number of file descriptors….. it merely reflects the number of open files one may have.

    open file != file descriptor :/

  • joseph bloe July 4, 2009, 1:36 pm

    unfortunately, open file != file descriptor. These are two distinct and separate things.

    somehow only confusion has been added here.

  • hywl51 October 28, 2009, 5:44 am

    “Use the following command command to display maximum number of open file descriptors:
    cat /proc/sys/fs/file-max
    Output:

    75000

    75000 files normal user can have open in single login session. ”

    I think 75000 should mean the whole system can support 75000 open files at most , not for per user login.

  • Adam HP February 25, 2010, 9:18 am

    To clear up any confusion for increasing the limit on Red Hat 5.X systems:

    # echo "fs.file-max=70000" >> /etc/sysctl.conf
    # sysctl -p
    # echo "* hard nofile 65536" >> /etc/security/limits.conf
    # echo "session required pam_limits.so" >> /etc/pam.d/login
    # ulimit -n -H
    65536

    In summary set your max file descriptors to a number higher than your hard security ‘nofile’ limit to leave room for the OS to run.

  • Ramesh March 22, 2010, 2:41 am

    Can anyone explain all the attributes in ulimit -a and how it impacts the performance of a system?

  • Thamizhananl P April 29, 2010, 9:11 am

    In Red Hat Enterprise Linux (RHEL4 and RHEL5), after setting the nofile limit we need to make the modification below.
    In the /etc/security/limits.conf file I added:
    root soft nofile 5000
    root hard nofile 6000

    Edit the /etc/pam.d/system-auth, and add this entry:
    session required /lib/security/$ISA/pam_limits.so

    It worked perfectly for me.
    After this change, open a new terminal and issue ulimit -a.
    There you can see the updated file descriptor value for the root user.

  • Jeoffrey July 9, 2010, 1:16 am

    This is very good. Thanks for the post =)

  • jfd3 August 17, 2010, 8:14 pm

    What is the difference between a hard limit and a soft limit?
    Thanks,

    • Broshep McDaggles March 11, 2015, 4:41 pm

      A hard limit cannot be increased by a non-root user once it is set; a soft limit may be increased up to the value of the hard limit. (It’s buried deep in the man pages)

  • Francisco September 8, 2010, 9:15 pm

    Is there any equivalent for MAC OS x (darwin) ??

    • Dirk September 11, 2010, 10:02 am

      Hi Francisco,

      when I issue “ulimit -n -H” on Mac OS X 10.6, it says “unlimited”. So I guess you don’t have to worry about it.

      Dirk

      • JP June 14, 2013, 4:37 pm

        We actually found that to be not true or not something we think it is. We ran a test using Node.JS and there was a limit somewhere in the 240 range. Once we set the ulimit higher we were able to increase the connections.

  • Joel December 14, 2010, 10:15 am

    I’m using Linux (Debian Lenny) on a server. I would like to keep my ulimit -n settings.
    The values in /etc/security/limits.conf (soft and hard limits) and in /etc/sysctl.conf have been increased.
    /etc/pam.d/login contains the "session required pam_limits.so" line.
    I’ve also put the “ulimit -n 50000” command in .bashrc
    … and after logout/login and/or ssh, ulimit -n still returns 1024! What tricky settings also need to be changed? These incoherent and over-complicated version-dependent settings really make linux unusable. I’d rather write code than waste my time on linux configuration files.

  • Joel December 14, 2010, 10:18 am

    In the end, it suddenly worked, without changing anything more. How much time is needed before it’s taken into account? Strange and unreliable…

  • jee January 2, 2011, 4:19 pm

    I have a problem about “too many open file “, i had changed all parameters,

    but this problem is exist.

    My system have a web application system, that have a dongle, I guess the problem

    maybe caused by dongle .

    Can you help me , thank you !

  • Terry Antonio January 25, 2011, 12:22 am

    Good Day Mate
    I was going to leave you in my will for this but the mortgage payments might be too high.
    Anyway, long story short: I run FreeRADIUS on a new server I built and it constantly reports “no db handles”. If you check the FreeRADIUS forum this is a common problem and is usually met with the curt reply “figure out what’s using all the descriptors”.
    I had in my mind I had encountered this problem many years ago and it was a matter of increasing the system handles but just could not track it down.
    From your article I found my MySQL handles were set at 1024. I increased the soft limit to 5000 and the hard limit to 10000, and all is well in paradise again. Your blood’s worth bottling!
    Cheers Terry

  • mahuja March 21, 2011, 10:29 am

    >What is the difference between a hard limit and a soft limit?

    A user can decrease his own hard limit. Only root can increase his own hard limit.
    A user can decrease his soft limit, or increase it up to the hard limit; this is the effective limit.

    Say the default is Hard 1000 and soft 500 – that means you can only open 500 files unless you explicitly ask for more by increasing it to 1000. But you can’t get 1001 without root.

    >In the end, it suddenly worked, without changing anything more. How much time is needed before it’s taken into account? Strange and unreliable…

    The one piece of information you’re missing:

    New security settings, also including group memberships, only apply for sessions started after the change was done. Except commands that affect the current session only. A reboot will always clear out all the pre-change sessions that are still running.

  • Marco Smith March 23, 2011, 8:13 am

    Hi, Can someone please tell me why the amount of open files increase so rapidly when changing from Single user mode to multi user mode?

    • nixCraft March 23, 2011, 9:27 am

      More users + More process + More background process == More open files

      • Marco Smith March 23, 2011, 9:32 am

        And what if you run multi-user mode from the console, i.e. without X Windows? Like, for instance, what process/processes run that make the number increase SO high… like from 160 to 900 open files?

        • Thamizhannal March 23, 2011, 10:01 am

          Excluding operating system processes: if any application running in multi-user mode does not close the files it has opened, it would create this issue. Please make sure that any external application (your own or a deployed one) running in multi-user mode closes all the files it has opened.

  • cgrinds April 25, 2011, 9:35 pm

    If you have cPanel, check /etc/profile.d/limits.sh. Although you shouldn’t change it here it’s possibly the root cause of your changes not sticking.

  • pep May 12, 2011, 10:51 am

    Could you please let me know: when we change the ulimit for root using “ulimit -n”, which configuration file does the change end up in?

  • Mukesh September 6, 2011, 7:44 am

    Thanks, it worked for me on Red Hat Linux 5.

  • khupcom September 21, 2011, 3:19 am

    Not working on my centos box.
    # su - nginx
    # ulimit -Hn
    1024
    # ulimit -Sn
    1024

    • Soham December 8, 2011, 10:30 am

      Try in /etc/security/limits.conf file and at the end of the file add this.

      * - nofile

      Logout and login again and check ulimit -n

  • Leonardo September 27, 2011, 5:10 pm

    Hi,

    Worked in RH 5.

    It’s very important to know that if you have set the max files for user xx, you must start the application as user xx. If you start it as another user, the changes do not take effect.

  • Nick October 26, 2011, 1:05 am

    You could also try

    ulimit -n xxxxx now

    where xxxxx is the number, e.g. 16384. Worked for me on CentOS 5.7.

    • Killjoy November 23, 2011, 7:13 pm

      Nice one, worked fine for me too on Debian 6.

    • Colin August 10, 2012, 7:22 pm

      That did it! This should be added to the main article, as just editing the limits.conf did not take immediate effect for me on Debian 6.

  • kamalakar November 1, 2011, 12:55 pm

    I am trying to increase ulimit on Ubuntu
    even after restarting changes are not reflected

    • mrcool December 26, 2011, 7:40 am

      You must edit the config file. If you increase the file descriptor value through the terminal it is only temporary; if you want to permanently increase your ulimit you have to set it in the config file.

  • Baysie May 1, 2012, 8:20 am

    Modifying the /etc/security/limits.conf file didn’t seem to work for me at first – but then I realised that I needed to specify the domain for the user(s) as our systems use Active Directory authentication.

    For anyone else who uses Active Directory authentication, you should use something like:

    @DOMAIN+username          soft      nofile         10000
    @DOMAIN+username          hard    nofile         10000
    
    • nixCraft May 1, 2012, 9:28 am

      Interesting. I never used AD based auth.

      Appreciate your comment.

  • pau August 23, 2012, 8:10 am

    Even if you already have this in /etc/pam.d/login, you may also need to add the following to /etc/pam.d/common-session:

    session required pam_limits.so

  • Rahul.Patil August 29, 2012, 9:03 am

    Hi,
    how many maximum number of process(nproc) user can run in linux ?

  • Derek Shaw October 17, 2012, 5:35 am

    using ubuntu 64-bit, 10.04 desktop
    this fix does not work for the root user using the “wildcard” format.
    it does work for all other users.

    The complete solution for this config (as pointed out by Arstan, on April 23, 2008) is as follows
    (copied from my /etc/security/limits.conf)
    #added for samba testparm error
    # rlimit_max: rlimit_max (1024) below minimum Windows limit (16384)
    * hard nofile 32000
    * soft nofile 16384
    root hard nofile 32000
    root soft nofile 16384

    I also edited /etc/pam.d/common-session, but this had no effect on the root user.

    I haven’t bothered to find out if smbd is run by another user (that is, whether the wildcard entries are really needed). In any event, the warning is quite misleading. According to a bug report/discussion on the samba.org website (from none other than Jeremy Allison), the message should really be along the lines that “Samba has increased your open file descriptors to meet the requirements of windows” http://lists.samba.org/archive/samba/2010-January/153320.html
    https://bugzilla.samba.org/show_bug.cgi?id=7898

    Cheers!
    d.

  • subhankar sengupta October 27, 2012, 10:02 pm

    Considering you have edited file-max value in sysctl.conf and /etc/security/limits.conf correctly; then:
    edit /etc/pam.d/login, adding the line:
    session required /lib/security/pam_limits.so

    and then do
    #ulimit -n unlimited

    Note that you may need to log out and back in again before the changes take effect

  • Donny April 29, 2013, 8:05 pm

    File descriptor soft limits in /etc/security/limits.conf don’t work with csh.
    # Maximum open files

    mqm hard nofile 10240
    mqm soft nofile 10240

    I had to add the line below in the .cshrc file.
    limit descriptors 10240

  • LarrH September 6, 2013, 1:01 am

    It should be noted… that, if you set limits in /etc/security/limits.conf for a specific user… that user MAY have to logoff and then back on in order to ‘see’ the changes using the ‘ulimit -Hn’ or ‘ulimit -Sn’ command.

    Running processes, for the particular user, may have to be stopped and restarted as well.

  • Garrett N September 6, 2013, 2:45 pm

    This article was very helpful. Thank you for taking the time to write it.
    Cheers!!

  • dragun0v April 27, 2014, 4:03 am

    I have been running into an issue “too many open files”. I know that I can modify the max open files for that process/user but I have been trying to figure out a way to identify the files being opened in real time meaning if a process is opening files – just list the new files being opened.

    To do that I attach strace to that process and observe what new files are being opened. I noticed that that the number of open files are increasing but strace isn’t reporting any open/read/etc system calls. Strace is just stuck at futex(0x2ba8050739d0, FUTEX_WAIT, 16798, NULL. Can you suggest a way to figure out files being opened by the process in real time?

    Also, is this the only way to identify the reason for why a process/user is opening files/sockets?

  • Jayhoonova July 17, 2014, 2:18 pm

    The limit is 65536 but with only around 8000 open files I got a too many open files error…
    :( Please help
    net.ipv4.ip_forward = 0
    net.ipv4.conf.default.rp_filter = 1
    net.ipv4.conf.default.accept_source_route = 0
    kernel.sysrq = 0
    kernel.core_uses_pid = 1
    net.ipv4.tcp_syncookies = 1
    net.bridge.bridge-nf-call-ip6tables = 0
    net.bridge.bridge-nf-call-iptables = 0
    net.bridge.bridge-nf-call-arptables = 0
    kernel.msgmnb = 65536
    kernel.msgmax = 65536
    kernel.shmmax = 68719476736
    kernel.shmall = 4294967296
    fs.file-max = 65536

  • Steven August 5, 2014, 11:29 am

    Note that on RHEL/CentOS 7 the limits.conf file is no longer used by applications with a systemd startup script; for example, MySQL will give you errors like this regardless of the limits you set in sysctl.conf or limits.conf:

    2014-08-05 13:24:40 1721 [Warning] Buffered warning: Changed limits: max_open_files: 1024 (requested 5000)
    2014-08-05 13:24:40 1721 [Warning] Buffered warning: Changed limits: table_cache: 431 (requested 2000)

    You now have to directly edit the service startup script, for example /usr/lib/systemd/system/mysqld.service

    and add LimitNOFILE=60000 to the [Service] section. I lost 2h with that.

    • MoZoY October 5, 2014, 2:10 am

      Oh man! Thank you Sir!
      I lost much more hours than you on that… the only work-around i had in place so far was to `SET GLOBAL max_connection = n` right after service start.

  • rakesh September 16, 2014, 9:02 am

    We have a very high traffic web site and php-fpm normally breaches the 65K limit. How can we increase the open file limit for the www-data user to 999999?

  • Eric J. January 26, 2015, 2:48 pm

    # sysctl -w fs.file-max=100000
    Is there any reason I should set this to 100000 instead of 1000000 or 100000000? What is the idea behind having a maximum open file limit at all?

  • Tnk April 19, 2015, 2:02 pm

    Hi, on my cPanel server with suPHP, when I type su - nobody it shows “This account is currently not available.” How can I tune my cPanel server? Thanks
