Linux Increase The Maximum Number Of Open Files / File Descriptors (FD)

Last updated September 16, 2015

How do I increase the maximum number of open files under CentOS Linux? How do I open more file descriptors under Linux?

The ulimit command provides control over the resources available to the shell and to processes started by it, on systems that allow such control. The maximum number of open file descriptors can be displayed with the following command (log in as the root user).

Command To List Number Of Open File Descriptors

Use the following command to display the maximum number of open file descriptors:
cat /proc/sys/fs/file-max
Output:

75000

This is a system-wide ceiling: the kernel will allocate at most 75000 file handles across all processes combined; it is not a per-user or per-session limit. To see the per-session hard and soft values, issue the commands as follows:
# ulimit -Hn
# ulimit -Sn

To see the hard and soft values for the httpd or oracle user, switch to that user first:
# su - username
In this example, su to the oracle user:
# su - oracle
$ ulimit -Hn
$ ulimit -Sn
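The hard/soft checks above can also be wrapped in a tiny script, which is handy when auditing several accounts (a minimal sketch; run it as, or after su to, the user you want to inspect):

```shell
#!/bin/sh
# Print the current session's file-descriptor limits.
# ulimit is a shell builtin, so the values reflect whichever
# user/session actually runs this script.
echo "soft limit: $(ulimit -Sn)"
echo "hard limit: $(ulimit -Hn)"
```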

System-wide File Descriptors (FD) Limits

The number of concurrently open file descriptors allowed throughout the system can be changed via the /etc/sysctl.conf file under Linux operating systems.
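To judge whether the system-wide limit actually needs raising, the kernel also exposes current usage in /proc/sys/fs/file-nr, which holds three numbers: allocated handles, allocated-but-unused handles, and the maximum (the same value as file-max). A quick sketch:

```shell
# /proc/sys/fs/file-nr reports: "allocated  unused  maximum"
read allocated unused max < /proc/sys/fs/file-nr
echo "file handles in use: $((allocated - unused)) of $max"
```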

The Maximum Number Of Open Files Was Reached, How Do I Fix This Problem?

Many applications, such as the Oracle database or the Apache web server, need this limit set quite high. You can increase the maximum number of open files by setting a new value in the kernel variable /proc/sys/fs/file-max as follows (log in as root):
# sysctl -w fs.file-max=100000
The above command forces the limit to 100000 files. To make the setting survive a reboot, edit the /etc/sysctl.conf file and add the following line:
# vi /etc/sysctl.conf
Append a config directive as follows:
fs.file-max = 100000
Save and close the file. Users need to log out and log back in for the change to take effect, or just type the following command:
# sysctl -p
Verify your settings with either of the following commands:
# cat /proc/sys/fs/file-max
OR
# sysctl fs.file-max

User Level FD Limits

The above procedure sets the system-wide file descriptor (FD) limit. However, you can restrict the httpd user (or any other user) to specific limits by editing the /etc/security/limits.conf file, enter:
# vi /etc/security/limits.conf
Set the httpd user's soft and hard limits as follows:
httpd soft nofile 4096
httpd hard nofile 10240

Save and close the file. To see the limits, enter:
# su - httpd
$ ulimit -Hn
$ ulimit -Sn
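Keep in mind that limits.conf only affects sessions started after the edit; a process that is already running keeps the limits it inherited at startup. You can inspect a running process's effective limit through /proc (shown here for the current shell via $$; substitute the PID of httpd or any other process):

```shell
# Each process's effective limits live in /proc/<pid>/limits.
# $$ expands to the current shell's PID.
grep "Max open files" /proc/$$/limits
```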

A note about RHEL/CentOS/Fedora/Scientific Linux users

Edit the /etc/pam.d/login file and add/modify the following line (make sure you use pam_limits.so):

session required pam_limits.so

Save and close the file.

72 comments

  1. Increasing the file handles is a good tip, but 5000 is very low these days. 200000 is more realistic for any modern system.
    Also, there’s no need to logout, just edit the /etc/sysctl.conf and then type ‘sysctl -p’ as root.

    Thanks,
    Tachyon

  2. How do I increase it on a Red Hat Linux server? How do I find the location of the sysctl.conf file, or find which file the limit has been set in?

    thanks in advance

  3. I tried this on CentOS (which, by the way, I've decided is the worst Linux distribution ever), and it doesn't seem to work. ulimit -n still says 1024, even after logout, even after reboot.

  4. /etc/sysctl.conf is good for the system-wide amount, but don’t forget that users also need different limits. See /etc/security/limits.conf (Debian, Redhat, SuSE all have it, probably most others as well) to assign specific limits on per-group, per-user, and default basises.

  5. I am running “Red Hat Enterprise Linux ES release 4 (Nahant Update 5)” and followed the instructions above. Like “baka.tom”, I was unable to see the change reflected by typing “ulimit -n”. I don’t know if this is a problem, but it certainly reduces the credibility of this article (unless I screwed up, of course).

  6. baka.tom / jason,

    The FAQ has been updated for latest kernel. It should work now. Let me know if you have any more problems.

    bourne, thanks for pointing out user level or group level filelimit option.

    I appreciate all feedback.

  7. Red Hat configuration requires the following line to be added for /etc/security/limits to work.

    in /etc/pam.d/login:

    session required pam_limits.so
  8. I'm trying to set 8192 on Ubuntu 7.10. Adding

    * soft nofile 8192
    * hard nofile 8192

    doesn't work, but when I change * to a username (let's say root) it applies.

    So how to change it system wide?

    1. In Debian (and thus Ubuntu) the wildcard does _not_ work for root, only for regular users. If you want to change the limits for root you have to create additional lines specifically for root.

  9. You could use the following command to check if the change is reflected:

    # ulimit -n -H

    That gives the hard value.

  10. Funny, this has little to do with the number of file descriptors… it merely reflects the number of open files one may have.

    open file != file descriptor :/

  11. unfortunately, open file != file descriptor. These are two distinct and separate things.

    somehow only confusion has been added here.

    1. Yes, but which one should we be concerned with? If you know the article's author made a mistake, it would be good to narrow down where it was made.

  12. “Use the following command command to display maximum number of open file descriptors:
    cat /proc/sys/fs/file-max
    Output:

    75000

    75000 files normal user can have open in single login session. ”

    I think 75000 means the whole system can support at most 75000 open files, not each user login session.

  13. To clear up any confusion for increasing the limit on Red Hat 5.X systems:

    # echo "fs.file-max=70000" >> /etc/sysctl.conf
    # sysctl -p
    # echo "* hard nofile 65536" >> /etc/security/limits.conf
    # echo "session required pam_limits.so" >> /etc/pam.d/login
    # ulimit -n -H
    65536

    In summary, set your max file descriptors to a number higher than your hard security 'nofile' limit, to leave room for the OS to run.

  14. In Red Hat Enterprise Linux (RHEL4 and RHEL5), after setting the nofile limit we need to make the modification below.
    In the /etc/security/limits.conf file add:
    root soft nofile 5000
    root hard nofile 6000

    Edit the /etc/pam.d/system-auth, and add this entry:
    session required /lib/security/$ISA/pam_limits.so

    It worked perfectly for me.
    After this change, open a new terminal and issue ulimit -a.
    There you can see the updated file descriptor value for the root user.

    1. Hi Francisco,

      when I issue “ulimit -n -H” on Mac OS X 10.6, it says “unlimited”. So I guess you don’t have to worry about it.

      Dirk

      1. We actually found that not to be true, or at least not what we think it is. We ran a test using Node.js and there was a limit somewhere in the 240 range. Once we set the ulimit higher we were able to increase the connections.

  15. I’m using Linux (Debian Lenny) on a server. I would like to keep my ulimit -n settings.
    The values in /etc/security/limits.conf (soft and hard limits) and in /etc/sysctl.conf have been increased.
    /etc/pam.d/login contains the "session required pam_limits.so" line
    I’ve also put the “ulimit -n 50000” command in .bashrc
    … and after logout/login and/or ssh, ulimit -n still returns 1024! What other tricky settings need to be changed? These incoherent and over-complicated version-dependent settings really make Linux unusable. I'd rather write code than waste my time on Linux configuration files.

  16. In the end, it suddenly worked, without changing anything more. How much time is needed before it’s taken into account? Strange and unreliable…

  17. I have a problem with "too many open files". I changed all the parameters, but the problem still exists.

    My system runs a web application that uses a dongle, and I guess the problem may be caused by the dongle.

    Can you help me? Thank you!

  18. Good Day Mate
    I was going to leave you in my will for this, but the mortgage payments might be too high.
    Anyway, long story short: I run FreeRADIUS, and on a new server I built it constantly reports "no db handles". If you check the FreeRADIUS forum this is a common problem, usually met with the curt reply "figure out what's using all the descriptors".
    I had it in my mind that I had encountered this problem many years ago and that it was a matter of increasing the system handles, but I just could not track it down.
    From your article I found my MySQL handles were set at 1024. I increased the soft limit to 5000 and the hard limit to 10000, and all is well in paradise again. Your blood's worth bottling.
    Cheers Terry

  19. >What is the difference between a hard limit and a soft limit?

    A user can decrease his own hard limit. Only root can increase his own hard limit.
    A user can decrease his soft limit, or increase it up to the hard limit; this is the effective limit.

    Say the default is Hard 1000 and soft 500 – that means you can only open 500 files unless you explicitly ask for more by increasing it to 1000. But you can’t get 1001 without root.

    >In the end, it suddenly worked, without changing anything more. How much time is needed before it’s taken into account? Strange and unreliable…

    The one piece of information you’re missing:

    New security settings, including group memberships, only apply to sessions started after the change was made (except for commands that affect the current session only). A reboot will always clear out any pre-change sessions that are still running.
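    The soft/hard interaction described above is easy to demonstrate in any shell (a minimal sketch; as a non-root user the soft limit may be raised at most up to the hard limit):

    ```shell
    # A user may freely lower the soft limit or raise it up to
    # the hard limit; exceeding the hard limit requires root.
    soft=$(ulimit -Sn)
    hard=$(ulimit -Hn)
    echo "soft=$soft hard=$hard"
    ulimit -Sn "$hard"        # allowed: soft raised to the hard ceiling
    echo "soft is now $(ulimit -Sn)"
    ```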

  20. Hi, can someone please tell me why the number of open files increases so rapidly when changing from single-user mode to multi-user mode?

      1. And what if you run multi-user mode from the console, i.e. without X? What process or processes run that make the number increase so much, like from 160 to 900 open files?

        1. Excluding operating system processes, if an application running in multi-user mode does not close the files it has opened, it will create this issue. Please make sure that any external application (your own or a deployed one) running in multi-user mode closes all the files it opens.

  21. If you have cPanel, check /etc/profile.d/limits.sh. Although you shouldn't change it there, it's possibly the root cause of your changes not sticking.

  22. Could you please let me know: when we change the ulimit for root using "ulimit -n", which configuration file does the change reflect in?

    1. Try the /etc/security/limits.conf file and add this at the end of the file.

      * – nofile

      Log out and log in again, then check ulimit -n

  23. Hi,

    Worked in RH 5.

    It's very important to know that if you have set max-files for user xx, you must start the application as user xx. If you start it as another user, the changes do not take effect.

    1. That did it! This should be added to the main article, as just editing the limits.conf did not take immediate effect for me on Debian 6.

    1. You must edit the config file. If you increase the file descriptor value through the terminal, the change is only temporary; if you want to permanently increase your ulimit you have to set it in the config file.

  24. Modifying the /etc/security/limits.conf file didn’t seem to work for me at first – but then I realised that I needed to specify the domain for the user(s) as our systems use Active Directory authentication.

    For anyone else who uses Active Directory authentication, you should use something like:

    @DOMAIN+username          soft    nofile    10000
    @DOMAIN+username          hard    nofile    10000
    
  25. Even if you already have this in /etc/pam.d/login, you may also need to add the following to /etc/pam.d/common-session:

    session required pam_limits.so

  26. using ubuntu 64-bit, 10.04 desktop
    this fix does not work for the root user using the “wildcard” format.
    it does work for all other users.

    The complete solution for this config (as pointed out by Arstan, on April 23, 2008) is as follows
    (copied from my /etc/security/limits.conf)
    #added for samba testparm error
    # rlimit_max: rlimit_max (1024) below minimum Windows limit (16384)
    * hard nofile 32000
    * soft nofile 16384
    root hard nofile 32000
    root soft nofile 16384

    I also edited /etc/pam.d/common-session, but this had no effect on the root user.

    I haven’t bothered to find out if smbd is run by another user (that is, whether the wildcard entries are really needed). In any event, the warning is quite misleading. According to a bug report/discussion on the samba.org website (from none other than Jeremy Allison), the message should really be along the lines that “Samba has increased your open file descriptors to meet the requirements of windows” http://lists.samba.org/archive/samba/2010-January/153320.html
    https://bugzilla.samba.org/show_bug.cgi?id=7898

    Cheers!
    d.

  27. Assuming you have edited the file-max value in sysctl.conf and /etc/security/limits.conf correctly, then:
    edit /etc/pam.d/login, adding the line:
    session required /lib/security/pam_limits.so

    and then run:
    # ulimit -n unlimited

    Note that you may need to log out and back in again before the changes take effect

  28. File descriptor soft limits in /etc/security/limits.conf don't work with csh:
    # Maximum open files

    mqm hard nofile 10240
    mqm soft nofile 10240

    I had to add the line below in the .cshrc file:
    limit descriptors 10240

  29. It should be noted that if you set limits in /etc/security/limits.conf for a specific user, that user MAY have to log off and back on in order to 'see' the changes via the 'ulimit -Hn' or 'ulimit -Sn' commands.

    Running processes, for the particular user, may have to be stopped and restarted as well.

  30. I have been running into an issue “too many open files”. I know that I can modify the max open files for that process/user but I have been trying to figure out a way to identify the files being opened in real time meaning if a process is opening files – just list the new files being opened.

    To do that I attach strace to the process and observe what new files are being opened. I noticed that the number of open files is increasing, but strace isn't reporting any open/read/etc. system calls; it is just stuck at futex(0x2ba8050739d0, FUTEX_WAIT, 16798, NULL). Can you suggest a way to figure out which files the process is opening in real time?

    Also, is this the only way to identify the reason for why a process/user is opening files/sockets?
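    One lightweight alternative to strace for this is to watch the process's file-descriptor table directly under /proc (a sketch using the current shell's PID, $$; substitute the PID you are debugging — lsof -p <pid> gives the same list with file names, if lsof is installed):

    ```shell
    # Count the file descriptors a process currently holds open
    # by listing its /proc/<pid>/fd directory.
    ls /proc/$$/fd | wc -l
    ```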

  31. The limit is 65536, but at around 8000 open files I got a "too many open files" error…
    :( Please help
    net.ipv4.ip_forward = 0
    net.ipv4.conf.default.rp_filter = 1
    net.ipv4.conf.default.accept_source_route = 0
    kernel.sysrq = 0
    kernel.core_uses_pid = 1
    net.ipv4.tcp_syncookies = 1
    net.bridge.bridge-nf-call-ip6tables = 0
    net.bridge.bridge-nf-call-iptables = 0
    net.bridge.bridge-nf-call-arptables = 0
    kernel.msgmnb = 65536
    kernel.msgmax = 65536
    kernel.shmmax = 68719476736
    kernel.shmall = 4294967296
    fs.file-max = 65536

  32. Note that on RHEL/CentOS 7 the limits.conf file is no longer used by applications with a systemd startup script. For example, MySQL will give you errors like this regardless of the limits you set in sysctl.conf or limits.conf:

    2014-08-05 13:24:40 1721 [Warning] Buffered warning: Changed limits: max_open_files: 1024 (requested 5000)
    2014-08-05 13:24:40 1721 [Warning] Buffered warning: Changed limits: table_cache: 431 (requested 2000)

    You now have to directly edit the service startup script, for example /usr/lib/systemd/system/mysqld.service

    and add LimitNOFILE=60000 to the [Service] section. I lost 2 hours on that.

    1. Oh man! Thank you Sir!
      I lost many more hours than you on that… the only work-around I had in place so far was to run `SET GLOBAL max_connections = n` right after service start.
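    For later readers: instead of editing the packaged unit file under /usr/lib (which the next package update will overwrite), systemd also reads drop-in overrides. A sketch, assuming the service is named mysqld: create /etc/systemd/system/mysqld.service.d/limits.conf with:

    ```
    [Service]
    LimitNOFILE=60000
    ```

    then run systemctl daemon-reload followed by systemctl restart mysqld.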

  33. We have a very high-traffic web site and php-fpm normally breaches the 65K limit. How can we increase the open-file limit for the www-data user to 999999?

  34. # sysctl -w fs.file-max=100000
    Is there any reason I should set this to 100000 instead of 1000000 or 100000000? What is the idea behind having a maximum open file limit at all?

  35. Hi, on my cPanel server with suPHP, when I type `su - nobody` it shows "This account is currently not available." How can I tune my cPanel server? Thanks

  36. When the system boots, daemons start from their own startup scripts in init.d and ignore the limits.conf settings; the changes made in the limits.conf and pam.d files are only applied in new sessions, such as when a user starts or restarts a service. One solution I have found to set the nofile limit on startup for a service is:

    1. make your changes in the /etc/security/limits.conf

    * soft nofile 9991
    * hard nofile 9992

    root soft nofile 9891
    root hard nofile 9892

    2. chkconfig servicename off
    # this is to stop the system from starting the service

    3. in /etc/rc.local

    su - root -c "service httpd start"

    # add a start command to start the service through here
    # this will act the same as if root was to start the service
    # which will apply the changes to the service on startup
    # i choose httpd just as an example

    I know it's not the most secure method, and probably just a band-aid, but it's saved me a lot of frustration.

  37. Also, to check if the service received the new file limit after rebooting:

    – ps aux | grep servicename
    – track the pid for the service
    – cat /proc/pid#/limits
    – and it should show the new limits

  38. Does anyone else see that a hard nofile set to unlimited breaks the pam.d login process on RHEL 6? I can set soft to unlimited, but as soon as I set hard to unlimited, login is broken even for root.

  39. The same happens on a CentOS 6 VM.
    I only have a root user there and I cannot log in to the machine. Any solution short of recreating the VM?

  40. In limits.conf, can the soft limit be defined with a value greater than the hard limit? Like below:
    user soft nofile 65536
    user hard nofile 65534

  41. What determines the upper bound in /etc/security/limits.conf?

    In my attempt I could see that if I set the value above 1024000, the value is treated as garbage and ulimit -n returns the default 1024.
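    The ceiling observed here is most likely the kernel's fs.nr_open sysctl, which bounds any per-process nofile value (it defaults to 1048576 on many kernels, consistent with values above ~1024000 being rejected). A quick, Linux-specific check:

    ```shell
    # fs.nr_open is the absolute upper bound for a per-process
    # "nofile" limit; limits.conf values above it are ignored.
    cat /proc/sys/fs/nr_open
    ```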
