Linux Increase The Maximum Number Of Open Files / File Descriptors (FD)

How do I increase the maximum number of open files under CentOS Linux? How do I open more file descriptors under Linux?

The ulimit command provides control over the resources available to the shell and to processes started by it, on systems that allow such control. The maximum number of open file descriptors can be displayed with the following command (log in as the root user).

Command To Display Maximum Number Of Open File Descriptors

Use the following command to display the maximum number of open file descriptors:
cat /proc/sys/fs/file-max


A value of 75000 here means the system as a whole can have at most 75000 files open at once; it is not a per-user limit. To see the hard and soft per-process values, issue the commands as follows:
# ulimit -Hn
# ulimit -Sn
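The system-wide maximum and the current shell's per-process limits can be read together; a minimal read-only sketch (the numbers printed will differ from system to system):

```shell
#!/bin/sh
# Read the system-wide maximum and this shell's own per-process
# soft and hard open-file limits. No root privileges required.
echo "System-wide maximum (fs.file-max):   $(cat /proc/sys/fs/file-max)"
echo "Per-process hard limit (ulimit -Hn): $(ulimit -Hn)"
echo "Per-process soft limit (ulimit -Sn): $(ulimit -Sn)"
```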

To see the hard and soft values for the httpd or oracle user, issue the commands as follows:
# su - username
In this example, su to the oracle user and enter:
# su - oracle
$ ulimit -Hn
$ ulimit -Sn

System-wide File Descriptors (FD) Limits

The number of concurrently open file descriptors allowed throughout the system can be changed via the /etc/sysctl.conf file under Linux operating systems.

The Maximum Number Of Open Files Was Reached, How Do I Fix This Problem?

Many applications, such as the Oracle database or the Apache web server, need this limit to be considerably higher. You can increase the maximum number of open files by setting a new value in the kernel variable /proc/sys/fs/file-max as follows (log in as root):
# sysctl -w fs.file-max=100000
The above command raises the limit to 100000 files. To make the setting persist across reboots, edit the /etc/sysctl.conf file:
# vi /etc/sysctl.conf
Append a config directive as follows:
fs.file-max = 100000
Save and close the file. Users need to log out and log back in for the changes to take effect, or just type the following command:
# sysctl -p
Verify your settings with either command:
# cat /proc/sys/fs/file-max
# sysctl fs.file-max
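To see how close the system currently is to this limit, compare the allocated-handle count in fs.file-nr against fs.file-max; a quick read-only check:

```shell
#!/bin/sh
# /proc/sys/fs/file-nr holds three fields: the number of allocated
# file handles, the number allocated but unused (0 on modern kernels),
# and the maximum (the same value as fs.file-max).
read -r allocated unused maximum < /proc/sys/fs/file-nr
echo "Allocated handles: $allocated"
echo "Maximum handles:   $maximum"
```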

User Level FD Limits

The above procedure sets the system-wide file descriptor (FD) limit. However, you can give the httpd user (or any other user) specific limits by editing the /etc/security/limits.conf file, enter:
# vi /etc/security/limits.conf
Set the httpd user's soft and hard limits as follows:
httpd soft nofile 4096
httpd hard nofile 10240

Save and close the file. To see the limits, enter:
# su - httpd
$ ulimit -Hn
$ ulimit -Sn
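The ulimit output only describes a fresh login shell; to check the limit an already-running process is actually operating under, read its /proc entry. Here $$ (the current shell's own PID) is used as an example; substitute the daemon's PID, e.g. from pidof httpd:

```shell
#!/bin/sh
# Show the "Max open files" row from a process's limits table.
# $$ expands to this shell's own PID; replace it with the PID of
# httpd, oracle, etc. to inspect a running daemon's effective limit.
grep 'Max open files' "/proc/$$/limits"
```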

A Note About RHEL/CentOS/Fedora/Scientific Linux Users

Edit the /etc/pam.d/login file and add/modify the following line (make sure the pam_limits module is enabled, otherwise the limits.conf settings are ignored at login):

session required pam_limits.so

Save and close the file.
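To confirm that pam_limits is actually wired into the PAM stack, a quick grep will do. Note that file names vary by distro; Debian/Ubuntu typically use /etc/pam.d/common-session rather than /etc/pam.d/login:

```shell
#!/bin/sh
# Search the likely PAM configuration files for pam_limits; if no
# line is found, limits.conf values will not be applied at login.
grep -h pam_limits /etc/pam.d/login /etc/pam.d/common-session 2>/dev/null \
    || echo "pam_limits not enabled in the checked files"
```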

72 comments
  • Tachyon Oct 21, 2006 @ 2:35

    Increasing the file handles is a good tip, but 5000 is very low these days. 200000 is more realistic for any modern system.
    Also, there’s no need to logout, just edit the /etc/sysctl.conf and then type ‘sysctl -p’ as root.


  • Maroon Ibrahim Jun 23, 2007 @ 12:25

    Does this command work for Debian and does it affect SQUID file descriptor too?
    Best Regards?

  • 🐧 nixCraft Aug 15, 2007 @ 18:07


    Yes it works on Debian and all other Linux systems/distros.

  • Sathish A Aug 28, 2007 @ 23:40

    how to increase in a Redhat linux server? How to find the location of sysctl.conf file or how to find in which file the limit has been set?

    thanks in advance

  • baka.tom Sep 11, 2007 @ 23:34

    i tried this on a CentOS (which by the way, i’ve decided the worst linux distribution ever), and it doesn’t seem to work. ulimit -n still says 1024, even after logout, even after reboot.

    • Diego Oct 4, 2016 @ 20:39

      I guess that CentOS is not the problem.

  • bourne Sep 13, 2007 @ 14:10

    /etc/sysctl.conf is good for the system-wide amount, but don’t forget that users also need different limits. See /etc/security/limits.conf (Debian, Redhat, SuSE all have it, probably most others as well) to assign specific limits on per-group, per-user, and default basises.

  • jason Sep 13, 2007 @ 20:55

    I am running “Red Hat Enterprise Linux ES release 4 (Nahant Update 5)” and followed the instructions above. Like “baka.tom”, I was unable to see the change reflected by typing “ulimit -n”. I don’t know if this is a problem, but it certainly reduces the credibility of this article (unless I screwed up, of course).

  • 🐧 nixCraft Sep 14, 2007 @ 4:32

    baka.tom / jason,

    The FAQ has been updated for latest kernel. It should work now. Let me know if you have any more problems.

    bourne, thanks for pointing out user level or group level filelimit option.

    I appreciate all feedback.

  • jackson Dec 1, 2007 @ 1:54

    Red Hat configuration requires the following line to be added for /etc/security/limits to work.

    in /etc/pam.d/login:

    session required pam_limits.so
    • Michael Sep 16, 2015 @ 16:09

      This worked for me! It should be added into the original article if possible.

  • Arstan Apr 23, 2008 @ 2:23

    I’m trying to make 8192 on Ubuntu 7.10, adding

    * soft nofile 8192
    * hard nofile 8192

    doesn’t work, but when i do change * to username(lets say root) it applies.

    So how to change it system wide?

    • Tozz Aug 28, 2014 @ 11:21

      In Debian (and thus Ubuntu) the wildcard does _not_ work for root, only for regular users. If you want to change the limits for root you have to create additional lines specifically for root.

  • shankar Jun 19, 2009 @ 15:58

    you could use the following command to check if the given change reflected

    #ulimit -n -H

    that gives the hard value…

  • anonymous Jul 4, 2009 @ 13:33

    Funny, file-max has little to do with the number of open file descriptors :/

  • joseph bloe Jul 4, 2009 @ 13:34

    Funny, this has little to do with the number of file descriptors….. it merely reflects the number of open files one may have.

    open file != file descriptor :/

  • joseph bloe Jul 4, 2009 @ 13:36

    unfortunately, open file != file descriptor. These are two distinct and separate things.

    somehow only confusion has been added here.

    • V Mar 15, 2017 @ 10:35

      yes, but in which one we should be concerned? If you know article author made a mistake it should be good to narrow it if you know where he made it.

  • hywl51 Oct 28, 2009 @ 5:44

    “Use the following command command to display maximum number of open file descriptors:
    cat /proc/sys/fs/file-max


    75000 files normal user can have open in single login session. ”

    I think 75000 should mean the whole system can support 75000 open files at most , not for per user login.

  • Adam HP Feb 25, 2010 @ 9:18

    To clear up any confusion for increasing the limit on Red Hat 5.X systems:

    # echo "fs.file-max=70000" >> /etc/sysctl.conf
    # sysctl -p
    # echo "* hard nofile 65536" >> /etc/security/limits.conf
    # echo "session required pam_limits.so" >> /etc/pam.d/login
    # ulimit -n -H

    In summary set your max file descriptors to a number higher than your hard security ‘nofile’ limit to leave room for the OS to run.

  • Ramesh Mar 22, 2010 @ 2:41

    Can anyone explain all the attributes in ulimit -a and how it impacts the performance of a system?

  • Thamizhananl P Apr 29, 2010 @ 9:11

    In Red hat enterprises linux(RHEL4 and RHEL5) after setting nofile limit we need to do below modification
    In /etc/security/limits.conf file added
    root soft nofile 5000
    root hard nofile 6000

    Edit the /etc/pam.d/system-auth, and add this entry:
    session required /lib/security/$ISA/pam_limits.so

    It worked perfectly for me.
    After this change open a new terminal and issue ulimit -a.
    There you could see the updated file descriptor value for root user.

  • Jeoffrey Jul 9, 2010 @ 1:16

    This is very good. Thanks for the post =)

  • jfd3 Aug 17, 2010 @ 20:14

    What is the difference between a hard limit and a soft limit?

    • Broshep McDaggles Mar 11, 2015 @ 16:41

      A hard limit cannot be increased by a non-root user once it is set; a soft limit may be increased up to the value of the hard limit. (It’s buried deep in the man pages)

  • Francisco Sep 8, 2010 @ 21:15

    Is there any equivalent for MAC OS x (darwin) ??

    • Dirk Sep 11, 2010 @ 10:02

      Hi Francisco,

      when I issue “ulimit -n -H” on Mac OS X 10.6, it says “unlimited”. So I guess you don’t have to worry about it.


      • JP Jun 14, 2013 @ 16:37

        We actually found that to be not true or not something we think it is. We ran a test using Node.JS and there was a limit somewhere in the 240 range. Once we set the ulimit higher we were able to increase the connections.

  • Joel Dec 14, 2010 @ 10:15

    I’m using Linux (Debian Lenny) on a server. I would like to keep my ulimit -n settings.
    The values in /etc/security/limits.conf (soft and hard limits) and in /etc/sysctl.conf have been increased.
    /etc/pam.d/login contains the “session required pam_limits.so” line
    I’ve also put the “ulimit -n 50000” command in .bashrc
    … and after logout/login and/or ssh, ulimit -n still returns 1024! What tricky settings also need to be changed? These incoherent and over-complicated version-dependent settings really make linux unusable. I’d rather write code than waste my time on linux configuration files.

  • Joel Dec 14, 2010 @ 10:18

    In the end, it suddenly worked, without changing anything more. How much time is needed before it’s taken into account? Strange and unreliable…

  • jee Jan 2, 2011 @ 16:19

    I have a problem about “too many open file “, i had changed all parameters,

    but this problem is exist.

    My system have a web application system, that have a dongle, I guess the problem

    maybe caused by dongle .

    Can you help me , thank you !

  • Terry Antonio Jan 25, 2011 @ 0:22

    Good Day Mate
    I was going to leave you in my will for this but the mortgage payments might be to high.
    Anyway long story short I run free radius and on a new server i built and it constantly reports “no db handles” if you check with the freeradius forum this is a common problem and is usually met with the kurt reply “figure out whats using all the descriptors”.
    I had in my mind I had encountered this problem many years ago and it was a matter of increasing the system handles but just could not track it down.
    From your article I found my mysql handles were set at 1024 I increased the soft limit to 5000 and the hl to 10000 and all is well in paradise again. Your bloods worth bottling
    Cheers Terry

  • mahuja Mar 21, 2011 @ 10:29

    >What is the difference between a hard limit and a soft limit?

    A user can decrease his own hard limit. Only root can increase his own hard limit.
    A user can decrease his soft limit, or increase it up to the hard limit; this is the effective limit.

    Say the default is Hard 1000 and soft 500 – that means you can only open 500 files unless you explicitly ask for more by increasing it to 1000. But you can’t get 1001 without root.

    >In the end, it suddenly worked, without changing anything more. How much time is needed before it’s taken into account? Strange and unreliable…

    The one piece of information you’re missing:

    New security settings, also including group memberships, only apply for sessions started after the change was done. Except commands that affect the current session only. A reboot will always clear out all the pre-change sessions that are still running.

  • Marco Smith Mar 23, 2011 @ 8:13

    Hi, Can someone please tell me why the amount of open files increase so rapidly when changing from Single user mode to multi user mode?

    • 🐧 nixCraft Mar 23, 2011 @ 9:27

      More users + More process + More background process == More open files

      • Marco Smith Mar 23, 2011 @ 9:32

        And what if you run, multi user mode from console, i.e without Windows X? Like for instance, what process/processes run that makes the number on increase SO high….like from 160 to 900 open files????

        • Thamizhannal Mar 23, 2011 @ 10:01

          Excluding operating system process(es), if any application runs on muti user mode and does not close the opened file, then it would create this issue. Please make sure that if any external application(may be your own’s or deployed one) runs on muti user mode and closed all the files it has opened if any.

  • cgrinds Apr 25, 2011 @ 21:35

    If you have cPanel, check /etc/profile.d/ Although you shouldn’t change it here it’s possibly the root cause of your changes not sticking.

  • pep May 12, 2011 @ 10:51

    Could you please let me know , while we change the ulimit for root using “ulimit -n” , which is the configuration file in which it reflects.

  • Mukesh Sep 6, 2011 @ 7:44

    Thanks, it worked for me on Red Hat Linux 5.

  • khupcom Sep 21, 2011 @ 3:19

    Not working on my centos box.
    # su - nginx
    # ulimit -Hn
    # ulimit -Sn

    • Soham Dec 8, 2011 @ 10:30

      Try in /etc/security/limits.conf file and at the end of the file add this.

      * – nofile

      Logout and login again and check ulimit -n

  • Leonardo Sep 27, 2011 @ 17:10


    Worked in RH 5.

    It’s very important to know that if you have set max-files for user xx, you must start the application as user xx; if you start it as another user, the changes do not take effect.

  • Nick Oct 26, 2011 @ 1:05

    You could also try

    ulimit -n xxxxx now

    where xxxxx is the number, e.g. 16384. Worked for me on CentOS 5.7.

    • Killjoy Nov 23, 2011 @ 19:13

      Nice one, worked fine for me too on Debian 6.

    • Colin Aug 10, 2012 @ 19:22

      That did it! This should be added to the main article, as just editing the limits.conf did not take immediate effect for me on Debian 6.

  • kamalakar Nov 1, 2011 @ 12:55

    I am trying to increase ulimit on Ubuntu
    even after restarting changes are not reflected

    • mrcool Dec 26, 2011 @ 7:40

      You must edit the config file. If you increase the file descriptor value through the terminal, the change is only temporary; to increase your ulimit permanently you have to set it in the config file.

  • Baysie May 1, 2012 @ 8:20

    Modifying the /etc/security/limits.conf file didn’t seem to work for me at first – but then I realised that I needed to specify the domain for the user(s) as our systems use Active Directory authentication.

    For anyone else who uses Active Directory authentication, you should use something like:

    @DOMAIN+username          soft      nofile         10000
    @DOMAIN+username          hard    nofile         10000
    • 🐧 nixCraft May 1, 2012 @ 9:28

      Interesting. I never used AD based auth.

      Appreciate your comment.

  • pau Aug 23, 2012 @ 8:10

    Even if you already have this in /etc/pam.d/login, you may also need to add the following to /etc/pam.d/common-session:

    session required pam_limits.so

  • Rahul.Patil Aug 29, 2012 @ 9:03

    how many maximum number of process(nproc) user can run in linux ?

  • Derek Shaw Oct 17, 2012 @ 5:35

    using ubuntu 64-bit, 10.04 desktop
    this fix does not work for the root user using the “wildcard” format.
    it does work for all other users.

    The complete solution for this config (as pointed out by Arstan, on April 23, 2008) is as follows
    (copied from my /etc/security/limits.conf)
    #added for samba testparm error
    # rlimit_max: rlimit_max (1024) below minimum Windows limit (16384)
    * hard nofile 32000
    * soft nofile 16384
    root hard nofile 32000
    root soft nofile 16384

    I also edited /etc/pam.d/common-session, but this had no effect on the root user.

    I haven’t bothered to find out if smbd is run by another user (that is, whether the wildcard entries are really needed). In any event, the warning is quite misleading. According to a bug report/discussion on the website (from none other than Jeremy Allison), the message should really be along the lines that “Samba has increased your open file descriptors to meet the requirements of windows”


  • subhankar sengupta Oct 27, 2012 @ 22:02

    Considering you have edited file-max value in sysctl.conf and /etc/security/limits.conf correctly; then:
    edit /etc/pam.d/login, adding the line:
    session required /lib/security/pam_limits.so

    and then do
    #ulimit -n unlimited

    Note that you may need to log out and back in again before the changes take effect

  • Donny Apr 29, 2013 @ 20:05

    file descriptor soft limits in /etc/security/limits.conf don’t work with csh
    # Maximum open files

    mqm hard nofile 10240
    mqm soft nofile 10240

    I had to add the below line in the .cshrc file.
    limit descriptors 10240

  • LarrH Sep 6, 2013 @ 1:01

    It should be noted… that, if you set limits in /etc/security/limits.conf for a specific user… that user MAY have to logoff and then back on in order to ‘see’ the changes using the ‘ulimit -Hn’ or ‘ulimit -Sn’ command.

    Running processes, for the particular user, may have to be stopped and restarted as well.

  • Garrett N Sep 6, 2013 @ 14:45

    This article was very helpful. Thank you for taking the time to write it.

  • dragun0v Apr 27, 2014 @ 4:03

    I have been running into an issue “too many open files”. I know that I can modify the max open files for that process/user but I have been trying to figure out a way to identify the files being opened in real time meaning if a process is opening files – just list the new files being opened.

    To do that I attach strace to that process and observe what new files are being opened. I noticed that that the number of open files are increasing but strace isn’t reporting any open/read/etc system calls. Strace is just stuck at futex(0x2ba8050739d0, FUTEX_WAIT, 16798, NULL. Can you suggest a way to figure out files being opened by the process in real time?

    Also, is this the only way to identify the reason for why a process/user is opening files/sockets?

  • Jayhoonova Jul 17, 2014 @ 14:18

    Limit is 65536 but around 8000 open file.. I got too many open file error….
    :( Please help
    net.ipv4.ip_forward = 0
    net.ipv4.conf.default.rp_filter = 1
    net.ipv4.conf.default.accept_source_route = 0
    kernel.sysrq = 0
    kernel.core_uses_pid = 1
    net.ipv4.tcp_syncookies = 1
    net.bridge.bridge-nf-call-ip6tables = 0
    net.bridge.bridge-nf-call-iptables = 0
    net.bridge.bridge-nf-call-arptables = 0
    kernel.msgmnb = 65536
    kernel.msgmax = 65536
    kernel.shmmax = 68719476736
    kernel.shmall = 4294967296
    fs.file-max = 65536

  • Steven Aug 5, 2014 @ 11:29

    Note that on RHEL/CentOS 7 the limits.conf file is no longer used by applications with a systemd startup script; for example, MySQL will give you errors like this regardless of the limits you set in sysctl.conf or limits.conf:

    2014-08-05 13:24:40 1721 [Warning] Buffered warning: Changed limits: max_open_files: 1024 (requested 5000)
    2014-08-05 13:24:40 1721 [Warning] Buffered warning: Changed limits: table_cache: 431 (requested 2000)

    You now have to directly edit the service startup script, for example /usr/lib/systemd/system/mysqld.service

    and add LimitNOFILE=60000 to the [Service] section. I lost 2h with that.

    • MoZoY Oct 5, 2014 @ 2:10

      Oh man! Thank you Sir!
      I lost much more hours than you on that… the only work-around i had in place so far was to `SET GLOBAL max_connection = n` right after service start.

  • rakesh Sep 16, 2014 @ 9:02

    we have very high traffic web site and php-fpm normally breaches 65K limit, how can we increase file open limit for www-data user to 999999

  • Eric J. Jan 26, 2015 @ 14:48

    # sysctl -w fs.file-max=100000
    Is there any reason I should set this to 100000 instead of 1000000 or 100000000? What is the idea behind having a maximum open file limit at all?

  • Tnk Apr 19, 2015 @ 14:02

    Hi,on my cpanel server with suphp,with i type su – nobody,it shows “This account is currently not available.”,how can i tune my cpanel server ? thanks

  • david d. Nov 6, 2015 @ 15:56

    when the system boots initially, daemons start with their own startup scripts in init.d and ignore the limits.conf settings. the changes made in the limits.conf and pam.d files are only applied in new sessions, such as when a user starts or restarts a service. One solution i have found to set the nofile limit on start up for a service is

    1. make your changes in the /etc/security/limits.conf

    * soft nofile 9991
    * hard nofile 9992

    root soft nofile 9891
    root hard nofile 9892

    2. chkconfig servicename off
    # this is to stop the system from starting the service

    3. in /etc/rc.local

    su - root service httpd start

    # add a start command to start the service through here
    # this will act the same as if root was to start the service
    # which will apply the changes to the service on startup
    # i choose httpd just as an example

    i know its not the most secure method, and probably just a band-aid, but its saved
    me a lot of frustration

  • david d. Nov 6, 2015 @ 16:09

    also to check if the service received the new file limit after rebooting

    – ps aux | grep servicename
    – track the pid for the service
    – cat /proc/pid#/limits
    – and it should show the new limits

  • Pete Dec 16, 2015 @ 15:38

    Does anyone else see that a hard nofile set to unlimited breaks pam.d login process on RH6? I can set soft to unlimited, but as soon as I set hard to unlimited login even for root is broken.

  • cis Dec 26, 2015 @ 0:28

    The same happens on a Centos6 VM.
    I only have a root user there and I cannot log in to the machine. Any solution before recreate the VM ?

  • Daniel Chay Oct 22, 2016 @ 3:01

    You may have to be certain about the user name: I had httpd set, but it’s actually set in httpd.conf as centos.

  • Jason Nov 17, 2016 @ 18:35

    In the Limits.conf can the Soft limit be defined as a value greater that the Hard limit? Like below..
    user soft nofile 65536
    user hard nofile 65534

  • Ashwin Apr 6, 2017 @ 20:16

    What determines the upper bound in the /etc/security/limits.conf.

    In my attempt I could see that if I set the value above 1024000, the value is treated as garbage and ulimit -n returns the default 1024 value.
