NFS Stale File Handle error and solution

Sometimes NFS can run into odd problems. For example, NFS-mounted directories may contain stale file handles. If you run a command such as ls or vi, you will see an error like:
$ ls
.: Stale File Handle

First, let us try to understand the concept of a stale file handle. The book Managing NFS and NIS, 2nd Edition (a good book if you would like to master NFS and NIS) defines file handles as follows:
A filehandle becomes stale whenever the file or directory referenced by the handle is removed by another host, while your client still holds an active reference to the object. A typical example occurs when the current directory of a process, running on your client, is removed on the server (either by a process running on the server or on another client).

So this can occur if the directory is modified on the NFS server but the directory's modification time is not updated.
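If you have several NFS mounts, a quick way to see which ones are affected is to probe each mount point with stat, which fails with ESTALE ("Stale file handle") on a stale mount. A minimal sketch, assuming a Linux client where NFS entries can be read from /proc/mounts:

```shell
#!/bin/sh
# Probe every NFS mount point listed in /proc/mounts.
# stat fails with "Stale file handle" (ESTALE) on a stale mount.
awk '$3 ~ /^nfs/ { print $2 }' /proc/mounts | while read -r mnt; do
    if stat "$mnt" >/dev/null 2>&1; then
        echo "OK    $mnt"
    else
        echo "STALE $mnt"
    fi
done
```

Any mount reported as STALE is a candidate for a forced unmount and remount.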

How do I fix this problem?

(a) The best solution is to remount the directory from the NFS client using the mount command:
# umount -f /mnt/local
# mount -t nfs nfsserver:/path/to/share /mnt/local

The first command (umount -f) forcefully unmounts the NFS mount point /mnt/local.
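Putting the two steps together, here is a small sketch of a remount helper; the server name and paths are placeholders, and it falls back to a lazy unmount (-l) when the forced unmount fails, e.g. because the mount point is busy:

```shell
#!/bin/sh
# remount_nfs SHARE MOUNTPOINT
# Force-unmount a stale NFS mount and mount it again.
# SHARE and MOUNTPOINT are placeholders, e.g.:
#   remount_nfs nfsserver:/path/to/share /mnt/local
remount_nfs() {
    share="$1"
    mnt="$2"
    # -f forces the unmount even if the server is unreachable;
    # -l (lazy) detaches the mount point if it is still busy.
    umount -f "$mnt" 2>/dev/null || umount -l "$mnt" 2>/dev/null || return 1
    mount -t nfs "$share" "$mnt"
}
```

Run it as root; if both unmount attempts fail, the helper returns non-zero without attempting the mount.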

(b) Alternatively, try mounting the NFS directory with the noac option. However, I don't recommend the noac option because of its performance cost; moreover, checking files on an NFS filesystem referenced by file descriptors (i.e. the fcntl and ioctl families of functions) may lead to inconsistent results due to the lack of a consistency check in the kernel, even when noac is used.
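For reference, this is what a noac mount looks like; the server name and paths are placeholders. The same option can be set in /etc/fstab:

```
# One-off mount with attribute caching disabled (run as root):
#   mount -t nfs -o noac nfsserver:/path/to/share /mnt/local

# Equivalent /etc/fstab entry:
nfsserver:/path/to/share  /mnt/local  nfs  noac  0  0
```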

44 comments
  • Nelson Nov 25, 2017 @ 11:18

    I had the same problem.

    When using df:

    “df: `/mnt/nfs_share_name’: Stale NFS file handle”

    But it was not possible for me to reboot the server.

    The solution is to unmount all shares with stale NFS file handles and then mount them again.

    But “mount -t nfs nfsserver:/path/to/nfs/share /mnt/destination” was giving me this error:
    “mount.nfs: Stale NFS file handle”

    After this, I mounted the NFS share with the following command:
    “mount -t nfs -o ro,soft,rsize=32768,wsize=32768 NFS_HOST:/path_to_nfs_share /mnt/mount_destination”

    In my case, I use read-only mounts and don’t want them mounted at boot, so there is no reference to them in /etc/fstab.

    If you need to read and write, you have to use “rw” instead of “ro”.

    Hope it helps.


  • edwin Mar 10, 2017 @ 13:10

    I’ve found this error started when I mounted the NFS share on an AIX machine and created a directory. I just rebooted and it worked again.

  • Mario Becroft Jan 12, 2016 @ 11:25

    So much misinformation in this thread! It is not true at all that rebooting the NFS server should lead to stale file handles. Indeed, NFS was designed to be a stateless protocol where the server could be rebooted, or indeed migrated to a different host, without interruption to clients (besides a delay in response to NFS requests while the server is down).

    If this is not happening then your NFS configuration is broken. There are numerous ways this breakage can happen on the Linux NFS server. One way is if you do not specify the fsid option in your /etc/exports, and your NFS server decides to automatically assign a different fsid portion of the file handle after a reboot. Whether this will be a problem depends on your configuration.

    Another way things can break is if you start up the NFS daemon on the NFS server and make it available on the network before all filesystems have been exported. It’s crucial when using NFS in an HA environment that, after an NFS failover, the virtual HA IP address is only added to a server once all exports have been loaded.

    About the only situation in a correctly configured NFS environment where you will get stale NFS file handle and have to remount filesystems on the client is if the server was restored from a filesystem-level (not block-level) backup, leading to files having different inode numbers and therefore different NFS file handles. There is no simple way around this issue.

    This just touches the surface of some of the possible issues that could lead to the type of problems OP describes. Having to unmount then remount filesystems is *not* the expected behaviour of a correctly configured NFS environment.

  • martin Nov 10, 2015 @ 4:34

    You need to reboot. This resolves the issue of files listed as ????????? ? ? ? ? ?

  • Amar Joshi Feb 3, 2015 @ 10:05

    I am facing an issue on my NFS filesystem. When I try to access directories, it displays an unknown error:

    STGDPMWEB1:/shareddata # cd STP
    -bash: cd: STP: Unknown error 521
    I am using the command below to mount… can anyone help please?

    mount -t nfs4 NSDLSTAG:/HFS/shareddata /shareddata/

    STGDPMWEB1:/shareddata # ls -lart
    ls: cannot access uploadfiles: Unknown error 521
    ls: cannot access downloadfiles: Unknown error 521
    ls: cannot access mail: Unknown error 521
    ls: cannot access MessagesExportedFromProjectWeb: Unknown error 521
    ls: cannot access STP: Unknown error 521
    ls: cannot access abcd: Unknown error 521
    total 5
    d????????? ? ? ? ? ? uploadfiles
    d????????? ? ? ? ? ? mail
    d????????? ? ? ? ? ? downloadfiles
    -????????? ? ? ? ? ? abcd
    d????????? ? ? ? ? ? STP
    d????????? ? ? ? ? ? MessagesExportedFromProjectWeb
    drwxr-xr-x 28 root root 4096 Feb 3 11:35 ..
    drwxrwxrwx 7 root bin 512 Feb 3 13:21 .

  • Anderson Oct 13, 2014 @ 10:31

    Thank you! You helped me to fix my problem.

  • Joshua Jul 21, 2014 @ 6:02

    It worked for me. I forcefully unmounted and mounted again.
    Thank you.

  • netc May 6, 2014 @ 11:24

    My solution is reboot. Nothing else helped me. Ubuntu 12.04.4

  • Debian user Apr 28, 2014 @ 12:48

    I have tried most of these proposals on Debian — nothing works but reboot.

  • Anthony Mar 20, 2014 @ 16:49

    Thank you. My yum was locking up, and when I ran a trace it turned out to be a stale NFS mount.

    strace yum -y update {package}

    I unmounted and remounted, and it worked again. That explains the yum and server issues.

  • Amit Mar 6, 2014 @ 4:54

    I had this problem, and I was sure that the files/directories were still available on the host.
    So I just went up one directory and came back to the current directory, and everything worked after that.

  • Rhys Feb 1, 2014 @ 9:01

    I found that I could not umount -f /path; I was always getting the stale message.
    When I looked at my server, I could see in the exportfs -av output that the IP address listed was no longer the one I was connecting from. Looking at my router, I found that I had a dynamic DHCP address. I added a reservation for my MAC address on my old IP address, reconnected my wireless, then mounted, and everything worked as before.

  • Paul Freeman Jan 6, 2014 @ 10:17


    Try running:

    exportfs -f

    on the server first; if that does not work, then on the client try:

    mount -o remount  /path

    if that fails with device is busy/in-use, find the offending processes with:

    fuser -fvm /path

    and retry the remount.

    • Michael Burgener Mar 3, 2016 @ 1:57

      I’ve been using NFS for the better part of 20 years and have run into this problem off and on but never found a solution until I came across this post.

      exportfs -f on the server did the trick for me.

      Thanks! This just got added to my toolbox of sysadmin tricks.

    • Alan Jan 2, 2017 @ 23:08

      Have this error from time to time. Tried all solutions mentioned up to here.

      Thanks Paul Freeman, exportfs -f was new to me and solved the problem without restarting NFS or the server.

  • NoFingers Sep 30, 2013 @ 10:02

    Running `ls -alh` always works for me. I just run it on the parent directory of the one causing the error. After that, I can access the directory/file without any problem.

  • Nicola Sep 12, 2012 @ 11:54

    I got the same issue (mount.nfs: Stale NFS file handle) the first time I attempted to mount a shared folder.
    I don’t really have anything to unmount, and nothing is in a busy state.

    Any ideas appreciated.


  • Davinken Aug 31, 2012 @ 22:08

    This problem is haunting me in newer Fedora 17 installs. Autofs works fine, but it is not timing out the mounts after the resource is left, so they seem to stay alive until something like the remote host being rebooted happens.
    Then the stale mount thing…
    But after forcing the umount (which seems to fix things partially; no more stale mount alerts), now I get:
    Too many levels of symbolic links

    Any hint on this ?

  • passingby Aug 4, 2012 @ 17:56

    If you are running the Nautilus file manager, you’ll probably find the problem is Nautilus, not NFS at all. Try “killall nautilus” from any shell prompt. It works for me, so far…

  • Tommi Jul 1, 2012 @ 19:55

    Thanks to Nirmal Pathak. -f was not enough…
