Why do the df and du commands report different output?

You will rarely notice something like this on a FreeBSD or Linux desktop home system or your personal UNIX or Linux workstation. However, on a production UNIX server you will sometimes notice that df (display free disk space) and du (display disk usage statistics) report different output. Usually df shows bigger disk usage than du.

You will see this problem whenever a Linux or UNIX inode has been deallocated while a process still holds the file open. If you are using a clustered file system (such as GFS), you may run into this scenario quite often.

Note: the following examples are FreeBSD and GNU/Linux specific.

The following is the normal output of df and du for the /tmp file system:
# df -h /tmp
Output:

Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ad0s1e    496M     22M    434M     5%    /tmp

Now type the du command:
# du -d 0 -h /tmp/
Output:

22M    /tmp/

Why is there a mismatch between df and du outputs?

However, sometimes df reports different output (bigger disk usage), for example:
# df -h /tmp/
Output:

Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ad0s1e    496M     39M    417M     9%    /tmp

Now type the du command:
# du -d 0 -h /tmp/
Output:

 22M    /tmp/

As you can see, df and du report different output. Many new UNIX admins get confused by this (39M vs 22M).

An open file descriptor is the main cause of such misleading numbers. For example, if a file called /tmp/application.log is held open by a third-party application or by a user, and the same file is then deleted, df and du report different output: du no longer counts the file, but the file system cannot release its blocks until the last open descriptor is closed. You can use the lsof command to verify this:
# lsof | grep tmp
Output:

bash   594  root  cwd   VDIR  0,86      512      2 /tmp
bash   634  root  cwd   VDIR  0,86      512      2 /tmp
pwebd  635  root  cwd   VDIR  0,86      512      2 /tmp
pwebd  635  root  3rW   VREG  0,86 17993324     68 /tmp (/dev/ad0s1e)
pwebd  635  root   5u   VREG  0,86        0     69 /tmp (/dev/ad0s1e)
lsof   693  root  cwd   VDIR  0,86      512      2 /tmp
grep   694  root  cwd   VDIR  0,86      512      2 /tmp
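(The columns above are lsof's usual COMMAND, PID, USER, FD, TYPE, DEVICE, SIZE, NODE and NAME fields; the header line itself does not contain the string "tmp", so grep filters it out.)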

You can see that a 17993324-byte (about 17 MB) file is still open on /tmp by pwebd (our in-house software) even though I deleted it by accident; a shortcut for spotting such files is sketched after the walkthrough below. You can recreate the above scenario on your Linux, FreeBSD, or other Unix-like system as follows:

First, note down the /home file system output:
# df -h /home
# du -d 0 -h /home

If you are using Linux, use du as follows:
# du -s -h /home

Now create a big file:
# cd /home/user
# cat /bin/* >> demo.txt
# cat /sbin/* >> demo.txt

Log in on another console and open the file demo.txt using the vi text editor:
# vi /home/user/demo.txt

Do not exit from vi (keep it running).

Go back to the first console and remove the file demo.txt:
# rm demo.txt
Now run both du and df to see the difference:
# df -h /home
# du -d 0 -h /home

If you are using Linux, use du as follows:
# du -s -h /home

Finally, switch back to the terminal where vi is still running and quit it.

Once vi closes the file, the last reference to the deleted inode is released, the root cause of the problem is gone, and the du and df outputs should match again.
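A shortcut for spotting such files, assuming your lsof supports it, is the +L1 option, which lists only open files whose link count has dropped below one, i.e. files that have been unlinked (deleted) but are still held open by some process:
# lsof +L1

On Linux, lsof also appends the marker (deleted) to such entries, so the following works as well:
# lsof -nP | grep '(deleted)'

Either way, the PID column tells you which process needs to close the file (or be restarted) in order to release the space.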

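If the process cannot be restarted (say, a busy production service), Linux offers a workaround: truncate the deleted file through its /proc file descriptor entry. This is only a sketch; the PID (1234) and descriptor number (5) below are placeholders you would take from the lsof output, and truncating a file that a program is still writing to may confuse that program, so use it with care:
# ls -l /proc/1234/fd | grep deleted
# : > /proc/1234/fd/5

FreeBSD has no direct equivalent of this trick, so there closing the descriptor or restarting the process remains the usual fix.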

Comments

  • NykO18 August 17, 2007, 8:18 am

    Thank you so much, because your solution (although strange at first) worked perfectly on one of our servers that was reporting 5.1 GB instead of 4.5 GB. Unfortunately, repeating the same operation on the second server that has the same problem didn’t work. I’m still looking for a solution.

  • Bu October 25, 2007, 2:19 pm

    Hello, I have the same problem, but the solution you proposed just doesn’t work for me.
    My /var file system is quite full: 2.7 GB of 2.8 GB according to the “df” command, while “du” reports only 628 MB… Nothing strange is pointed out by “lsof”; only a few files of less than 1 MB showed up.
    Pretty annoying…
    Thanks in advance for any help.

  • pankaj February 4, 2008, 11:49 am

    hi

    I have the same problem with Tru64 UNIX.

    I have a fileset mounted as /usr which shows 3021256 KB used with df -k, while du -k shows only 1648760 KB.

    Please give me a solution to resolve this problem.

  • Martin July 10, 2008, 12:47 pm

    Thanks a lot for this hint! It worked perfectly on my machine.

  • arnike March 12, 2009, 11:02 pm

    Hi,
    I have exactly the same problem. Does anybody know how to solve it?

  • sonny June 16, 2009, 9:41 pm

    I have a problem: my graphical mode is down, apparently due to not enough space to load it. When I type df -h it shows me this:
    /dev/sda1 36G 35G 0 100% /
    varrun 244M 104K 244M 1% /var/run
    varlock 244M 0 244M 0% /var/lock
    procbususb 244M 88K 244M 1% /proc/bus/usb
    udev 244M 88K 244M 1% /dev
    devshm 244M 0 244M 0% /dev/shm
    lrm 244M 33M 211M 14% /lib/modules/2.6.20-15-generic/volatile
    which shows I do not have enough space left for the graphical process to run.

    Then, right at the root, I type this:
    sudo du -sk *|sort -rn
    31170836 home
    2226844 usr
    2091700 var
    151540 lib
    17036 boot
    10788 etc
    6084 sbin
    4912 bin
    1324 root
    104 dev
    24 tmp
    16 lost+found
    12 media
    12 fdir1
    8 opt
    4 srv
    4 prueba3
    4 mnt
    4 initrd
    0 vmlinuz
    0 sys
    0 proc
    0 initrd.img
    0 cdrom
    which shows that my home partition is the one causing the problem.
    Next I do this:
    cd /home
    sudo du -sk *|sort -rn
    31169040 user1
    940 user2
    816 user3
    20 user4
    16 user5

    which shows me that the directory user1 is using lots of space.
    Then I do this:
    cd user1
    sudo du -sk *|sort -rn
    and it shows me this

    808840 Desktop
    67712 archive1
    30168 archive2
    16112 archive3
    11556 archive4
    4116 archive5
    1504 archive6
    1024 archive7
    408 archive8
    168 archive9
    116 archive10
    32 archive11
    16 archive12
    16 archive13
    12 archive14
    12 archive15
    4 archive16
    4 archive17
    4 archive18
    4 archive19
    4 archive20
    4 archive21
    4 archive22
    0 archive23
    My problem is that the heavy files don’t seem to be showing up anywhere so that I can erase them. How can I locate them?

  • Matt September 4, 2009, 5:47 pm

    Thank you, this was very helpful.

  • Chetan Rane September 14, 2009, 4:55 pm

    Hi,

    I have a similar problem. On our server we have an LVM partition which df -h shows as:
    /dev/mapper/VG01-LV01 549G 514G 7.3G 99% /u01
    It shows 99% used, whereas if I do du -sh the output is as follows:
    du -sh /u01
    39G /u01/

    We checked with the lsof command and it shows a deleted file of big size.
    My question is how to resolve the problem and free up the space.
    Will a simple reboot solve it? As it is a production system I cannot just reboot the server.

    Thanks in advance.

    –Chetan

  • jnicol September 17, 2009, 5:26 pm

    Awesome, learn something new every day!

    lsof on my CentOS 5 server includes a handy note if a file with an open FD has been deleted:

    # lsof -n -P | grep deleted
    rsync 29911 root 3r REG 8,17 15496725683 26230786 /an/old/file (deleted)

    Kill the process, problem solved!

    • rommy December 31, 2010, 6:27 am

      Thanks, the problem is solved!

  • Chetan Rane September 18, 2009, 1:42 pm

    Hi,

    My problem is solved now. I restarted the mysql service, and it cleared up the space on the file system.

    Thanks
    Chetan

  • Jonas August 12, 2010, 5:30 pm

    Thank you! This article was very useful to me.

  • kunal January 2, 2011, 9:52 am

    Thanks for the wonderful solution…

  • Ednardo Lobo February 28, 2011, 12:11 pm

    Is it possible to copy (cp) or link (ln) the file’s data after deleting it (demo.txt), while vi is still running? In other words, to label it (i.e., name it) again?

    Thank you very much!

  • Redo May 20, 2011, 2:37 pm

    Superb, very nice info. It worked well in solving my problem.

    thanks!

  • Thiyagarajan July 19, 2011, 12:27 pm

    Thank you for this wonderful stuff!

  • tim August 26, 2011, 4:30 am

    df and du measure two different things….

    du reports the space used by files and folders, and even this is more than the file size. A few quick experiments on my system show that 4K is the minimum disk space a file can occupy.

    df reports the space used by the file system. This includes the overhead for journals and inode tables and such.

  • sathish December 13, 2011, 10:42 am

    My question: if I have 3 volume groups vg0, vg1, vg2 and one volume group crashed, how do I get that data back and how do I solve that?
    Thank you.

  • Jim Knoll December 20, 2011, 3:15 pm

    One semi-related thing: other than open files, inode/block allocations can leave phantom disk space reported by du that is not reported by ls.

    If some directory held loads of files, especially with long file names, the OS will allocate extra space to hold all the directory entries. Even after the files are removed with rm *, the size reported by du can stay quite large. The fast way to recoup this space is to remove the now-empty directory.

  • darko January 9, 2012, 7:12 am

    It solved the problem on one server, but the second one has a 12 GB difference between du and df. Nobody mentions this situation because it is very rare: it turned out to be some files hidden under a mounted directory.

  • jackass April 19, 2012, 9:51 am

    Very useful info.

  • Pradeep Siwach November 25, 2012, 5:26 am

    Hi,
    My issue is still not resolved; there is a difference between du and df. Please help.
    df -h
    [root@42HN7R1 /]# df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda1 194G 180G 4.5G 98% /
    /dev/sdb2 2.5T 1.4T 1014G 58% /data
    tmpfs 12G 0 12G 0% /dev/shm

    du result:

    160M aircelfull_uniq_may
    8.0K amit
    4.0K BackupPC
    7.8M bin
    19M boot
    453M bsnlfull_uniq_may
    284K dev
    150M etc
    9.4G home
    336M lib
    27M lib64
    13M loopfull_unique_data_may
    16K lost+found
    8.0K media
    0 misc
    8.0K mnt
    0 net
    4.0K nishant_homebackup
    349M opt
    0 proc
    2.2G root
    39M sbin
    8.0K selinux
    0 sept_dvdrd_sort_uniq.txt
    8.0K srv
    13G swapfile
    0 sys
    4.0K test
    72K tftpboot
    24M tmp

    8.7G usr
    56G var
    90G total

  • nambuntong August 5, 2013, 9:30 am

    I followed the instructions above but they do not work with AIX 6.1; please give me a solution.

    Thanks,

  • Yoav Slama December 11, 2013, 9:41 am

    You are genius, you solved all of my problems.
    Thanks a lot!
