How To Tail (View) Multiple Files on UNIX / Linux Console

The tail command is one of the best tools for viewing log files in real time, using the tail -f /path/to/log.file syntax, on Unix-like systems. The MultiTail program lets you view one or more files like the original tail program. The difference is that it creates multiple windows on your console (using ncurses). This is a dream-come-true program for UNIX sysadmins: you can browse through several log files at once and perform various operations, such as searching for errors, and more.
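
For example, to watch two log files at once in split windows (the log file paths are illustrative; adjust them for your system):
$ multitail /var/log/messages /var/log/secure

Plain GNU tail can also follow several files at the same time, though without the split-window display:
$ tail -f /var/log/messages /var/log/secure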

Linux / UNIX Bash: Copy Set of Files to All Users' Home Directories

If you would like to copy a set of files to all existing users' home directories, use the following scripting trick. It will save lots of manual work.
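
Here is a minimal sketch of such a trick (the source directory and the UID cutoff are assumptions, not the article's exact script):

#!/bin/bash
# Copy every item in $SRC into each regular user's home directory
# and give that user ownership of the copies.
SRC=/root/files-for-users
getent passwd | awk -F: '$3 >= 1000 && $3 < 65534 {print $1 ":" $6}' |
while IFS=: read -r user home; do
    [ -d "$home" ] || continue          # skip accounts without a home directory
    for f in "$SRC"/*; do
        cp -r "$f" "$home/"
        chown -R "$user" "$home/$(basename "$f")"
    done
done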

Shred tip: Securely remove multiple files so no one can recover them again

The shred utility overwrites a file to hide its contents and can optionally delete it as well. The idea is simple: it overwrites the specified FILE(s) repeatedly, in order to make it harder for even very expensive hardware probing to recover the data. By default, the file is overwritten three times with current GNU coreutils (older versions used 25 passes). I've seen cases where law enforcement agencies successfully recovered evidence from a five-year-old, barely working hard disk. Also, when you give up a rented server, consider shredding its files; otherwise the new owner may be able to recover your data, including passwords.

Shred a single file

Securely delete a file called /home/vivek/login.txt:
$ shred -u ~/login.txt

You can add a final overwrite with zeros to hide the shredding:
$ shred -u -z ~/login.txt


  • -u : Deallocate and remove the file after overwriting
  • -z : Add a final overwrite with zeros to hide shredding
  • -n NUM : Overwrite NUM times instead of the default (3 in current GNU coreutils; 25 in older versions)
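
For example, to overwrite a file 50 times before removing it (the file name is just an illustration):
$ shred -u -n 50 ~/login.txt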

Shred multiple files

Let us say you have 100 subdirectories and want to securely get rid of all the files in them:
$ find . -type f -exec shred -u {} \;

If you have thousands of files, consider running the job in the background with nohup, which keeps the command running after you exit the shell prompt (useful over an ssh session):
$ nohup find /var/www/ -type f -exec shred -n 30 -u {} \; &
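
As a faster alternative (a sketch, not part of the original tip), you can batch files with xargs so that shred runs once per batch instead of once per file; -print0 and -0 keep filenames with spaces safe:
$ find /var/www/ -type f -print0 | xargs -0 shred -n 30 -u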

Shred drawbacks

Shred relies on the file system overwriting data in place, so it doesn't go well with:

  • Log-structured or journaled file systems, such as JFS, ReiserFS, XFS, and ext3
  • Compressed file systems
  • RAID-based file systems
  • Network Appliance (NetApp) NFS servers

So how do I wipe on journaling file systems?

There is no simple solution. I’ve tried different techniques.

You can store sensitive data on an ext2 or FAT32 file system, where files can be deleted reliably. According to the shred man page:

In the case of ext3 file systems, the above disclaimer applies (and shred is thus of limited effectiveness) only in data=journal mode, which journals file data in addition to just metadata. In both the data=ordered (default) and data=writeback modes, shred works as usual. Ext3 journaling modes can be changed by adding the data=something option to the mount options for a particular file system in the /etc/fstab file, as documented in the mount man page (man mount).
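
For instance, a hypothetical /etc/fstab entry (the device and mount point are placeholders) that keeps an ext3 file system in the default data=ordered mode, where shred works as usual:

/dev/hda6  /home  ext3  defaults,data=ordered  1 2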

Someone suggested using disk encryption to store data that needs to be wiped: if the encryption key is destroyed, the data on disk becomes unrecoverable without it.
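
A minimal sketch of that approach using LUKS (the device and mapping names are assumptions, not from the original tip). Create and use an encrypted volume:

# cryptsetup luksFormat /dev/sdb1
# cryptsetup luksOpen /dev/sdb1 secretdata
# mkfs.ext2 /dev/mapper/secretdata

Later, wiping the LUKS key slots with luksErase renders everything on the volume unrecoverable, with no need to shred each block:

# cryptsetup luksErase /dev/sdb1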

Run shred on an entire partition:
# shred -n 30 -vz /dev/hdb2

On a remote computer, use nohup so the job keeps running after you log out:
# nohup shred -n 25 -vz /dev/sdb1 &

shred: /dev/sdb1: pass 1/26 (random)...
shred: /dev/sdb1: pass 1/26 (random)...1013MiB/234GiB 0%
shred: /dev/sdb1: pass 1/26 (random)...1014MiB/234GiB 0%
shred: /dev/sdb1: pass 1/26 (random)...1.9GiB/234GiB 0%
shred: /dev/sdb1: pass 1/26 (random)...2.0GiB/234GiB 0%
shred: /dev/sdb1: pass 1/26 (random)...3.0GiB/234GiB 1%
shred: /dev/sdb1: pass 1/26 (random)...3.1GiB/234GiB 1%
shred: /dev/sdb1: pass 1/26 (random)...4.0GiB/234GiB 1%
shred: /dev/sdb1: pass 1/26 (random)...4.1GiB/234GiB 1%
shred: /dev/sdb1: pass 1/26 (random)...5.0GiB/234GiB 2%
shred: /dev/sdb1: pass 1/26 (random)...5.1GiB/234GiB 2%
shred: /dev/sdb1: pass 1/26 (random)...6.1GiB/234GiB 2%

And finally, you can always destroy the hard disk physically, perhaps by dropping it into molten metal ;)

If you just need to securely wipe entire hard disks, use DBAN (Darik's Boot and Nuke).

Do you use any other utility for file shredding or file wiping? Do you have a better solution for file wiping on journaling file systems? Please share your experience in the comments!