You need to use a disk scrubbing program such as scrub. It overwrites hard disks, files, and other devices with repeating patterns intended to make recovering data from these devices more difficult.
Tutorial details | |
---|---|
Difficulty | Intermediate |
Root privileges | Yes |
Requirements | scrub + screen + ssh |
Time | N/A |
Install scrub on RHEL / CentOS / Fedora Linux
Type the following yum command to install the scrub software:
# yum install scrub
Sample outputs:
Loaded plugins: product-id, protectbase, rhnplugin, subscription-manager
Updating certificate-based repositories.
Unable to read consumer identity
0 packages excluded due to repository protections
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package scrub.x86_64 0:2.2-1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package       Arch         Version          Repository                   Size
================================================================================
Installing:
 scrub         x86_64       2.2-1.el6        rhel-x86_64-server-6         34 k

Transaction Summary
================================================================================
Install       1 Package(s)

Total download size: 34 k
Installed size: 0
Is this ok [y/N]: y
Downloading Packages:
scrub-2.2-1.el6.x86_64.rpm                               |  34 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : scrub-2.2-1.el6.x86_64                                      1/1
Installed products updated.
  Verifying  : scrub-2.2-1.el6.x86_64                                      1/1

Installed:
  scrub.x86_64 0:2.2-1.el6

Complete!
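Once the installation completes, it is worth confirming the package and its man page are in place before scrubbing any real data. A quick sanity check (the version string shown matches the 2.2-1.el6 build installed above; yours may differ):
# rpm -q scrub
scrub-2.2-1.el6.x86_64
# man scrub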
How do I use scrub?
The syntax is:
scrub fileNameHere
For example, to scrub a file named file.txt, run:
scrub file.txt
Sample outputs:
scrub: using NNSA NAP-14.x patterns
scrub: padding file.txt with 4029 bytes to fill last fs block
scrub: scrubbing file.txt 4096 bytes (~4KB)
scrub: random  |................................................|
scrub: random  |................................................|
scrub: 0x00    |................................................|
scrub: verify  |................................................|
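Note that scrub overwrites the file but leaves the (now useless) file itself in place. If you also want the file unlinked after it has been overwritten, the scrub man page documents a remove option; a minimal example, assuming your scrub build supports -r (verify with man scrub first):
# scrub -r file.txt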
How do I select the pattern to write?
Use the following syntax:
scrub -p nnsa|dod|bsi|old|fastold|gutmann|random|random2 fileNameHere
Where the -p option selects the patterns to write (from the man page):

- nnsa selects patterns compliant with NNSA Policy Letter NAP-14.x (the default).
- dod selects patterns compliant with DoD 5220.22-M.
- bsi selects patterns recommended by the German Center of Security in Information Technologies (http://www.bsi.bund.de).
- old selects pre-version 1.7 scrub patterns; fastold is old without the random pass.
- gutmann is the 35-pass sequence described in Gutmann's paper (see the STANDARDS section of the man page for details and the citation).
- random is a single random pass; random2 is two random passes.
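For example, to overwrite a file with two quick random passes instead of the default NNSA sequence (the file name is just illustrative):
scrub -p random2 file.txt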
However, from the man page:
The effectiveness of scrubbing regular files through a file system will be limited by the OS and file system. File systems that are known to be problematic are journaled, log structured, copy-on-write, versioned, and network file systems. If in doubt, scrub the raw disk device.

In other words, you need to scrub the entire raw device, such as /dev/sdb or /dev/sdvf.
Sample command:
scrub -p dod /dev/sdvf
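Scrubbing a raw device is destructive and irreversible, so double-check the device name before running the command. The device names used in this article (/dev/sdb, /dev/sdvf) are only examples; list the disks on your own system first:
# cat /proc/partitions
# fdisk -l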
How do I use scrub over the remote ssh session?
First, log in using the ssh client. Next, start a terminal multiplexer such as screen or tmux (a screen manager that multiplexes a physical terminal between several processes), so the scrub keeps running even if your ssh connection drops:
$ ssh -i my.aws.appkey.pem user@ec2-46-51-239-52.ap-northeast-1.compute.amazonaws.com
$ sudo -s
# screen
# scrub -p dod /dev/sdvf
You can now close the ssh session and log out; the scrub keeps running inside screen. It may take several hours or even days, depending upon the size of your EBS storage volume.
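To check on progress later, ssh back in and reattach to the running screen session (standard screen options; Ctrl-a d detaches a session you are currently attached to):
$ ssh -i my.aws.appkey.pem user@ec2-46-51-239-52.ap-northeast-1.compute.amazonaws.com
$ sudo -s
### list running/detached screen sessions ###
# screen -ls
### reattach to the session running scrub ###
# screen -r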
How do I securely delete selected files?
Use the find command as follows over the ssh+screen session:
### delete all php files ###
# find /path/to/ebs/mount/location -type f -iname \*.php -print0 | xargs -0 -I{} scrub {}
### delete all *.sql files ###
# find /path/to/ebs/mount/location -type f -iname \*.sql -print0 | xargs -0 -I{} scrub {}
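Keep in mind that scrub on its own overwrites the file contents but does not unlink the files. If you also want them removed after overwriting, the same -r option shown earlier can be used (again, verify your scrub build supports it); a sketch for the PHP case:
### overwrite and then remove all php files ###
# find /path/to/ebs/mount/location -type f -iname \*.php -print0 | xargs -0 -I{} scrub -r {}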
Editor’s note: The commands discussed here should work with any Linux distribution and any block-level storage, whether local, removable, RAID-based, cloud-based, and so on.
8 comments so far
I believe AWS would have a proper mechanism in place to prevent this. Otherwise nobody would use this service at all.
Are you positive AWS doesn’t make backup copies of instances? That’s one of the drawbacks of cloud services – nothing is guaranteed to be permanently deleted.
I have been with a few VPS providers. They have told me that when an account is cancelled, they destroy the entire virtual machine and it is no longer possible to recover the data. Is it possible to recover the data from another VPS on the same server?
As you can see from this research, it’s better to delete confidential data instead of just relying on your cloud provider to do it:
http://www.contextis.co.uk/research/blog/dirtydisks/
I suggest you take a look at aws.amazon.com/security: all decommissioned space is zeroed. This is still relevant for a number of other cloud services I can think of, however, and also for any standard hosting you may manage.
I still think it’s better to securely remove any data you might have, than to rely on your provider.
BTW:
dd if=/dev/urandom of=/your/disk
is also a way to securely remove any traces of your data.
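For reference, a more complete form of that dd approach might look like this (assuming /dev/sdb is the disk to wipe; bs=1M is only a throughput convenience, and status=progress needs a reasonably recent GNU coreutils dd):
### overwrite the whole disk with pseudo-random data ###
# dd if=/dev/urandom of=/dev/sdb bs=1M status=progress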
Hi,
Thanks, useful article
Even better: architect the solution better? Go private cloud if your data shouldn’t be “wandering” around.
Also, you can scrub the disks and also let AWS do it later. No harm in that. It can be CPU intensive though; not sure if you are paying for IOPS?