SSH: Rotate backup shell script to remove directories (old backup files)

Posted in Categories Backup, Data recovery, Howto, RedHat/Fedora Linux, Security, Shell scripting, Sys admin, Tips, Ubuntu Linux last updated October 9, 2007

Most of the time you have limited space on the remote SFTP/SSH backup server. Here is a script that periodically cleans up old backup files from the server, i.e., it removes old dated directories.


The script automatically calculates dates relative to today's date. By default it keeps only the last 7 days of backups on the server. You can easily increase or decrease this limit. In order to run the script you must meet the following criteria:

  • A remote SSH server with permission to execute the rm command
  • SSH keys for password-less login (see how to set up RSA / DSA keys for password-less login)
  • An accurate date and time on the local system (see how to synchronize the clock using the ntpdate NTP client)
  • The remote backup directory must be in dd-mm-yyyy or mm-dd-yyyy format. For example, a daily MySQL backup should be stored in /mysql/mm-dd-yyyy format.
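The password-less login prerequisite can be sketched as follows (assuming OpenSSH; the key path and server name are placeholders for illustration):

```shell
# Generate an RSA key pair with an empty passphrase (needed for unattended
# cron jobs) and install the public key on the backup server.
# backup.example.com is a placeholder - use your own server name.
ssh-keygen -t rsa -N "" -f "$HOME/.ssh/backup_key" -q
ssh-copy-id -i "$HOME/.ssh/backup_key.pub" vivek@backup.example.com
```

After this, `ssh -i ~/.ssh/backup_key vivek@backup.example.com` should log in without prompting for a password.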

Sample Script Usage

Run the script as follows:
./rot.ssh.script 7 /mysql "rm -rf"

  • 7 : Remove 7 days' worth of old backup directories (everything older than the one-week window that is kept)
  • /mysql : Base directory to clean up. If today's date is 9/Oct/2007, it will remove the directories /mysql/02-10-2007, /mysql/01-10-2007, …, /mysql/26-09-2007, /mysql/25-09-2007. In other words, the script keeps only the last 7 days of backups on the remote SFTP/SSH server.
  • rm -rf : Command to run on the directory structure
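The date arithmetic the script relies on can be sketched with GNU date (the directory format and the 7-day window match the script's defaults):

```shell
# Print the dated directory names that fall outside the kept window.
# With a start of 7 and a 7-day argument, that is everything from
# 7 to 14 days ago (8 directory names in total).
START=7
LIMIT=$((START + 7))
for d in $(seq $START $LIMIT); do
  date --date="$d days ago" +"%d-%m-%Y"
done
```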

Sample Shell Script

Install the following script:

#!/bin/bash
# rot.ssh.script - remove old dated backup directories from a remote SSH server
if [ "$#" == "0" ]; then
  echo "$0 upper-limit path {command}"
  exit 1
fi

### SSH Server setup (change these example values) ###
SSH_USER="vivek"                 # remote ssh username
SSH_SERVER="backup.example.com"  # remote ssh server
START=7                          # keep this many days of backups
DIR_FORMAT="%d-%m-%Y" # DD-MM-YYYY format
#DIR_FORMAT="%m-%d-%Y" # MM-DD-YYYY format

## do not edit below ##
LIMIT=$(expr $START + $1)

## default CMD and path ##
CMD="rm -rf"
SSH_PATH="/backup"

[ "$3" != "" ] && CMD="$3" || :
[ "$2" != "" ] && SSH_PATH="$2" || :

DAYS=$(for d in $(seq $START $LIMIT); do date --date="$d days ago" +"${DIR_FORMAT}"; done)
for d in $DAYS
do
  ssh ${SSH_USER}@${SSH_SERVER} ${CMD} ${SSH_PATH}/$d
done

Run the above script via cron (crontab entries):
@daily /path/to/rot.ssh.script 7 "/html" "rm -rf"
@daily /path/to/rot.ssh.script 7 "/mysql" "rm -rf"

Quickly backup / dump MySQL / Postgres database to another remote server securely

Posted in Categories Backup, FreeBSD, Howto, Linux, MySQL, Postgresql, RedHat/Fedora Linux, Sys admin, Tips, UNIX last updated October 1, 2007

Using the UNIX pipe concept, one can dump a database to another server securely over the SSH protocol. All you need is remote execution rights for the 'dd' command over SSH. This lets you run database dumps across an encrypted channel.

Dump Postgres Database using ssh

Use the pg_dump command:
pg_dump -U USERNAME YOUR-DATABASE-NAME | ssh user@remote-server "dd of=/pgsql/$(date +'%d-%m-%y')"

Dump MySQL Database using ssh

Type the following command:
mysqldump -u USERNAME -p'PASSWORD' YOUR-DATABASE-NAME | ssh user@remote-server "dd of=/mysql/$(date +'%d-%m-%y')"
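For large databases it may help to compress the stream in transit; here is a sketch of the same pipeline with gzip added (the host, credentials, and target path are placeholders):

```shell
# Same idea with gzip in the pipeline; the dump is stored compressed
# on the remote side. Host, credentials and paths are examples only.
mysqldump -u USERNAME -p'PASSWORD' YOUR-DATABASE-NAME \
  | gzip -c \
  | ssh user@remote-server "dd of=/mysql/$(date +'%d-%m-%y').sql.gz"
```

To restore, decompress with gzip -d on the remote side before feeding the dump to mysql.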

Copy hard disk or partition image to another system using a network and netcat (nc)

Posted in Categories Backup, CentOS, Data recovery, Debian Linux, File system, FreeBSD, Gentoo Linux, Howto, RedHat/Fedora Linux, Suse Linux, Sys admin, Tips, Ubuntu Linux last updated August 12, 2007

The netcat utility (nc command) is considered a TCP/IP swiss army knife. It reads and writes data across network connections using the TCP or UDP protocol. It is designed to be a reliable “back-end” tool that can be used directly or easily driven by other programs and scripts. At the same time, it is a feature-rich network debugging and exploration tool, since it can create almost any kind of connection you would need and has several interesting built-in capabilities.

I install the netcat package on systems I administer so that I can use its debugging and network exploration capabilities.

One of my favorite uses is migrating data between two servers' hard drives using netcat over a network. It is very easy to copy a complete drive image from one server to another.

You can also use ssh for the same purpose, but encryption adds its own overhead. This is a tried and trusted method (hat tip to karl).

Make sure you have a backup of all important data.

Install netcat

It is possible that nc may not be installed by default under Redhat / CentOS / Debian Linux.

Install nc under Red Hat / CentOS / Fedora Linux

Use yum command as follows:
# yum install nc

Loading "installonlyn" plugin
Loading "rhnplugin" plugin
Setting up Install Process
Setting up repositories
rhel-x86_64-server-vt-5   100% |=========================| 1.2 kB    00:00
rhel-x86_64-server-5      100% |=========================| 1.2 kB    00:00
Reading repository metadata in from local files
Parsing package install arguments
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Downloading header for nc to pack into transaction set.
nc-1.84-10.fc6.x86_64.rpm 100% |=========================| 6.9 kB    00:00
---> Package nc.x86_64 0:1.84-10.fc6 set to be updated
--> Running transaction check

Dependencies Resolved

 Package                 Arch       Version          Repository        Size
 nc                      x86_64     1.84-10.fc6      rhel-x86_64-server-5   56 k

Transaction Summary
Install      1 Package(s)
Update       0 Package(s)
Remove       0 Package(s)

Total download size: 56 k
Is this ok [y/N]: y
Downloading Packages:
(1/1): nc-1.84-10.fc6.x86 100% |=========================|  56 kB    00:00
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing: nc                           ######################### [1/1]

Installed: nc.x86_64 0:1.84-10.fc6

Debian / Ubuntu Linux netcat installation

Simply use apt-get command:
$ sudo apt-get install netcat

WARNING! These examples may result in data loss; ensure there are good backups before doing this, as using these commands the wrong way can be dangerous.

How do I use netcat to copy hard disk image?

Our sample setup

HostA – source server (disk /dev/sda)
HostB – destination server (disk /dev/sdb)

Your task is to copy HostA's /dev/sda to HostB's /dev/sdb using the netcat command. First, log in as the root user on both systems.

Command to type on hostB (receiving end ~ write image mode)

You need to open a port on hostB using netcat; enter:
# netcat -p 2222 -l |bzip2 -d | dd of=/dev/sdb

  • -p 2222 : Specifies the source port nc should use, subject to privilege restrictions and availability. Make sure port 2222 is not used by another process.
  • -l : Used to specify that nc should listen for an incoming connection rather than initiate a connection to a remote host.
  • bzip2 -d : Decompresses the stream sent by hostA. The sender compresses the image using the Burrows-Wheeler block sorting text compression algorithm and Huffman coding, which speeds up the network transfer ( -d : force decompression mode)
  • dd of=/dev/sdb : /dev/sdb is the destination hard disk. You can also specify a partition such as /dev/sdb1

Command to type on hostA (send data over a network ~ read image mode)

Now all you have to do is start copying the image. Again, log in as root and enter:
# bzip2 -c /dev/sda | netcat hostB 2222
OR use the IP address:
# bzip2 -c /dev/sda | netcat HOSTB-IP 2222

This process takes its own time.
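The compress-pipe-decompress round trip can be sanity-checked locally before touching real disks; this sketch uses a small scratch file in place of /dev/sda (paths are examples):

```shell
# Create a 64 KB scratch "image", push it through the same bzip2/dd
# pipeline used above (minus the network hop), and verify the copy
# is byte-for-byte identical.
dd if=/dev/urandom of=/tmp/disk.img bs=1024 count=64 2>/dev/null
bzip2 -c /tmp/disk.img | bzip2 -d | dd of=/tmp/disk.copy 2>/dev/null
cmp /tmp/disk.img /tmp/disk.copy && echo "round trip OK"
```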

A note about latest netcat version 1.84-10 and above

If you are using the latest nc / netcat version, the above syntax will generate an error: it is an error to use the -l option in conjunction with the -p, -s, or -z options. Additionally, any timeouts specified with the -w option are ignored. So use the nc command as follows.

On hostB (receiving end), enter:
# nc -l 2222 > /dev/sdb
On hostA (sending end), enter:
# nc hostB 2222 < /dev/sda
OR use the IP address:
# nc HOSTB-IP 2222 < /dev/sda

From the sending machine (hostA), connect to the listening nc process on hostB at port 2222, feeding it the device (/dev/sda) which is to be transferred. You can add bzip2 compression as follows.
On hostB, enter:
# nc -l 2222 | bzip2 -d > /dev/sdb
On hostA, enter:
# bzip2 -c /dev/sda | nc hostB 2222


How do I improve performance?

As suggested by anonymous user:

You should definitely use bs=16M or something like that; otherwise the copy will take forever. Copying a 300 GB hard drive over a 1 Gbps cross-over cable took about 1.5 hours using bs=16M. Without this option, the same copy would have taken about 7 hours.

In short, use the command as follows:
# netcat -p 2222 -l |bzip2 -d | dd of=/dev/sdb bs=16M

Updated for accuracy.

Redhat Enterprise Linux securely mount remote Linux / UNIX directory or file system using SSHFS

Posted in Categories Backup, CentOS, File system, Howto, Linux, RedHat/Fedora Linux, Security, Sys admin, Tips last updated May 9, 2007

You can easily mount a remote server's file system, or just your own home directory, using the special sshfs and fuse tools.

FUSE – Filesystem in Userspace

FUSE is a Linux kernel module also available for FreeBSD, OpenSolaris and Mac OS X that allows non-privileged users to create their own file systems without the need to write any kernel code. This is achieved by running the file system code in user space, while the FUSE module only provides a “bridge” to the actual kernel interfaces. FUSE was officially merged into the mainstream Linux kernel tree in kernel version 2.6.14.

You can use SSHFS to access a remote filesystem through SSH; there are even FUSE filesystems that let you use a Gmail account to store files.

The following instructions were tested on CentOS, Fedora Core, and RHEL 4/5 only, but they should work on any other Linux distro without a problem.

Step # 1: Download and Install FUSE

Visit the fuse home page and download the latest source code tarball. Use the wget command to download the fuse package:
# wget
Untar source code:
# tar -zxvf fuse-2.6.5.tar.gz
Compile and Install fuse:
# cd fuse-2.6.5
# ./configure
# make
# make install

Step # 2: Configure Fuse shared libraries loading

You need to configure dynamic linker run-time bindings using the ldconfig command so that the sshfs command can load the shared libraries installed under /usr/local/lib:
# vi /etc/ld.so.conf
Append the following path:
/usr/local/lib
Run ldconfig:
# ldconfig
# ldconfig

Step # 3: Install sshfs

Now fuse is installed and ready to use. Next you need sshfs to access and mount a file system over ssh. Visit the sshfs home page and download the latest source code tarball. Use the wget command to download the sshfs package:
# wget
Untar source code:
# tar -zxvf sshfs-fuse-1.7.tar.gz
Compile and install sshfs:
# cd sshfs-fuse-1.7
# ./configure
# make
# make install

Mounting your remote filesystem

Now that you have a working setup, all you need to do is mount a filesystem under Linux. First create a mount point:
# mkdir /mnt/remote
Now mount a remote server filesystem using sshfs command:
# sshfs vivek@server.example.com: /mnt/remote

  • sshfs : the SSHFS command name
  • vivek@server.example.com: – vivek is the ssh username and server.example.com is the remote ssh server
  • /mnt/remote : the local mount point

When prompted, supply the password for vivek (the ssh user). Make sure you replace the username and hostname as per your requirements.

Now you can access your filesystem securely over the Internet or your LAN/WAN:
# cd /mnt/remote
# ls
# cp -a /ftpdata . &

To unmount the file system, just type either:
# fusermount -u /mnt/remote
# umount /mnt/remote


Linux recover deleted files with lsof command – howto

Posted in Categories Backup, Data recovery, Linux, RedHat/Fedora Linux, Suse Linux, Ubuntu Linux last updated November 17, 2006

Almost two years back I wrote about recovering a deleted text file with the grep command under UNIX or Linux.

Michael Stutz shows us how to recover deleted files using the lsof command.

From the article:
There you are, happily playing around with an audio file you’ve spent all afternoon tweaking, and you’re thinking, “Wow, doesn’t it sound great? Lemme just move it over here.” At that point your subconscious chimes in, “Um, you meant mv, not rm, right?” Oops. I feel your pain — this happens to everyone. But there’s a straightforward method to recover your lost file, and since it works on every standard Linux system, everyone ought to know how to do it.

Briefly, a file as it appears somewhere on a Linux filesystem is actually just a link to an inode, which contains all of the file’s properties, such as permissions and ownership, as well as the addresses of the data blocks where the file’s content is stored on disk. When you rm a file, you’re removing the link that points to its inode, but not the inode itself; other processes (such as your audio player) might still have it open. It’s only after they’re through and all links are removed that an inode and the data blocks it pointed to are made available for writing.

This delay is your key to a quick and happy recovery: if a process still has the file open, the data’s there somewhere, even though according to the directory listing the file already appears to be gone.
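The recovery trick described above can be sketched end to end on any Linux system (the file names are examples, and tail stands in for the audio player that still has the file open):

```shell
# 1) A process (here: tail -f) holds the file open.
echo "precious audio data" > /tmp/song.wav
tail -f /tmp/song.wav >/dev/null 2>&1 &
PID=$!
sleep 1                      # give tail time to open the file
rm /tmp/song.wav             # oops - meant mv, typed rm

# 2) lsof -p $PID would list the file as "(deleted)"; its data is still
#    reachable through the /proc file descriptor entry of that process.
for fd in /proc/$PID/fd/*; do
  case "$(readlink "$fd")" in
    "/tmp/song.wav (deleted)") cp "$fd" /tmp/song.recovered ;;
  esac
done
kill $PID
cat /tmp/song.recovered      # prints: precious audio data
```

The key point is step 2: the copy must happen before the last process holding the file open exits, because only then is the inode actually freed.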


However, recovering files under Linux is still hard work for new admins. I highly recommend backing up files regularly and storing backups offsite.

True Tales of Data Disaster and Remarkable Feats of Data Recovery

Posted in Categories Backup, Data recovery, News last updated November 16, 2006

BBC News has published a list of the top 10 data disasters. Hard drives kept in dirty socks and the dangers of oiling your PC feature in the top 10 list.

Ontrack Data Recovery has unveiled its annual Top Ten list of remarkable data loss disasters in 2006. Taken from a global poll of Ontrack data recovery experts, this year’s list of data disasters is even more incredible when you consider that in every case cited, Ontrack successfully recovered the data.

It is recommended that you always follow these backup tips.

Use amanda to backup your Linux server – Howto

Posted in Categories Backup, Data recovery, Howto, Tips last updated November 4, 2006

Amanda is one of the best open source backup tools for Linux.

The Advanced Maryland Automatic Network Disk Archiver (AMANDA), is a backup system that allows the administrator to set up a single master backup server to back up multiple hosts over network to tape drives/changers or disks or optical media.

Novell Cool Solutions has published a small how to about Amanda. From the article:
Amanda is simple to use but tricky to configure. It is worth sticking with Amanda until you get a fully working system. Don’t be put off by the lack of a slick GUI interface or the need to configure the software via a console shell, once it is configured Amanda does its job very efficiently.

The configuration files for Amanda are stored in the /etc/amanda directory. Each configuration that you create goes into a separate directory within /etc/amanda. There is a sample configuration in the “example” directory that you can use as a jumping off point for configuring Amanda


* Background
* Installing Amanda
* Configuring Amanda
* Creating file and directories
* Preparing tapes for use with Amanda
* Checking the configuration
* Performing the backup
* Checking the backup
* Automating the backup
* Conclusions

Read more, Using Amanda to Backup Your Linux Server at

How Linux Live CD could save your life

Posted in Categories Backup, Data recovery, Linux last updated October 26, 2006

Yet another reason to carry a Live Linux cd 🙂

From the article:
Everyone’s worst nightmare; the normal comforting hum of your computer is disturbed by clicking, pranging, banging…

It happens to everyone because it’s inevitable (hard drives are mechanical, as sure as a car will break down your hard drive will fail eventually). However, no matter how often you see it you never quite get used to it happening, the heartache of all the files you lose forever because you were “just about to back it up, honestly”. This is not a matter of explaining to you how you can best avoid data loss or how to protect against your hard drive dying, this is an article outlining how Ubuntu (or any other LiveCD available distro) could save your life