The GNU version of the tar archiving utility (and other older versions of tar) can be used over the network through an ssh session. Do not use the telnet command; it is insecure. You can use Unix/Linux pipes to create archives.
Let us see some examples of how to use the tar command over ssh securely to create archives on a Linux or Unix-like system.

Syntax
The syntax is as follows to ssh into a box and run the tar command:
ssh user@box tar czf - /dir1/ > /destination/file.tar.gz
OR
ssh user@box 'cd /dir1/ && tar -cf - file | gzip -9' >file.tar.gz
The following command backs up the /wwwdata directory to the host dumpserver.nixcraft.in (IP 192.168.1.201) over an ssh session.
# tar zcvf - /wwwdata | ssh root@dumpserver.nixcraft.in "cat > /backup/wwwdata.tar.gz"
OR
# tar zcvf - /wwwdata | ssh root@192.168.1.201 "cat > /backup/wwwdata.tar.gz"
Sample outputs:
tar: Removing leading `/' from member names
/wwwdata/
/wwwdata/n/nixcraft.in/
/wwwdata/c/cyberciti.biz/
....
Password:
The default first SCSI tape drive under Linux is /dev/st0. You can read more about the tape drive naming conventions used under Linux. You can also use the dd command for clarity:
# tar cvzf - /wwwdata | ssh root@192.168.1.201 "dd of=/backup/wwwdata.tar.gz"
It is also possible to dump the backup to a remote tape device:
# tar cvzf - /wwwdata | ssh root@192.168.1.201 "cat > /dev/nst0"
OR you can use mt to rewind the tape and then dump it using the cat command:
# tar cvzf - /wwwdata | ssh root@192.168.1.201 "mt -f /dev/nst0 rewind; cat > /dev/nst0"
You can restore the tar backup over an ssh session:
# cd /
# ssh root@192.168.1.201 "cat /backup/wwwdata.tar.gz" | tar zxvf -
If you wish to use the above commands in a cron job or script, consider setting up SSH keys to get rid of the password prompts.
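For unattended use, a minimal sketch might look like the following (the key path, schedule, and archive name are made up for illustration, and note that % has to be escaped as \% inside a crontab):
$ ssh-keygen -t ed25519 -N "" -f ~/.ssh/backup_key
$ ssh-copy-id -i ~/.ssh/backup_key.pub root@192.168.1.201
Then add a crontab entry, for example to run at 02:30 every night:
30 2 * * * tar zcf - /wwwdata | ssh -i ~/.ssh/backup_key root@192.168.1.201 "cat > /backup/wwwdata-$(date +\%F).tar.gz"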
Some more examples:
$ tar cvjf - * | ssh vivek@nixcraft "(cd /dest/; tar xjf -)"
$ tar cvzf - mydir/ | ssh vivek@backupbox "cat > /backups/myfile.tgz"
$ tar cvzf - /var/www/html | ssh vivek@server1.cyberciti.biz "dd of=/backups/www.tar.gz"
$ ssh vivek@box2 "cat /backups/www.tar.gz" | tar xvzf -
$ tar cvjf - * | ssh root@home.nas02 "(cd /dest/; tar xjf - )"
Make sure you read the tar, ssh, and bash man pages for more info:
$ man tar
$ man bash
$ man ssh
A note about SSHFS – a FUSE filesystem
You can use sshfs to mount a remote directory and then run the tar command:
mkdir /data/
sshfs vivek@server1.cyberciti.biz:/ /data/
tar -zcvf /data/file.tar.gz /home/vivek/
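When you are done, unmount the FUSE filesystem again (the helper is called fusermount3 on some newer distros):
fusermount -u /data/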



Comments
Why use the ssh command twice, or is that a typo?
The first one is with the hostname and the second one is with the IP address.
ssh ssh root@192.168.1.201 "cat /backup/wwwdata.tar.gz" | tar zxvf -
Why use the ssh twice here? (I believe this was the original question, too.)
Daniel/Mike,
That was a typo. Thanks for heads up!
What is SQUID?
The use of this and your examples seems rather untypical. Why pipe it through "ssh" if you're just transferring a tar.gz to the other side? You could just create the tar.gz and scp it.
Also, the use of "cat" in your examples is completely unnecessary.
I came here hoping to find an example like this (i.e. transferring a directory recursively over ssh). So, for the next guy:
tar cvf - /data | ssh otherhost tar xvf -
the next guy thanks you very much
So how exactly would you tar up a 10GB partition with less than 1GB of space left? The original author's solution works very nicely, as does yours. They are just used for two separate things.
tar cvf - /path/to/source/files | ssh otherhost "cd /path/to/destination/directory && tar xvf -"
Hi Vincent,
You may want to do this to get around limitations in older implementations of SSH that do not allow for large file transfers (larger than 2GB). I had recently run into this problem and the only workable solution was to tar over ssh to get around it.
Hi Vincent,
you could create a .tgz or whatever locally and then use scp. The problem with large amounts of data is that scp is awfully slow.
Cheers,
valor
rsync -avzH -e ssh /wwwdata root@192.168.1.201:/backup/
The whole point of this command is to help you when you have a filesystem full and need to tar files but don’t have enough space to store the tars. You can pipe the tar through ssh so that later you may also delete the files and place the tar into the original filesystem.
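To illustrate that workflow, here is a hedged sketch with made-up host and paths: stream the archive to the remote box first, then reclaim the local space (after verifying the remote archive) and, if needed, copy the archive back later:
tar czf - /bigdir | ssh user@backuphost "cat > /backups/bigdir.tar.gz"
rm -rf /bigdir
scp user@backuphost:/backups/bigdir.tar.gz /bigdir.tar.gz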
I don't know how to use tar over the network. I am on the machine 192.168.200.178 and want to transfer the /mydata folder over the network to the destination system 192.168.200.200. Can anyone help me?
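A possible answer, assuming you can ssh as root to 192.168.200.200 and want the data under a hypothetical /backup directory there, is either to store an archive or to extract directly on the destination:
tar czf - /mydata | ssh root@192.168.200.200 "cat > /backup/mydata.tar.gz"
OR
tar cf - /mydata | ssh root@192.168.200.200 "cd /backup && tar xvf -"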
The opposite direction, which is the more common case: you want to pull data from the server, as opposed to making the server initiate the connection and push the data:
ssh gdr@server.net "tar jcf - /srv/gdr/gdr.geekhood.net/gdrwpl" > gdrwpl_backup.tar.bz2
This might be useful if you are behind a firewall.
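If you would rather extract locally instead of keeping a compressed archive, the same pull direction works (reusing the path from the example above):
ssh gdr@server.net "tar cf - /srv/gdr/gdr.geekhood.net/gdrwpl" | tar xvf -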
Vincent:
The method of piping tar through SSH is faster than SCP not because SCP is slow (the transfer rate would theoretically be exactly the same), but because it saves a lot of time by parallelizing the tar.gz creation with the transfer. This is even more true if the source system only has one hard drive (or the only hard drive with enough free space to do the tar.gz is the same as the one you want data from).
If you have a few GB of loose files to copy into a .tar.gz on the remote side (say, for doing a backup), piping the output through ssh is faster because the source hard drive can just read continuously the whole time and the destination can write at the same time. If you're creating the .tar.gz on the same hard drive, you take a huge penalty for all the seeking it has to do; it has to read a bit, write it to the tar, read a bit more, write it to the tar, etc.
Even if you have a second hard drive (or a crapload of RAM), you’re still taking longer if you make the .tar.gz first because there’s creation + transfer time instead of just transfer time.
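To make the comparison concrete, here is a hedged sketch with made-up paths. Two steps (local archive first, then copy):
tar czf /tmp/www.tar.gz /var/www/html
scp /tmp/www.tar.gz user@backupbox:/backups/
One pass (read, compress, and transfer concurrently, with no temporary file):
tar czf - /var/www/html | ssh user@backupbox "cat > /backups/www.tar.gz"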
Sorry for being dumb, but… so what exactly is the most efficient command to get local data to the remote server?
Hi,
is there a way to write a shell script that automatically writes data to tape at the end of every day?
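One way to do it (a hedged sketch: the host, tape device, and path are made up, and it assumes key-based ssh auth so cron can run it unattended) is a small script plus a crontab entry:
#!/bin/bash
# rewind the remote tape, then stream tonight's archive onto it
ssh root@192.168.1.201 "mt -f /dev/nst0 rewind"
tar czf - /wwwdata | ssh root@192.168.1.201 "cat > /dev/nst0"
Save it as, say, /usr/local/bin/tape-backup.sh and schedule it with a crontab line such as 55 23 * * * /usr/local/bin/tape-backup.sh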
or using netcat
$ tar czvf - /var/spool | nc -l 12345
$ nc host 12345 | tar xzvf -
it’s not secure, but it doesn’t require much
Hi,
thank you for your script snippets; one of these is currently backing up some gigabytes across the network. But I noticed a typo, an unnecessary "ssh" behind some of the pipe symbols. For example:
# tar cvzf - /wwwdata | ssh ssh root@192.168.1.201 "cat > /dev/nst0"
Here's one that worked for me recently:
I had to copy all the files from server A to a directory on server B (in order to have a full replica of A), using a man-in-the-middle server (because that IP was the only one allowed to connect).
The trouble was that I only had sudo rights on the first server, and absolutely all ports were closed (both ways) except incoming 22 for my IP and incoming 80 and 443 for serving web. There was no way to ssh out of that box (the firewall blocked outgoing SYN packets).
First I had to "initialize" sudo so that I wouldn't be asked for a password later within the pipe, where I couldn't provide it (you recognize that situation by the infinite delay at the beginning while no files appear on the other side).
ssh -Ct serverA "sudo hostname"
Password:
-C uses compression,
-t forces assigning a terminal (RHEL 5.1 by default requires terminal)
I guess this can be achieved also by just sshing in and issuing the same command there. Hostname is just a random command to get sudo to ask for password (which it remembers for the next 15 minutes).
Now for the fun part:
ssh -Ct "stty -onlcr; sudo tar -cpf - -X /tmp/exclusion.list / 2> /dev/null" | ssh serverB "cd /tmp; tar cvpf -"stty -onlcr fixes a problem that arises with using forced terminal: for every CR (0x13) an extra LF character will be injected (0x13) for proper displaying on terminal. Only we’re actually not using a terminal but passing the bitstream through the ssh tunnel to tar.
-p preserves files’ permissions
-X specifies an exclusion file (directories I don’t want to be copied like /dev, /proc and /sys)
/ is what I want to be tarred :)
2> /dev/null sends tar commentary to the darkest of places. Without it you’ll get tar’s own chatter within the data stream.
Hope this will be useful to someone (like myself, later on)
Typo fix:
1) ssh -Ct serverA "sudo hostname"
2) ...for every CR (0x13) an extra LF character will be injected (0x10) for proper display on the terminal.
Typo fix 2:
I left the server out:
ssh -Ct serverA "stty -onlcr; sudo tar -cpf - -X /tmp/exclusion.list / 2> /dev/null" | ssh serverB "cd /tmp; tar xvpf -"
Another way would be using tar on both ends, as in the example below:
tar czvf - /somedir | ssh user@host "cat - | tar xzvf - -C /outputdir"
Anderson, the use of "cat" in your example is completely unnecessary:
tar czvf - /somedir | ssh user@host "tar xzvf - -C /outputdir"
I recently needed to copy an entire directory structure from one machine to another, preserving symlinks, owners, and dates. I have done this tens of times before with tar and ssh, but this time it didn't work.
Although I didn't use the -h option, tar nevertheless followed the symlinks and did not recreate them on the other side. The distro was Ubuntu 8.04. When I tried it with a small set of files it worked, though, but I needed the entire tree. I never figured out why it acted like that.
I was finally able to solve my problem by using rsync, and after the initial setup it worked very well. So for anyone stumbling over the same rock, here are some examples of getting it done with rsync:
http://www.commandlinefu.com/commands/matching/rsync/cnN5bmM=/sort-by-votes
Vivek, There are at least 3 typos of duplicated ssh (ssh ssh).
And why not tar jxf user@remote.com:some_archive.tar.bz2 ? It will do the ssh for you, no need to do the ssh yourself (I don’t remember if this was in Debian Lenny or Ubuntu Lucid… maybe older versions/other versions too).
Is there some simple method to copy a file through some kind of "ssh chain"?
Assume that I'm at "homepc", can connect via SSH to "remote1", and from "remote1" I can only connect via SSH to "remote2".
What is the one-liner to copy a file from "remote2" to "homepc"?
Let's say it's "remote2:/repository/somefile.war" (I googled around but did not find an easy method).
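One possible one-liner, assuming an OpenSSH client new enough to have ProxyJump (7.3+) and key-based auth or agent forwarding in place:
scp -o ProxyJump=remote1 remote2:/repository/somefile.war .
With an older client you can nest the ssh calls instead:
ssh remote1 "ssh remote2 cat /repository/somefile.war" > somefile.war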
Thanks a lot! Really helpful.
In my case I wanted to untar. The solution is :
ssh serveur "cat file.tar" | tar -xvf -
In my experience, "rsync over ssh" is much faster than "tar | ssh". Both are faster than scp, though. The only advantage of "tar | ssh", IMHO, is not needing to have rsync on the remote host…
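For reference, a typical rsync-over-ssh invocation roughly equivalent to the tar examples above (paths are illustrative):
rsync -avz -e ssh /wwwdata/ root@192.168.1.201:/backup/wwwdata/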
GNU tar:
tar jcvf user@host:/somedir/file.tar.bz2 --rsh-command=/usr/bin/ssh /sourcedir/
Easier:
tar -czf - bla | ssh otherhost "(cd /somewhere/to/restore && tar -xzf -)"
I want to do exactly this to a Windows machine running ssh.
I'm trying something like the examples above, but there is no "cat" in Windows, and the similar commands (echo, type, more) don't seem to take input from stdin. Any ideas?
I think it is better to use -p when making the backup.
@Dan: http://www.giyf had told you this link:
http://gnuwin32.sourceforge.net/packages/coreutils.htm
Works perfectly with BitVise SSH Server on Windows over here. BTW: adding the install dir to the PATH makes things easier on the target.
HTH!
If, for one reason or another, you call ssh with the `-t` param (as mentioned by Henno) or have set `RequestTTY yes` in your ssh_config, tar will give strange errors like
`tar: Skipping to next header
tar: Exiting with failure status due to previous errors`
or
`tar: A lone zero block at 21625
tar: Exiting with failure status due to previous errors`
These go away by adding the ssh parameter `-T` (disable pseudo-tty allocation) or, if you need `-t`, by prepending `stty -onlcr; ` to the remote command as a workaround (thx Henno!).
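For example, forcing pseudo-tty allocation off so the tar stream passes through untouched:
tar czf - /wwwdata | ssh -T root@192.168.1.201 "cat > /backup/wwwdata.tar.gz"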
Hi,
I am taking a backup using this command:
tar cvzf - /wwwdata | ssh root@192.168.1.201 "cat > /dev/nst0"
When I am extracting the file, it turns out to be corrupted.
The issue is: how can I extract the file from the tape /dev/nst0?
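One way to read it back, mirroring the write command above (a sketch; if the stream still comes back corrupted, check the pseudo-tty issue mentioned a few comments up):
ssh root@192.168.1.201 "mt -f /dev/nst0 rewind; cat /dev/nst0" | tar xzvf -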