How To Use tar Command Through Network Over SSH Session


How do I use the tar command over a secure ssh session running on Linux or a Unix-like system? How do I extract a tar archive over an SSH-based network connection?

The GNU version of the tar archiving utility (and other older versions of tar) can be used through the network over an ssh session. Do not use the telnet or nc commands, as those connections are insecure. You can use Unix/Linux pipes to create archives. Let us see some examples of how to use the tar command over ssh securely to create archives on Linux, BSD/macOS, or Unix-like systems.


Syntax for using tar command over ssh

The syntax is as follows to ssh into a box and run the tar command:
ssh user@box tar czf - /dir1/ > /destination/file.tar.gz
ssh user@box 'cd /dir1/ && tar -cf - file | gzip -9' >file.tar.gz
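The pipeline can be exercised end to end without a remote host at all. In this local sketch (throwaway /tmp paths), tar streams the archive to stdout and a second tar unpacks it from stdin; in real use, either side of the pipe sits behind ssh as shown above.

```shell
# Local round-trip sketch of the tar pipeline (throwaway /tmp paths).
# In real use the right-hand side of the pipe sits behind ssh.
mkdir -p /tmp/tardemo_src /tmp/tardemo_dst
echo "hello" > /tmp/tardemo_src/file.txt
# Create the archive on stdout and unpack it from stdin:
tar czf - -C /tmp/tardemo_src . | tar xzf - -C /tmp/tardemo_dst
cat /tmp/tardemo_dst/file.txt
```

Swapping either side of the pipe for `ssh user@box '...'` gives the commands above; the archive bytes travel over the encrypted channel unchanged.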
The following command backs up the /wwwdata directory to the remote host over an ssh session:
# tar zcvf - /wwwdata | ssh "cat > /backup/wwwdata.tar.gz"
# tar zcvf - /wwwdata | ssh vivek@ "cat > /backup/wwwdata.tar.gz"
Sample outputs:

tar: Removing leading `/' from member names

In this example, archive /data2/ and encrypt it with gpg:
$ tar zcf - /data2/ | gpg -e | ssh vivek@nas03 'cat - > data2-dd-mm-yyyy.tar.gz.gpg'
Please note that you may get an error that reads as follows from the ssh command when using it with sudo or any other command that needs a pseudo-terminal allocation:
sudo: sorry, you must have a tty to run sudo
To avoid this problem pass the -t option to the ssh command:
# tar zcvf - /wwwdata | ssh -t vivek@ "sudo cat > /backup/wwwdata.tar.gz"

Use of tar command over ssh sessions

Copying from the remote machine to the local system is as follows:
$ cd /path/local/dir/
$ ssh 'tar zcf - /some/dir' | tar zxf -

Linux system hard drive backup/mirror using tar and ssh

Let us copy the entire hard disk drive named /dev/sdvf from the local machine to the remote AWS EC2 cloud backup server:
# dd if=/dev/sdvf | ssh backupimg@vpc-aws-mumbai-backup-001 'dd of=prod-disk-hostname-sdvf-dd-mm-yyyy.img'
One can restore a local hard disk drive from the image stored on the remote AWS EC2 cloud backup server by reversing the command:
# ssh backupimg@vpc-aws-mumbai-backup-001 'dd if=prod-disk-hostname-sdvf-dd-mm-yyyy.img' | dd of=/dev/sdvf
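Because a disk image is a single opaque stream, it is worth verifying the copy with a checksum on both ends. Here is a minimal local sketch, with a small file standing in for /dev/sdvf and the ssh hop omitted; in real use you would run `sha256sum` locally and on the backup server and compare.

```shell
# Sketch: verify a dd copy by comparing checksums on both ends.
# /tmp/disk.img stands in for /dev/sdvf; the ssh hop is omitted.
dd if=/dev/zero of=/tmp/disk.img bs=1024 count=16 2>/dev/null
dd if=/tmp/disk.img of=/tmp/disk-copy.img bs=1024 2>/dev/null
src=$(sha256sum /tmp/disk.img | awk '{print $1}')
dst=$(sha256sum /tmp/disk-copy.img | awk '{print $1}')
# Identical hashes mean the image arrived intact:
[ "$src" = "$dst" ] && echo "image verified"
```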

Moving data to a new Linux system

The problem with scp and other commands that copy a directory structure is that symbolic links, special devices, sockets, named pipes, and similar files are not copied. Hence, we use tar over ssh. For example, to copy all data from nuc-box, open the terminal on the x230 laptop and run the ssh command along with the tar command:
$ ssh vivek@nuc-box 'tar czf - /home/vivek' | tar xvzf - -C /home/vivek
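The advantage over scp can be demonstrated locally: a symlink survives the tar pipe as a symlink. This is a sketch with throwaway /tmp paths; in real use the extracting tar sits behind ssh as shown above.

```shell
# Symlinks survive a tar pipe; scp would copy the link target instead.
mkdir -p /tmp/mig_src /tmp/mig_dst
echo "payload" > /tmp/mig_src/real.txt
ln -sf real.txt /tmp/mig_src/link.txt
# -p preserves permissions; the link is recreated, not dereferenced:
tar cpf - -C /tmp/mig_src . | tar xpf - -C /tmp/mig_dst
ls -l /tmp/mig_dst/link.txt
```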

Use tar command through network over SSH session for tape device

The default first SCSI tape drive under Linux is /dev/st0. You can read more about the tape drive naming conventions used under Linux. You can also use the dd command for clarity:
# tar cvzf - /wwwdata | ssh root@ "dd of=/backup/wwwdata.tar.gz"
It is also possible to dump backup to remote tape device:
# tar cvzf - /wwwdata | ssh root@ "cat > /dev/nst0"
One can use the mt command to rewind the tape and then dump the archive to it using the cat command:
# tar cvzf - /wwwdata | ssh root@ "mt -f /dev/nst0 rewind; cat > /dev/nst0"

How to extract tar over ssh

The syntax is pretty simple:
$ cat my-data.tar.gz | ssh "tar zxvf -"
$ cat my-data.tar.gz | ssh "cd /path/to/dest/; tar zxvf -"

In this example, restore a tar backup over an ssh session from the remote machine to a local directory:
# cd /
# ssh root@ "cat /backup/wwwdata.tar.gz" | tar zxvf -

If you wish to use the above commands in cron jobs or scripts, then consider setting up SSH keys to get rid of the password prompts.
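For cron use it also helps to date-stamp each archive so nightly runs do not overwrite one another. A hypothetical helper sketch (the `backup@server` host and the function name below are placeholders, not from the article):

```shell
# Hypothetical helper: build a date-stamped archive name for cron runs.
backup_name() {
  printf 'wwwdata-%s.tar.gz' "$(date +%d-%m-%Y)"
}
# A crontab entry could then run (key-based auth assumed; host is a
# placeholder):
#   tar zcf - /wwwdata | ssh backup@server "cat > /backup/$(backup_name)"
backup_name
```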

How to tar over SSH with progress bar

The pv command allows you to see the progress of data through a pipeline. So the syntax is:
$ cd /dir/to/backup/
$ tar zcf - . | pv | ssh "cat > /backups/box42/backup-dd-mm-yyyy.tgz"
$ cd /tmp/data/
$ tar zcf - . | \
pv | \
ssh vivek@centos7 "cat > /tmp/data.tgz"
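By default pv only shows throughput; to get a percentage and ETA it must be told the total size up front with its -s option. A sketch computing an approximate size with GNU du (the `user@box` target on the commented line is a placeholder):

```shell
# Compute the source size so pv -s can show percent complete.
mkdir -p /tmp/pvdemo && echo "data" > /tmp/pvdemo/a.txt
SIZE=$(du -sb /tmp/pvdemo | awk '{print $1}')   # GNU du: size in bytes
# Real pipeline (host is a placeholder; tar headers make the stream
# slightly larger than SIZE, so the percentage is approximate):
#   tar cf - -C /tmp pvdemo | pv -s "$SIZE" | gzip | ssh user@box "cat > backup.tgz"
echo "$SIZE"
```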

Tar over ssh

Some more examples of tar over ssh:

$ tar cvjf - * | ssh vivek@nixcraft "(cd /dest/; tar xjf -)"
$ tar cvzf - mydir/ | ssh vivek@backupbox "cat > /backups/myfile.tgz"
$ tar cvzf - /var/www/html | ssh "dd of=/backups/www.tar.gz"
$ ssh vivek@box2 "cat /backups/www.tar.gz" | tar xvzf -
$ tar cvjf - * | ssh root@home.nas02 "(cd /dest/; tar xjf - )"

Make sure you read the tar, ssh, and bash man pages for more info:
$ man tar
$ man bash
$ man ssh

A note about SSHFS - a FUSE filesystem

You can use sshfs to mount a remote directory and then run the tar command:
$ mkdir /data/
$ sshfs /data/
$ tar -zcvf /data/file.tar.gz /home/vivek/


You learned how to use the tar command over ssh sessions to transfer archives, files, and images securely. See the GNU tar home page for more info.


Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.


Historical Comment Archive

40 comments

  1. The use of this and your examples seem rather untypical. Why pipe it through “ssh” if you’re just transferring a tar.gz to the other side? You could just create the tar.gz and scp it.

    Also, the use of “cat” in your examples is completely unnecessary.

    I came here hoping to find an example like this (i.e. transferring a directory recursively over ssh). So, for the next guy:

    tar cvf - /data | ssh otherhost tar xvf -

    1. So how exactly would you tar up a 10GB partition with less than 1GB of space left? The original author’s solution works very nicely, as does your solution. They are just used for two separate things.

    2. tar cvf - /path/to/source/files | ssh otherhost "cd /path/to/destination/directory && tar xvf -"

  2. Hi Vincent,

    You may want to do this to get around limitations in older implementations of SSH that do not allow for large file transfers (larger than 2GB). I had recently run into this problem and the only workable solution was to tar over ssh to get around it.

  3. Hi Vincent,

    you could create a .tgz or whatever locally and then use scp. The problem with large amounts of data is that scp is awfully slow.



  4. The whole point of this command is to help you when you have a filesystem full and need to tar files but don’t have enough space to store the tars. You can pipe the tar through ssh so that later you may also delete the files and place the tar into the original filesystem.

  5. I don’t know how to use tar on the network. I have a /mydata folder on my machine; how do I transfer it using tar over the network to the destination system? Can anyone help me?

  6. The opposite side – which is the more common case, where you want to pull data from server, as opposed to making the server initiate connection and pushing data:

    ssh "tar jcf - /srv/gdr/" > gdrwpl_backup.tar.bz2

    This might be useful if you are behind a firewall

  7. Vincent:
    The method of piping tar through SSH is faster than SCP not because SCP is slow (the transfer rate would theoretically be exactly the same), but because it saves a lot of time by parallelizing the tar.gz creation with the transfer. This is even more true if the source system only has one hard drive (or the only hard drive with enough free space to do the tar.gz is the same as the one you want data from).

    If you have a few GB of loose files to copy into a .tar.gz on the remote side (say, for doing a backup), piping the output through ssh is faster because the source hard drive can just read continuously the whole time and the destination can write at the same time. If you’re creating the .tar.gz on the same hard drive, you take a huge penalty for all the seeking it has to do; it has to read a bit, write it to the tar, read a bit more, write it to the tar, etc.

    Even if you have a second hard drive (or a crapload of RAM), you’re still taking longer if you make the .tar.gz first because there’s creation + transfer time instead of just transfer time.

  8. Sorry for being dumb but… so what is exactly the most efficient command to get local data to the remote server?

  9. or using netcat

    $ tar czvf - /var/spool | nc -l 12345
    $ nc host 12345 | tar xzvf -

    it’s not secure, but it doesn’t require much

  10. Hi,
    thank you for your script snippets; one of these is just backing up some gigabytes across the network. But I noticed a typo, an unnecessary “ssh” after some of the pipe symbols. For example:
    # tar cvzf - /wwwdata | ssh ssh root@ "cat > /dev/nst0"

  11. Here’s one that worked for me recently:

    I had to copy all the files from server A to a directory in server B (in order to have full replica of A), using man-in-the-middle server (because that IP was the only one allowed to connect).

    The trouble was that I only had sudo rights on the first server and there were absolutely all ports closed (both ways) except incoming 22 for my ip and incoming 80 and 443 for serving web. No way to ssh out of that box (fw blocked outgoing syn packets)

    First I had to “initialize” sudo so that I wouldn’t be asked a password which would later be asked within the pipe so I can’t provide it then (you recognize it by the infinite delay in the beginning while files are not appearing to the other side).

    ssh -Ct serverA "sudo hostname

    • -C uses compression,
    • -t forces assigning a terminal (RHEL 5.1 by default requires terminal)

    I guess this can be achieved also by just sshing in and issuing the same command there. Hostname is just a random command to get sudo to ask for password (which it remembers for the next 15 minutes).

    Now for the fun part:

    ssh -Ct "stty -onlcr; sudo tar -cpf - -X /tmp/exclusion.list / 2> /dev/null" | ssh serverB "cd /tmp; tar cvpf -"

    stty -onlcr fixes a problem that arises with using forced terminal: for every CR (0x13) an extra LF character will be injected (0x13) for proper displaying on terminal. Only we’re actually not using a terminal but passing the bitstream through the ssh tunnel to tar.

    • -p preserves files’ permissions
    • -X specifies an exclusion file (directories I don’t want to be copied like /dev, /proc and /sys)
    • / is what I want to be tarred :)
    • 2> /dev/null sends tar commentary to the darkest of places. Without it you’ll get tar’s own chatter within the data stream.
    • Hope this will be useful to someone (like myself, later on)

  12. Typo fix:
    1) ssh -Ct serverA "sudo hostname"

    2) …for every CR (0×13) an extra LF character will be injected (0×10) for proper displaying on terminal.

  13. Typo fix2:
    left the server out:
    ssh -Ct serverA "stty -onlcr; sudo tar -cpf - -X /tmp/exclusion.list / 2> /dev/null" | ssh serverB "cd /tmp; tar cvpf -"

  14. Another way would be using tar in both ends, as the example below:

    tar czvf - /somedir | ssh user@host "cat - | tar xzfv - -C /outputdir"

  16. Anderson, the use of “cat” in your example is completely unnecessary.

    tar czvf - /somedir | ssh user@host "tar xvzf - -C /outputdir"

  17. I recently needed to copy entire directory structure from one machine to another, preserving symlinks, owners and dates. I’ve done this tens of times before with tar and ssh but this time it didn’t work.

    Although I didn’t use the -h option, tar nevertheless followed symlinks and did not recreate them on the other side. The distro was Ubuntu 8.04. When I tried it with a small set of files it worked, though, but I needed the entire tree. I never figured out why it acted like that.

    I was finally able to solve my problem by using rsync, and after initial setup it worked very well. So for anyone stumbling over the same rock, here’s some examples getting it done with rsync:

  18. And why not tar jxf ? It will do the ssh for you, no need to do the ssh yourself (I don’t remember if this was in Debian Lenny or Ubuntu Lucid… maybe older versions/other versions too).

  19. Is there some simple method to copy file through some kind of “ssh chain” ?
    Assuming that I’m at “homepc” , can connect via SSH to “remote1” , and from “remote1” I can only connect SSH to “remote2” .

    Which is the “one-liner” to copy a file from “remote2” to “homepc” ?
    Let’s say it’s “remote2:/repository/somefile.war” (I googled around but not found easy method)

  20. Thanks a lot! Really helpful.

    In my case I wanted to untar. The solution is :

    ssh serveur "cat file.tar" | tar -xvf -

  21. In my experience, “rsync over ssh” is much faster than “tar | ssh”. Both are faster than scp, though. The only advantage of “tar | ssh”, IMHO, is not needing to have rsync in the remote host…

  22. I want to do exactly this to a Windows machine running ssh

    I’m trying something like

    tar zcvf - /somedir | ssh winuser@windowsbox.local " > /backup/wwwdata.tar.gz"

    but there is no “cat” in Windows, and the similar commands (echo, type, more) don’t seem to take input from stdin. Any ideas?

  23. @mod:
    pls add this line to last post and delete this:

    BTW: To add the install dir to the path makes things easier on target

  24. If, for one reason or another, you call ssh with the `-t` param (as mentioned by Henno) or have set `RequestTTY yes` in your ssh_config, tar will give strange errors like
    `tar: Skipping to next header
    tar: Exiting with failure status due to previous errors`
    `tar: A lone zero block at 21625
    tar: Exiting with failure status due to previous errors`
    These will go away by adding ssh parameter `-T Disable pseudo-tty allocation.`, or if you need `-t` by prepending `stty -onlcr; ` to the remote command as workaround (thx Henno!).

  25. Hi,
    I am taking a backup using this command:
    tar cvzf - /wwwdata | ssh root@ "cat > /dev/nst0"
    When I extract the file, it is corrupted.
    The issue is: how can I extract the file from the tape /dev/nst0?
