Dual network interface cards to optimize the Linux server backup process

Most of the time, small and medium-sized businesses use a single dedicated Linux/*BSD box for hosting a web site, database server, and mail server. These servers are busy around the clock (yes, we have lots of such clients; they run dual Xeon/AMD or P4 boxes with 4-8 GB RAM). Since backup is such a critical procedure, we have an automated snapshot procedure (hourly, nightly, and full monthly backups) for all dedicated UNIX/Linux/Windows boxes.

Snapshot backups provide a convenient, automatic way to save copies of website, FTP, and MySQL data and files without using valuable disk space on the server itself. Our backup software (Linux scripts) stores a copy, or takes a “snapshot,” of each customer's dedicated box every 2 hours, nightly, weekly, and so on. These snapshots are saved and dated for the customer by our software, and files can be restored directly from the client's control panel in the event that any of them are accidentally deleted or changed.

Although this facility is ultra cool, it has a disadvantage on a client's dedicated Linux box: on any busy server, things start to get worse during the hourly hot snapshot backup. Customers started reporting that while a backup was in progress, their FTP/WWW sites became slow. We quickly realized that the single NIC was the problem, so we upgraded all old servers to dual NICs. Backup data is now piped through the second NIC, isolating the process from frontend traffic.

Linux  eth0  --> Public interface for ftp/http/mysql traffic
Box    eth1  --> Private interface for backup

The eth1 IP(s) are only accessible inside our data center; all outside access to eth1 IPs is blocked at the enterprise IDC firewall. This is done for security reasons.
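The post doesn't show the actual interface or firewall configuration; a minimal sketch might look like this (the 192.168.10.0/24 range, addresses, and rules are assumptions for illustration only; the real blocking is done at the IDC's enterprise firewall, not on the box itself):

```shell
# Bring up eth1 with a private, data-center-only address (assumed range)
ip addr add 192.168.10.21/24 dev eth1
ip link set eth1 up

# Belt-and-braces host firewall: accept eth1 traffic only from the
# private backup network, drop anything else arriving on eth1.
iptables -A INPUT -i eth1 -s 192.168.10.0/24 -j ACCEPT
iptables -A INPUT -i eth1 -j DROP
```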

We use NetApp products for central storage and the snapshot facility (in case you are wondering what we use for central storage). You can find information about NetApp here.

The result was neat, and there are no more complaints from customers. We use the same solution for shared hosting customers too. However, if your IDC is small (NetApp products are expensive), any other Linux/UNIX box and a couple of FTP scripts can do the same backup trick; just make sure you use the second NIC to pump the backup data. Here is a sample diagram that will help you grasp the concept:

  1. All eth0 interfaces connect to the Internet
  2. All eth1 interfaces connect to a private switch inside the IDC
  3. The Linux-based backup server should not be accessible from outside your network
  4. Create a perl/shell script to automate the backup procedure
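The post doesn't include the script itself, so here is a minimal sketch of step 4, assuming rsync over SSH to a backup box reachable only on its eth1 address (the IP, source paths, and snapshot naming are invented for illustration, not the author's actual setup):

```shell
#!/bin/sh
# Sketch of a snapshot push over the private backup NIC (eth1).
# Assumptions: backup server reachable as 192.168.10.5 on the private
# network, rsync over SSH, snapshots kept under /backup/<hostname>/.

BACKUP_HOST="${BACKUP_HOST:-192.168.10.5}"   # eth1-only IP of backup box
SRC_DIRS="/var/www /var/lib/mysql-dumps"     # what to snapshot

# Name the snapshot after the schedule: hourly-HH, nightly-YYYYMMDD, monthly-YYYYMM
snapshot_name() {
    case "$1" in
        hourly)  echo "hourly-$(date +%H)" ;;
        nightly) echo "nightly-$(date +%Y%m%d)" ;;
        monthly) echo "monthly-$(date +%Y%m)" ;;
        *)       echo "adhoc-$(date +%s)" ;;
    esac
}

SNAP="$(snapshot_name "${1:-hourly}")"

# Actual transfer only runs when DO_TRANSFER=1, so the script is safe to dry-run.
if [ "${DO_TRANSFER:-0}" = "1" ]; then
    rsync -az --delete $SRC_DIRS "root@${BACKUP_HOST}:/backup/$(hostname)/${SNAP}/"
fi
echo "$SNAP"
```

Run it from cron (hourly, nightly, monthly entries) with the schedule name as the first argument; because `BACKUP_HOST` is on the private switch, the transfer never competes with eth0's public traffic.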


🐧 4 comments so far... add one

  • Bill Nov 23, 2005 @ 20:17

    Interesting reading; I had never thought about dual NICs for backup. This blog really gives some good information, thanks.

  • Mike Jul 21, 2007 @ 23:04


    Yes, I realise we are now in 2007 and the post was in 2005… thought I’d pose the following:

    Is there any reason why you don’t use LACP to team/bond the NICs together (assuming your switches and OS support this, of course), as you would have combined throughput and an extra layer of redundancy? I realise that your desire is to isolate the traffic, but at least with LACP, if one link fails, you don’t lose any connectivity. Did you try this already and deem that it was not the most appropriate solution?

  • Mike2 Aug 22, 2007 @ 14:41

    I too realize it’s the year 2007. Since I stumbled upon this article while doing a bit of reading on bonding and multi-homing, I thought I’d reply to your question.

    Security, I would think, would be a valid reason not to bond for the purpose of both general traffic AND backup.

    For one, the isolated backup network has its relatively full network capacity available for backup purposes, while the primary interface is dedicated to general network traffic. So unless the bonded network is quiet during backup, the higher transfer rate might not be realized as you’d hope.

    Secondly, by separating them you are protecting your backup server from possible unwanted network/internet access. I mean, if you’re so concerned about recovering a server from a disastrous case, say a hardware failure or a hack, you might want to ensure the backup server itself is better protected than the server it’s going to have to restore.

    Personally, I was looking for some helpful info on isolating different kinds of traffic to separate networks. Like in a clustered environment, where all data replication is isolated to a separate network interface between the nodes, so general load would never affect the cluster’s ability to replicate under stress. (How the stress affects other parts of the system is also a concern, but I’m just focusing on one thing at a time.)

  • 🐧 nixCraft Aug 22, 2007 @ 15:30

    @Mike and Mike2,

    Yes, the original concern was security and traffic isolation. So when a customer’s/client’s box goes down for any reason, we restore it within the 2-hour SLA window, followed by a data restore from NetApp snapshots. We still follow this method for single servers, and it is a cost-effective automated solution.
    >I realise that your desire is to isolate the traffic, but at least with LACP, if one link fails, you don’t lose any connectivity. Did you try this already and deem that it was not the most appropriate solution?
    1) Our setup is old; we would need to make lots of backend changes.
    2) Cost is another factor, but it is a good idea.

    We are in the process of building a new IDC and it will have all sorts of goodies. There will be full VLANs, and a private and public network. Maybe someday I will be able to write about the new architecture .. heh

    >Personally I was looking for some helpful info on isolating different kinds of traffic to separate networks.

    You need to set up a load balancer (LA0). All internet traffic goes to LA0. Behind LA0 you can put hundreds of nodes. LA0 must be industry-grade hardware with built-in failover to LA1. Don’t worry about the nodes; just put in a single LAN card and use normal hardware. LA0 will distribute http/https traffic according to the selected algorithm to all nodes. If node5 goes down, traffic will be redirected to the next node. You need to use central storage, a special distributed file system, or NBD. It is possible that the central storage might get overloaded, so you may need some sort of load balancing there too, or other tricks.
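    To make the round-robin-with-failover idea concrete, here is a toy sketch (the node names, the health check, and the counter file are all hypothetical; a real LA0 would be dedicated hardware or balancer software, not a shell script):

```shell
#!/bin/sh
# Toy round-robin picker: choose the next healthy node for a request.
NODES="node1 node2 node3 node4 node5"
STATE="${STATE:-/tmp/la0.counter}"   # remembers where the rotation left off

is_up() {
    # A real balancer probes each node (TCP/HTTP health check); here we
    # pretend node5 is dead to demonstrate failover.
    [ "$1" != "node5" ]
}

next_node() {
    n=$(cat "$STATE" 2>/dev/null || echo 0)
    set -- $NODES
    total=$#
    i=0
    while [ $i -lt $total ]; do
        idx=$(( (n + i) % total + 1 ))
        eval "cand=\${$idx}"          # pick the idx-th node name
        if is_up "$cand"; then
            echo $(( (n + i + 1) % total )) > "$STATE"
            echo "$cand"
            return 0
        fi
        i=$((i + 1))
    done
    return 1                          # every node is down
}
```

    Each call to `next_node` returns the next node in rotation, silently skipping any node whose health check fails, which mirrors the "if node5 goes down, traffic is redirected to the next node" behaviour described above.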

    As you can see, it all depends on your application and many other factors. Since I have no idea about your application and architecture, the above is a general suggestion.

