Debian / Ubuntu Linux: Configure Network Bonding [ Teaming / Aggregating NIC ]

NIC teaming is nothing but combining or aggregating multiple network connections in parallel. This is done to increase throughput and to provide redundancy in case one of the links or Ethernet cards fails. The Linux kernel comes with the bonding driver for aggregating multiple network interfaces into a single logical interface called bond0. In this tutorial, I will explain how to set up bonding on a Debian Linux server to aggregate multiple Ethernet devices into a single link, to get higher data rates and link failover.

The instructions were tested using the following setup:

  • 2 x PCI-e Gig NIC with jumbo frames.
  • RAID 6 w/ 5 enterprise grade 15k SAS hard disks.
  • Debian Linux 6.0.2 amd64

Please note that the following instructions should also work on Ubuntu Linux server.

Required Software

You need to install the following tool:

  • ifenslave command: It is used to attach and detach slave network devices to a bonding device. A bonding device will act like a normal Ethernet network device to the kernel, but will send out the packets via the slave devices using a simple round-robin scheduler. This allows for simple load balancing, identical to the "channel bonding" or "trunking" techniques used in network switches. (A short manual usage example is shown below.)
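
For example, once the bonding driver is loaded and a bond0 device exists, slaves can be attached or detached by hand. This is only a quick sketch; the interface names eth0 and eth1 match the sample setup used throughout this tutorial:

# ifenslave bond0 eth0 eth1
# ifenslave -d bond0 eth1

The first command attaches eth0 and eth1 to bond0; the second detaches eth1 from the bond. The /etc/network/interfaces configuration shown later runs ifenslave for you, so these commands are mainly useful for testing.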

Our Sample Setup

Internet
 |                  202.54.1.1 (eth0)
ISP Router/Firewall 192.168.1.254 (eth1)
   \
     \                             +------ Server 1 (Debian file server w/ eth0 & eth1) 192.168.1.10
      +------------------+         |
      | Gigabit Ethernet |---------+------ Server 2 (MySQL) 192.168.1.11
      | with Jumbo Frame |         |
      +------------------+         +------ Server 3 (Apache) 192.168.1.12
                                   |
                                   +-----  Server 4 (Proxy/SMTP/DHCP etc) 192.168.1.13
                                   |
                                   +-----  Desktop PCs / Other network devices (etc)

Install ifenslave

Use the apt-get command to install ifenslave, enter:
# apt-get install ifenslave-2.6
Sample outputs:

Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'ifenslave-2.6' instead of 'ifenslave'
The following NEW packages will be installed:
  ifenslave-2.6
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 18.4 kB of archives.
After this operation, 143 kB of additional disk space will be used.
Get:1 http://mirror.anl.gov/debian/ squeeze/main ifenslave-2.6 amd64 1.1.0-17 [18.4 kB]
Fetched 18.4 kB in 1s (10.9 kB/s)
Selecting previously deselected package ifenslave-2.6.
(Reading database ... 24191 files and directories currently installed.)
Unpacking ifenslave-2.6 (from .../ifenslave-2.6_1.1.0-17_amd64.deb) ...
Processing triggers for man-db ...
Setting up ifenslave-2.6 (1.1.0-17) ...
update-alternatives: using /sbin/ifenslave-2.6 to provide /sbin/ifenslave (ifenslave) in auto mode.
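
To double-check that the package installed correctly and that the alternatives symlink points at the right binary, you can run the following (an optional sanity check, not required):

# dpkg -l ifenslave-2.6
# ls -l /sbin/ifenslave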

Linux Bonding Driver Configuration

Create a file called /etc/modprobe.d/bonding.conf, enter:
# vi /etc/modprobe.d/bonding.conf
Append the following

alias bond0 bonding
options bonding mode=0 arp_interval=100 arp_ip_target=192.168.1.254,192.168.1.12

Save and close the file. This configuration file is used by the Linux kernel driver called bonding. The following options are important here:

  1. mode=0 : Set the bonding policy to balance-rr (round robin). This is the default. This mode provides load balancing and fault tolerance.
  2. arp_interval=100 : Set the ARP link monitoring frequency to 100 milliseconds. Without this option you will get various warnings when starting bond0 via /etc/network/interfaces. (An MII-based alternative is sketched below.)
  3. arp_ip_target=192.168.1.254,192.168.1.12 : Use the 192.168.1.254 (router IP) and 192.168.1.12 IP addresses as ARP monitoring peers when arp_interval is > 0. This is used to determine the health of the link to the targets. Multiple IP addresses must be separated by a comma. At least one IP address must be given (usually I set it to the router IP) for ARP monitoring to function. The maximum number of targets that can be specified is 16.
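
If your switch or NIC driver does not play well with ARP monitoring, an MII-based variant of the same file would look roughly like this (a sketch; use either ARP monitoring or MII monitoring, not both):

alias bond0 bonding
options bonding mode=0 miimon=100 downdelay=200 updelay=200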

How Do I Load the Driver?

Type the following commands:
# modprobe -v bonding mode=0 arp_interval=100 arp_ip_target=192.168.1.254,192.168.1.12
# tail -f /var/log/messages
# ifconfig bond0
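
Once the module is loaded, you can also inspect the driver's runtime parameters via sysfs (assuming sysfs is mounted at /sys, which is the default):

# cat /sys/class/net/bonding_masters
# cat /sys/class/net/bond0/bonding/mode
# cat /sys/class/net/bond0/bonding/arp_ip_target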

Interface Bonding (Teaming) Configuration

First, stop eth0 and eth1 (do not run this over an SSH session), enter:
# /etc/init.d/networking stop
Next, back up and edit the /etc/network/interfaces file, enter:
# cp /etc/network/interfaces /etc/network/interfaces.bak
# vi /etc/network/interfaces

Remove eth0 and eth1 static IP configuration and update the file as follows:

 
############ WARNING ####################
# You do not need an "iface eth0" nor an "iface eth1" stanza.
# Setup IP address / netmask / gateway as per your requirements.
#######################################
auto lo
iface lo inet loopback
 
# The primary network interface
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    network 192.168.1.0
    gateway 192.168.1.254
    slaves eth0 eth1
    # jumbo frame support
    mtu 9000
    # Load balancing and fault tolerance
    bond-mode balance-rr
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
    dns-nameservers 192.168.1.254
    dns-search nixcraft.net.in
 

Save and close the file. Where,

  • address 192.168.1.10 : Dotted quad IP address for bond0.
  • netmask 255.255.255.0 : Dotted quad netmask for bond0.
  • network 192.168.1.0 : Dotted quad network address for bond0.
  • gateway 192.168.1.254 : Default gateway for bond0.
  • slaves eth0 eth1 : Set up a bonding device and enslave two real Ethernet devices (eth0 and eth1) to it.
  • mtu 9000 : Set the MTU size to 9000. See Linux JumboFrames configuration for more information.
  • bond-mode balance-rr : Set the bonding mode profile to "Load balancing and fault tolerance". See below for more information.
  • bond-miimon 100 : Set the MII link monitoring frequency to 100 milliseconds. This determines how often the link state of each slave is inspected for link failures.
  • bond-downdelay 200 : Set the time, to 200 milliseconds, to wait before disabling a slave after a link failure has been detected. This option is only valid when bond-miimon is used.
  • bond-updelay 200 : Set the time, to 200 milliseconds, to wait before enabling a slave after a link recovery has been detected. This option is only valid when bond-miimon is used.
  • dns-nameservers 192.168.1.254 : Use 192.168.1.254 as the DNS server.
  • dns-search nixcraft.net.in : Use nixcraft.net.in as the default domain for host-name lookups (optional).

A Note About Various Bonding Policies

In the above example the bonding policy (mode) is set to 0, or balance-rr. Other possible values are as follows:

The Linux bonding driver supports the following aggregation policies (mode):

  • balance-rr or 0 : Round-robin policy to transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
  • active-backup or 1 : Active-backup policy. Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. This mode provides fault tolerance.
  • balance-xor or 2 : Transmit based on the selected transmit hash policy. The default policy is a simple [(source MAC address XOR'd with destination MAC address) modulo slave count]. This mode provides load balancing and fault tolerance.
  • broadcast or 3 : Transmits everything on all slave interfaces. This mode provides fault tolerance.
  • 802.3ad or 4 : Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. Most network switches will require some type of configuration to enable 802.3ad mode.
  • balance-tlb or 5 : Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
  • balance-alb or 6 : Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation.

[ Source: See Documentation/networking/bonding.txt for more information. ]
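
For example, if you wanted simple failover instead of round-robin, the bond0 stanza shown earlier could be switched to active-backup mode. A rough sketch (the bond-primary option marks eth0 as the preferred slave; adjust addresses and options to your setup):

iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.254
    slaves eth0 eth1
    bond-mode active-backup
    bond-primary eth0
    bond-miimon 100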

Start bond0 Interface

Now that all the configuration files have been modified, the networking service must be started (or restarted), enter:
# /etc/init.d/networking start
OR
# /etc/init.d/networking stop && /etc/init.d/networking start
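
If you only want to cycle the bonded interface rather than all networking (again, avoid doing this over an SSH session), the following should also work with the ifupdown tools:

# ifdown bond0 && ifup bond0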

Verify New Settings

Type the following command:
# /sbin/ifconfig
Sample outputs:

bond0     Link encap:Ethernet  HWaddr 00:xx:yy:zz:tt:31
          inet addr:192.168.1.10  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::208:9bff:fec4:3031/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:9000  Metric:1
          RX packets:2414 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1559 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:206515 (201.6 KiB)  TX bytes:480259 (469.0 KiB)
eth0      Link encap:Ethernet  HWaddr 00:xx:yy:zz:tt:31
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:1214 errors:0 dropped:0 overruns:0 frame:0
          TX packets:782 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:103318 (100.8 KiB)  TX bytes:251419 (245.5 KiB)
          Memory:fe9e0000-fea00000
eth1      Link encap:Ethernet  HWaddr 00:xx:yy:zz:tt:31
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:1200 errors:0 dropped:0 overruns:0 frame:0
          TX packets:777 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:103197 (100.7 KiB)  TX bytes:228840 (223.4 KiB)
          Memory:feae0000-feb00000
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:560 (560.0 B)  TX bytes:560 (560.0 B)
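
The same information can also be pulled with the iproute2 tools; the MASTER and SLAVE flags show up in the ip link output as well:

# ip addr show bond0
# ip link show eth0
# ip link show eth1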

How Do I Verify Current Link Status?

Use the cat command to see the current status of the bonding driver and NIC links:
# cat /proc/net/bonding/bond0
Sample outputs:

Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:xx:yy:zz:tt:31
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:xx:yy:zz:tt:30
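
To keep an eye on the per-slave status while testing, you can watch the same file or grep out just the slave status lines:

# watch -n 1 cat /proc/net/bonding/bond0
# grep -A 1 "Slave Interface" /proc/net/bonding/bond0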

Example: Link Failure

The contents of /proc/net/bonding/bond0 after a link failure:
# cat /proc/net/bonding/bond0
Sample outputs:

Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:xx:yy:zz:tt:31
Slave Interface: eth1
MII Status: down
Link Failure Count: 1
Permanent HW addr: 00:xx:yy:zz:tt:30

You will also see the following information in your /var/log/messages file:

Sep  5 04:16:15 nas01 kernel: [ 6271.468218] e1000e: eth1 NIC Link is Down
Sep  5 04:16:15 nas01 kernel: [ 6271.548027] bonding: bond0: link status down for interface eth1, disabling it in 200 ms.
Sep  5 04:16:15 nas01 kernel: [ 6271.748018] bonding: bond0: link status definitely down for interface eth1, disabling it

However, your nas01 server should keep working without any problem, as the eth0 link is still up and running. Next, replace the faulty network card, reconnect the cable, and you will see messages like the following in your /var/log/messages file:

Sep  5 04:20:21 nas01 kernel: [ 6517.492974] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Sep  5 04:20:21 nas01 kernel: [ 6517.548029] bonding: bond0: link status up for interface eth1, enabling it in 200 ms.
Sep  5 04:20:21 nas01 kernel: [ 6517.748016] bonding: bond0: link status definitely up for interface eth1.
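
You can trigger a similar failover on purpose by administratively downing one slave and watching the bond recover when you bring it back up. This is only a rough test from the console (pulling the cable, as described above, is the more realistic check):

# ip link set eth1 down
# cat /proc/net/bonding/bond0
# ip link set eth1 up
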
This entry is 2 of 2 in the Linux NIC Interface Bonding (aggregate multiple links) Tutorial series. Keep reading the rest of the series:
  1. Red Hat (RHEL/CentOS) Linux Bond or Team Multiple Network Interfaces (NIC) into a Single Interface
  2. Debian / Ubuntu Linux Configure Bonding [ Teaming / Aggregating NIC ]

Comments

1 Armin September 5, 2011 at 7:10 am

In the past balance-rr was not the right choice for us. We only got transmission rates of max 160 MByte/s across 4 Gbit network cards. In a local network with an IP-routing switch the best way is to use balance-xor with the additional option xmit_hash_policy=layer2+3. In this environment you must configure the switch correctly to get full receive performance; the switch also needs a transmission policy for the traffic it sends back to your client.
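
For reference, the options described in this comment would look something like the following in /etc/modprobe.d/bonding.conf (a sketch based on the comment, not tested on the setup from this tutorial):

alias bond0 bonding
options bonding mode=2 miimon=100 xmit_hash_policy=layer2+3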

2 Cicuta September 5, 2011 at 3:15 pm

Excellent article!

3 cyclop September 5, 2011 at 11:43 pm

Hmmm… very excellent… thank you Vivek.

4 venkat September 6, 2011 at 12:10 pm

Thanx Vivek,

Great work for Linux admins…

Venkat

5 Sumit September 7, 2011 at 4:43 am

Thanks for the superb article. Keep it up.

6 Wizard November 24, 2011 at 6:17 pm

Hi. Thanks, great article.
But I’m facing a problem after rebooting the system. The bonding interface cannot start… telling ‘cannot add bond bond0 already exists’.
Do you know how to fix this?

7 Jair December 2, 2011 at 11:48 pm

How do I configure two gigabit NICs to operate as one 2-gigabit NIC?
Do I have some fault tolerance in this mode?

8 Kosbir March 18, 2012 at 6:12 pm

Very helpful article. Can bonding be applied to multiple 3G/UMTS connections? Or is it only applicable to Ethernet-based connections?

9 Saman July 16, 2012 at 12:33 pm

I need to enable Ethernet bonding on three systems, connected together via a switch. What I tried ended up in failure; slaves cannot be detected and added to bond0.

Here’s what I did:

created file /etc/modprobe.d/bonding.conf and added the following to it:

alias bond0 bonding
options bonding mode=0 miimon=100 updelay=200 downdelay=200

auto bond0
iface bond0 inet static
address 1.1.1.11
netmask 255.255.255.0
gateway 1.1.1.1
slaves eth0 eth1

and finally loading bonding module into the kernel and ifconfig.
#modprobe -v bonding

if anyone has done this before, I would highly appreciate if you could tell me the exact procedure.

Your

10 Jack Wade August 22, 2012 at 3:23 pm

Found a slight typo in one of your headings: “Linux bounding Driver Configuration”

s/bounding/bonding/

11 Giuseppe Sacco October 26, 2012 at 2:10 pm

It seems you are using /etc/init.d/networking for setting up your network, but this is not available on Debian 6.0.2. So probably you are working on a previous Debian installation, or on a Debian 5 updated to 6 (keeping the old init.d file).

Is this correct?

12 Jack Wade October 26, 2012 at 8:40 pm

Sacco,

You are wrong.

joe@chronic ~ $ cat /etc/debian_version
6.0.6
joe@chronic ~ $ ls -l /etc/init.d/networking
-rwxr-xr-x 1 root root 2451 Apr 18 2010 /etc/init.d/networking*
joe@chronic ~ $

All of my clean Debian 6 installs use the networking service to stop/start networking. And when you upgrade from one release to another, the old packages are removed. Stuff like init scripts aren’t saved unless they’re custom and not owned by any package.

Regards

13 Giuseppe Sacco October 27, 2012 at 7:38 am

Hi Jack,
there is something strange here: my Debian 6.0.6 fresh installation does not include that file. Moreover, using the search facility at http://packages.debian.org does not show any package containing that file.

And I do think that this script was removed when passing to the event-driven model that configures Ethernet devices when they are discovered by the kernel.

I am not sure, but I think init files remain on the system until the package is purged. Would you purge and reinstall the package?

Bye,
Giuseppe

14 Jack Wade October 27, 2012 at 3:42 pm

Giuseppe,

This package, netbase, has that init script: http://packages.debian.org/squeeze/all/netbase/filelist

That package is present on my squeeze and sid machines.

15 Giuseppe Sacco October 27, 2012 at 3:50 pm

Hi Jack,
thanks for your clarification. I will ask the Debian site maintainers to fix the search engine (or the instructions on the web page), since it does not find anything when searching for “/etc/init.d/networking”, but it finds the netbase package if I search for “networking” alone.

Thanks again,
Giuseppe

16 Andy January 27, 2013 at 4:45 am

This is awesome! Thanks for the great guide!

17 glen April 15, 2013 at 4:13 am

Can you use wireless interfaces as your slave connections?

18 Ben July 17, 2013 at 11:00 am

Halo Jack,

The BEST tips ever.
I would like to bundle 10 3G modems for CCTV live upload.
Platform: Ubuntu Server 12.04.2 LTS.
Thus, my setup is a bit different from yours.
Care to give it a go?

19 chris November 8, 2013 at 5:15 am

Hi
Active_slave is not defined automatically in the bond0 configuration. I get frequent request time-outs.
