How To Setup Bonded (bond0) and Bridged (br0) Networking On Ubuntu LTS Server

Last updated May 3, 2017

I want to set up a KVM bridge with bonding on an Ubuntu Linux 16.04 LTS server. I have four Intel I350 Gigabit network connections (NICs). I would like to bond (enslave) eth0 and eth2 into one bonded interface called bond0 using the 802.3ad dynamic link aggregation mode. How do I configure bridging and bonding on Ubuntu Server 16.04 LTS?

You need to set up bridging so that KVM-, Xen-, or LXC-based virtual machines (guests) show up on the same network as the host server. The bridged configuration needs the bridge-utils package installed; the bonded configuration needs the ifenslave utility. In this tutorial, you will learn how to create bonded and bridged networking on an Ubuntu 16.04 LTS server.
Fig.01: Sample setup - KVM bridge with Bonding on Ubuntu LTS Server

Install ifenslave on Ubuntu

Type the following command:
$ sudo apt install ifenslave bridge-utils
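The bonding driver itself is a kernel module, so it helps to confirm it loads and is listed for loading at boot. A minimal sketch, assuming a stock Ubuntu 16.04 kernel that ships bonding as a module (append_once is a hypothetical helper name, not part of ifenslave):

```shell
# Hypothetical helper: append LINE ($1) to FILE ($2) only if it is not
# already present, so repeated runs do not duplicate the entry.
append_once() {
    grep -qxF "$1" "$2" 2>/dev/null || printf '%s\n' "$1" >> "$2"
}

# On the server (needs root):
#   sudo modprobe bonding            # load the bonding driver now
#   append_once bonding /etc/modules # load it on every boot
#   lsmod | grep -w bonding          # confirm the module is loaded
```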

Bridge with Bonding on Ubuntu

Backup your /etc/network/interfaces file, run:
$ sudo cp /etc/network/interfaces /etc/network/interfaces.backup
Edit /etc/network/interfaces, run:
$ sudo vi /etc/network/interfaces
First, create the bond0 interface config without an IP address and enslave eth0 and eth2 as follows:

auto bond0
iface bond0 inet manual
bond-slaves none
bond-mode 4
bond-lacp-rate fast
bond-miimon 100
bond-downdelay 0
bond-updelay 0
bond-xmit-hash-policy layer3+4
post-up ifenslave bond0 eth0 eth2
pre-down ifenslave -d bond0 eth0 eth2

Next, configure eth0 and eth2 without IP addresses and point them at the bond master bond0:

auto eth0
iface eth0 inet manual
bond-master bond0
 
auto eth2
iface eth2 inet manual
bond-master bond0

Finally, create br0 bridge and assign an IP address and other IP settings including gateway:

auto br0
iface br0 inet static
address 10.86.115.66
netmask 255.255.255.192
broadcast 10.86.115.127
gateway 10.86.115.65
# ------------------------------------------
# Example: set dns server too
# dns-nameservers 8.8.8.8 8.8.4.4 10.86.115.1
# ------------------------------------------
# Static route example 
#up route add -net 10.0.0.0/8 gateway 10.86.115.65
#down route del -net 10.0.0.0/8
#up route add -net 161.26.0.0/16 gateway 10.86.115.65
#down route del -net 161.26.0.0/16
# ------------------------------------------
# Want to know the defaults and more info
# on the following options?
# Read the brctl(8) man page
# ------------------------------------------
bridge_ports bond0
bridge_stp off
bridge_fd 9
bridge_hello 2
bridge_maxage 12

Save and close the file. Reboot the server or restart the networking service:
$ sudo systemctl restart networking
Verify it:
$ brctl show
Sample outputs:

bridge name	bridge id		STP enabled	interfaces
br0		8000.0025904fb06c	no		bond0

See bond0 status and other info:
$ more /proc/net/bonding/bond0
Sample outputs:

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
 
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
 
802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 00:25:90:4f:b0:6c
Active Aggregator Info:
	Aggregator ID: 1
	Number of ports: 1
	Actor Key: 9
	Partner Key: 43
	Partner Mac Address: b0:fa:eb:13:97:00
 
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:4f:b0:6c
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 00:25:90:4f:b0:6c
    port key: 9
    port priority: 255
    port number: 1
    port state: 63
details partner lacp pdu:
    system priority: 32768
    system mac address: b0:fa:eb:13:97:00
    oper key: 43
    port priority: 32768
    port number: 300
    port state: 61
 
Slave Interface: eth2
MII Status: down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 00:25:90:4f:b0:6e
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: 00:25:90:4f:b0:6c
    port key: 0
    port priority: 255
    port number: 2
    port state: 71
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
    oper key: 1
    port priority: 255
    port number: 1
    port state: 1

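Note that all slaves should report the same Aggregator ID; in the output above eth2 is down and sits in aggregator 2, so only one port is actually aggregating. A minimal sketch of a check that flags this, assuming the /proc/net/bonding report format shown above (check_lacp is a hypothetical helper name):

```shell
# Hypothetical helper: flag an LACP problem by checking that every slave
# listed in a /proc/net/bonding/bond0-style report ($1) shares a single
# Aggregator ID. Only per-slave "Aggregator ID:" lines (after the first
# "Slave Interface:" line) are counted.
check_lacp() {
    ids=$(awk '/^Slave Interface:/ {slave=1}
               slave && /^Aggregator ID:/ {print $3}' "$1" | sort -u)
    if [ "$(printf '%s\n' "$ids" | wc -l)" -eq 1 ]; then
        echo "LACP OK: all slaves in aggregator $ids"
    else
        echo "LACP problem: slaves span aggregator IDs: $ids"
    fi
}

# On the server:
#   check_lacp /proc/net/bonding/bond0
```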
Make sure routing is up and correct:
$ ip r show
$ ping google.com
$ ping www.cyberciti.biz
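If ping fails, first confirm that a default route actually exists before blaming the bridge. A minimal sketch that parses saved `ip r` output, assuming iproute2's "default via GATEWAY dev IFACE" line format (has_default_route is a hypothetical helper name):

```shell
# Hypothetical helper: succeed (exit 0) if the saved routing table in
# FILE ($1) contains an iproute2-style default route line.
has_default_route() {
    grep -q '^default via ' "$1"
}

# On the server:
#   ip r show > /tmp/routes
#   has_default_route /tmp/routes && echo "default route present"
```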

Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on Twitter, Facebook, Google+.

Comments

  1. Hello.

    In your sample output I see different “Aggregator ID” values for the members of the bond interface. Is that normal? I have a bond of two 1G interfaces and I also see different “Aggregator ID” values, and the speed of the bond interface is 1G instead of 2G.

    1. What you have pointed out is correct: LACP isn’t working properly.

      Active Aggregator Info:
      Aggregator ID: 1
      Number of ports: 1 –> number of devices participating in LACP (failed status)
      Different Aggregator IDs mean that LACP isn’t working:
      Aggregator ID: 1
      Aggregator ID: 2

  2. Hi,
    I am using CentOS 6. I want to configure Postfix for multiple IPs but I have no idea how to do that, so please update me.


Comments are closed.