You need to set up bridging so that KVM/Xen virtual machines (guests) or LXC containers show up on the same network as the host server. A bridged configuration requires the bridge-utils package, and a bonded configuration requires the ifenslave utility. In this tutorial, you will learn how to create bonded and bridged networking on an Ubuntu 16.04 LTS server.
Fig.01: Sample setup – KVM bridge with Bonding on Ubuntu LTS Server
Install ifenslave and bridge-utils on Ubuntu
Type the following command:
$ sudo apt install ifenslave bridge-utils
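Before editing any config files, you may also want to confirm that the bonding kernel driver is available (a quick sanity check; ifupdown normally loads it on demand, so this step is optional):
$ sudo modprobe bonding
$ lsmod | grep bonding
$ modinfo bonding | head -3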
Bridge with Bonding on Ubuntu
Back up your /etc/network/interfaces file, run:
$ sudo cp /etc/network/interfaces /etc/network/interfaces.backup
Edit /etc/network/interfaces, run:
$ sudo vi /etc/network/interfaces
First, create the bond0 interface config without an IP address and enslave eth0 and eth2 as follows (bond-mode 4 selects 802.3ad/LACP):
auto bond0
iface bond0 inet manual
    post-up ifenslave bond0 eth0 eth2
    pre-down ifenslave -d bond0 eth0 eth2
    bond-slaves none
    bond-mode 4
    bond-lacp-rate fast
    bond-miimon 100
    bond-downdelay 0
    bond-updelay 0
    bond-xmit-hash-policy 1
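If you want to try the same bond non-persistently before committing to the config file, iproute2 can build it by hand (a rough sketch, assuming the eth0/eth2 interface names used above; everything here is lost on reboot):
# Create an 802.3ad (LACP) bond and enslave both NICs;
# slaves must be down before they can be enslaved:
$ sudo ip link add bond0 type bond mode 802.3ad
$ sudo ip link set eth0 down
$ sudo ip link set eth0 master bond0
$ sudo ip link set eth2 down
$ sudo ip link set eth2 master bond0
$ sudo ip link set bond0 up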
Next, configure eth0 and eth2 without IP addresses, pointing both at bond0 as their bond master:
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth2
iface eth2 inet manual
    bond-master bond0
Finally, create the br0 bridge and assign an IP address and other IP settings, including the gateway:
auto br0
iface br0 inet static
    address 10.86.115.66
    netmask 255.255.255.192
    broadcast 10.86.115.127
    gateway 10.86.115.65
    # ------------------------------------------
    # Example: set dns server too
    # dns-nameservers 8.8.8.8 8.8.4.4 10.86.115.1
    # ------------------------------------------
    # Static route example
    #up route add -net 10.0.0.0/8 gateway 10.86.115.65
    #down route del -net 10.0.0.0/8
    #up route add -net 161.26.0.0/16 gateway 10.86.115.65
    #down route del -net 161.26.0.0/16
    # ------------------------------------------
    # Want to know the defaults and more info
    # on the following options?
    # Read the brctl(8) man page
    # ------------------------------------------
    bridge_ports bond0
    bridge_stp off
    bridge_fd 9
    bridge_hello 2
    bridge_maxage 12
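To double-check the subnet math (255.255.255.192 is a /26, so the network is 10.86.115.64, the broadcast is 10.86.115.127, and hosts get .65 through .126), the optional ipcalc utility will print it for you:
$ sudo apt install ipcalc
$ ipcalc 10.86.115.66 255.255.255.192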
Save and close the file. Reboot the server or restart the networking service:
$ sudo systemctl restart networking
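If you would rather not bounce the whole networking service over SSH, the interfaces can also be cycled individually with ifupdown (one possible ordering, assuming the interface names above; expect connectivity to drop briefly, so a console session is safer):
$ sudo ifdown br0 bond0 eth0 eth2
$ sudo ifup eth0 eth2 bond0 br0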
Verify it:
$ brctl show
Sample outputs:
bridge name     bridge id               STP enabled     interfaces
br0             8000.0025904fb06c       no              bond0
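The same information is available from iproute2, which is handy on systems where brctl is not installed:
$ bridge link show
$ ip -d link show br0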
See bond0 status and other info:
$ more /proc/net/bonding/bond0
Sample outputs:
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 00:25:90:4f:b0:6c
Active Aggregator Info:
    Aggregator ID: 1
    Number of ports: 1
    Actor Key: 9
    Partner Key: 43
    Partner Mac Address: b0:fa:eb:13:97:00

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:4f:b0:6c
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 00:25:90:4f:b0:6c
    port key: 9
    port priority: 255
    port number: 1
    port state: 63
details partner lacp pdu:
    system priority: 32768
    system mac address: b0:fa:eb:13:97:00
    oper key: 43
    port priority: 32768
    port number: 300
    port state: 61

Slave Interface: eth2
MII Status: down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 00:25:90:4f:b0:6e
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: 00:25:90:4f:b0:6c
    port key: 0
    port priority: 255
    port number: 2
    port state: 71
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
    oper key: 1
    port priority: 255
    port number: 1
    port state: 1
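The bonding driver also exposes its state under sysfs, which is easier to grep from scripts than /proc/net/bonding/bond0 (the ad_* entries appear only in 802.3ad mode):
$ cat /sys/class/net/bond0/bonding/mode
$ cat /sys/class/net/bond0/bonding/slaves
$ cat /sys/class/net/bond0/bonding/ad_num_ports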
Make sure routing is up and correct:
$ ip r show
$ ping google.com
$ ping www.cyberciti.biz
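Once the bridge is up, a KVM guest can be attached to it directly. For example, with virt-install (the guest name, memory/disk sizes, and ISO path below are placeholders for illustration):
$ sudo virt-install \
   --name testvm \
   --memory 1024 \
   --vcpus 1 \
   --disk size=10 \
   --network bridge=br0,model=virtio \
   --cdrom /var/lib/libvirt/boot/ubuntu-16.04-server-amd64.iso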
- Debian Linux: Configure Network Interfaces As A Bridge / Network Switch
- OpenBSD: Configure Network Interface As A Bridge / Network Switch
- How To PFSense Configure Network Interface As A Bridge / Network Switch
- FreeBSD: NIC Bonding / Link Aggregation / Trunking / Link Failover
- How To Setup Bridge (br0) Network on Ubuntu Linux 14.04 and 16.04 LTS
- Ubuntu setup a bonding device and enslave eth0+eth2
- Setup Bonded (bond0) and Bridged (br0) Networking On Ubuntu
- Ubuntu 20.04 add network bridge (br0) with nmcli command
- CentOS 8 add network bridge (br0) with nmcli command
- How to add network bridge with nmcli (NetworkManager) on Linux
- Set up and configure network bridge on Debian Linux
Thanks a lot, it helped me set up an R710.
Hello.
In your sample output I see different "Aggregator ID" values for the members of the bond interface. Is that normal? I have a bond of two 1G interfaces and I also see different "Aggregator ID" values, and the speed of the bond interface is 1G instead of 2G.
What you have pointed out is correct. LACP isn't working fine.
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 1 -> the number of devices participating in LACP (failed status)
Different Aggregator IDs mean that LACP isn't working:
Aggregator ID: 1
Aggregator ID: 2
Hi,
I am using CentOS 6 and I want to configure Postfix for multiple IPs, but I have no idea how to configure that, so please update me.