Increase Linux Internet speed with TCP BBR congestion control

I recently read that TCP BBR has significantly increased throughput and reduced latency for connections on Google's internal backbone networks, and that it raised google.com and YouTube Web server throughput by 4 percent on average globally, and by more than 14 percent in some countries. TCP BBR ships as a patch to the Linux kernel; the first public release came in September 2016, and the patch is available for anyone to download and install. Another option is using Google Cloud Platform (GCP), where instances use the cutting-edge TCP BBR congestion control algorithm by default. This page explains how to boost your Linux server's Internet speed with TCP BBR.


Requirements for boosting Linux server Internet speed with TCP BBR

Make sure that your Linux kernel has the following options compiled either as modules or built into the kernel:

  1. CONFIG_TCP_CONG_BBR
  2. CONFIG_NET_SCH_FQ

Linux TCP BBR congestion control

You must use Linux kernel version 4.9 or above. On Debian/Ubuntu Linux, type the following grep command/egrep command:
$ grep 'CONFIG_TCP_CONG_BBR' /boot/config-$(uname -r)
$ grep 'CONFIG_NET_SCH_FQ' /boot/config-$(uname -r)
$ egrep 'CONFIG_TCP_CONG_BBR|CONFIG_NET_SCH_FQ' /boot/config-$(uname -r)

Sample outputs:

Fig.01: Make sure that your Linux kernel has the TCP BBR option set up

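On a stock Debian/Ubuntu kernel the grep output typically looks like the lines below; whether each option reads =y or =m depends on how your distro built the kernel:
CONFIG_TCP_CONG_BBR=m
CONFIG_NET_SCH_FQ=m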

I am using Linux kernel version 4.9.0-8-amd64 on a Debian server and 4.18.0-15-generic on an Ubuntu server. If the above options are not found, you need to either compile the latest kernel or install the latest version of the Linux kernel using the apt-get command/apt command.
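If the options are built as modules (=m), the tcp_bbr module may not be loaded yet. Setting the sysctl shown later normally auto-loads it, but as an optional sanity check you can load the module by hand and list the congestion control algorithms the running kernel offers (the exact list varies by system):
$ sudo modprobe tcp_bbr
$ sysctl net.ipv4.tcp_available_congestion_control
net.ipv4.tcp_available_congestion_control = reno cubic bbr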

Run a test before you enable TCP BBR to improve network speed

Type the following command on the Linux server:
# iperf -s
Execute the following on your Linux client:
$ iperf -c gcvm.backup -i 2 -t 30
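It is also worth noting down which congestion control algorithm the server is using before the change; on most modern distros the default is cubic:
$ sysctl net.ipv4.tcp_congestion_control
net.ipv4.tcp_congestion_control = cubic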

How to enable TCP BBR congestion control on Linux

Edit the /etc/sysctl.conf file or create a new file in the /etc/sysctl.d/ directory:
$ sudo vi /etc/sysctl.conf
OR
$ sudo vi /etc/sysctl.d/10-custom-kernel-bbr.conf
Append the following two lines:
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr
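If you would rather skip the text editor, the same two lines can be written in one step with tee; this sketch assumes the /etc/sysctl.d/10-custom-kernel-bbr.conf file name used above:
$ cat <<'EOF' | sudo tee /etc/sysctl.d/10-custom-kernel-bbr.conf
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr
EOF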

If you edited the file with vim/vi, save and close it by typing :x!. Next, you must either reboot the Linux box or reload the changes using the sysctl command:
$ sudo reboot
OR
$ sudo sysctl --system
Sample outputs:

* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-custom.conf ...
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-lxd-inotify.conf ...
fs.inotify.max_user_instances = 1024
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_syncookies = 1
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...

You can verify the new settings with the following sysctl command. Run:
$ sysctl net.core.default_qdisc
net.core.default_qdisc = fq
$ sysctl net.ipv4.tcp_congestion_control
net.ipv4.tcp_congestion_control = bbr
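You can also spot-check live connections. The ss command reports the congestion control algorithm in use for each established TCP socket, so once new connections have been made after the change you should see bbr in its extended output:
$ ss -ti | grep bbr
Each matching line corresponds to a socket that is currently using BBR.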

Test BBR congestion control on Linux

In my testing between two long-distance Linux servers with Gigabit ports connected to the Internet, I was able to bump throughput from 250 Mbit/s to 800 Mbit/s. You can use tools such as the wget command to measure bandwidth:
$ wget https://your-server-ip/file.iso
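If you prefer curl, it prints a progress meter with the average download rate while fetching to /dev/null (the URL here is the same placeholder as above):
$ curl -o /dev/null https://your-server-ip/file.iso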
I also noticed that I was able to push almost 100 Mbit/s of OpenVPN traffic, whereas previously I could only push 30-40 Mbit/s. Overall, I am quite satisfied with the TCP BBR congestion control option for my Linux box.

Linux TCP BBR test with iperf

iperf is a commonly used network testing tool for TCP/UDP data streams. It measures network throughput and can help quantify the effect of the Linux TCP BBR settings.

Type the following command on the Linux server with TCP BBR congestion control enabled

# iperf -s
Sample outputs:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.128.0.2 port 5001 connected with AAA.BB.C.DDD port 46978
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-30.6 sec   127 MBytes  34.7 Mbits/sec

Type the following command on the Linux/Unix client

$ iperf -c YOUR-Linux-Server-IP-HERE -i 2 -t 30
Sample output when connected to TCP BBR congestion enabled on Linux:

------------------------------------------------------------
Client connecting to gcp-vm-nginx-www1, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 10.8.0.2 port 46978 connected with xx.yyy.zzz.tt port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec  4.00 MBytes  16.8 Mbits/sec
[  3]  2.0- 4.0 sec  8.50 MBytes  35.7 Mbits/sec
[  3]  4.0- 6.0 sec  10.9 MBytes  45.6 Mbits/sec
[  3]  6.0- 8.0 sec  16.2 MBytes  68.2 Mbits/sec
[  3]  8.0-10.0 sec  5.29 MBytes  22.2 Mbits/sec
[  3] 10.0-12.0 sec  9.38 MBytes  39.3 Mbits/sec
[  3] 12.0-14.0 sec  8.12 MBytes  34.1 Mbits/sec
[  3] 14.0-16.0 sec  8.12 MBytes  34.1 Mbits/sec
[  3] 16.0-18.0 sec  8.38 MBytes  35.1 Mbits/sec
[  3] 18.0-20.0 sec  6.75 MBytes  28.3 Mbits/sec
[  3] 20.0-22.0 sec  8.12 MBytes  34.1 Mbits/sec
[  3] 22.0-24.0 sec  8.12 MBytes  34.1 Mbits/sec
[  3] 24.0-26.0 sec  9.50 MBytes  39.8 Mbits/sec
[  3] 26.0-28.0 sec  7.00 MBytes  29.4 Mbits/sec
[  3] 28.0-30.0 sec  8.12 MBytes  34.1 Mbits/sec
[  3]  0.0-30.3 sec   127 MBytes  35.0 Mbits/sec

Conclusion

Here are the average stats from 30-second iperf runs before (PRE) and after (POST) enabling Bottleneck Bandwidth and RTT (BBR) congestion control:

  1. PRE BBR: Transfer: 27.5 MBytes. Bandwidth: 7.15 Mbits/sec
  2. POST BBR: Transfer: 127 MBytes. Bandwidth: 35.0 Mbits/sec
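In other words, average throughput improved roughly five-fold (35.0 / 7.15 ≈ 4.9) over the same 30-second test window.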

BBR is, in my opinion, one of the most significant improvements to the Linux networking stack in recent years. This page demonstrated how to enable and set up BBR on a Linux-based system.

14 comments
  • Ivan Baldo Jul 22, 2017 @ 0:53

    I am using it with fq_codel instead of fq, isn’t that better?
    Maybe you could try to benchmark with it to see if there is any difference.
    Otherwise, nice article!

  • KenP Jul 22, 2017 @ 6:37

So is this server-side only? Will it make any difference if I compile the kernel on my desktop Linux box at home? Stupid question?

    • 🐧 Vivek Gite Jul 22, 2017 @ 13:54

      No. It is server side only.

      • KenP Jul 24, 2017 @ 4:09

        Thanks Vivek

      • ren Jul 26, 2017 @ 16:20

        I know you already answered this but, I was reading cerowrt’s article above (the one posted by johnp) and saw the tcp_upload test showing stable bandwidth and more mb/s for bbr vs cubic (the default in several desktop distros)… so wouldn’t this be useful for desktops too?

  • meh Jul 22, 2017 @ 15:35

    uhm well… i’m not sure it’s actually better than cubic after reading this http://blog.cerowrt.org/post/a_bit_about_bbr/

  • mimi89999 Jul 22, 2017 @ 16:53

    Is that good for a Raspberry Pi running as a home server with a 60/6 Mbit/s Internet link? Would this improve performance?

  • Bogdan Stoica Jul 24, 2017 @ 10:22

    I have tried that but the download/upload speed is garbage. I was able to download with about 30 MB/sec using kernel 3.x and without TCP BBR activated. Installed kernel 4.x and enabled TCP BBR but the download speed was about 1MB/sec (from the exact same source). So I have decided to remove kernel 4.x and stick to the old 3.x kernel.

    • TJ Jul 24, 2017 @ 11:12

      Are you using a dedicated server or cloud server/vps? May I know your distro version?

      • Bogdan Stoica Jul 24, 2017 @ 11:16

        Tried on a vps first, CentOS 7, gigabit connection. It might be because of the settings on the dedicated server where the vps was created (for testing purposes only). I will check more anyway.

  • Bill Aug 30, 2020 @ 2:03

    Applied this fix on my server from Soyoustart, and sure enough my raw throughput went up from 40 Mbps (at best) to 180+ Mbps. Sadly, SSHFS and SMB still behave like crap over Wireguard; the performance has slightly gone up in terms of throughput, and the latency is definitely better now. Still, remote mount points still seem like a distant dream. Thanks for this!

    • 🐧 Vivek Gite Aug 30, 2020 @ 4:45

      Those two protocols will be slow as compared to HTTP/HTTPS and raw speed. Also look into MTU settings for WireGuard. It must be correct.
