Hugely improve network performance by switching TCP congestion control to BBR
What is BBR
BBR stands for Bottleneck Bandwidth and Round-trip propagation time (RTT). The BBR congestion control algorithm computes the sending rate based on the delivery rate (throughput) estimated from ACKs.
BBR was contributed by Google and has been part of the Linux kernel since version 4.9, released in 2016.
BBR has significantly increased throughput and reduced latency for connections on Google's internal backbone networks and google.com and YouTube Web servers.
BBR requires only changes on the sender side, not in the network or the receiver side. Thus it can be incrementally deployed on today's Internet, or in datacenters.
How to enable BBR
BBR requires Linux kernel version 4.9 or above. Use uname -a to check your Linux kernel version:
$ uname -a
Linux pi3 4.19.97-v7+ #1294
List available congestion control algorithms and your current setting:
$ sysctl net.ipv4.tcp_available_congestion_control
net.ipv4.tcp_available_congestion_control = reno cubic
$ sysctl net.ipv4.tcp_congestion_control
net.ipv4.tcp_congestion_control = cubic
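If bbr does not show up in the available list even though your kernel is 4.9 or newer, BBR may be built as a loadable module (tcp_bbr) that simply is not loaded yet. On such kernels, loading it manually should make it appear (exact output will vary):
$ sudo modprobe tcp_bbr
$ sysctl net.ipv4.tcp_available_congestion_control
net.ipv4.tcp_available_congestion_control = reno cubic bbr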
To enable BBR, add the following lines to /etc/sysctl.conf (sysctl.conf only treats whole lines starting with # as comments, so keep the note on its own line):
# BBR must be used with the fq qdisc, see note below
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
Then reload /etc/sysctl.conf:
$ sudo sysctl -p
...
...
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
Now you can double-check to make sure bbr is enabled:
$ sysctl net.ipv4.tcp_congestion_control
net.ipv4.tcp_congestion_control = bbr
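This sysctl sets the default for new connections only; existing connections keep their previous algorithm. To verify what live connections are actually using, ss -ti prints per-connection TCP details, with the congestion control algorithm name at the start of each connection's info line (output abbreviated here):
$ ss -ti
...
	 bbr wscale:7,7 rto:... rtt:... ...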
Note
BBR must be used with the fq qdisc with pacing enabled, since pacing is integral to the BBR design and implementation.
BBR without pacing would not function properly and may incur unnecessarily high packet loss rates.
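Note that net.core.default_qdisc only applies to qdiscs created after the change, so an interface that was already up may keep its old qdisc until it is re-initialized. You can check (and, if needed, switch) the qdisc explicitly; eth0 below is a placeholder for your interface name:
$ tc qdisc show dev eth0
qdisc fq 8001: root refcnt 2 limit 10000p ...
$ sudo tc qdisc replace dev eth0 root fq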
TCP throughput improvement
Google Search and YouTube deployed BBR and gained TCP performance improvements.
Example performance results, to illustrate the difference between BBR and CUBIC:
- Resilience to random loss (e.g. from shallow buffers):
Consider a netperf TCP_STREAM test lasting 30 secs on an emulated path with a 10Gbps bottleneck, 100ms RTT, and 1% packet loss rate. CUBIC gets 3.27 Mbps, and BBR gets 9150 Mbps (2798x higher).
- Low latency with the bloated buffers common in today's last-mile links:
Consider a netperf TCP_STREAM test lasting 120 secs on an emulated path with a 10Mbps bottleneck, 40ms RTT, and 1000-packet bottleneck buffer. Both fully utilize the bottleneck bandwidth, but BBR achieves this with a median RTT 25x lower (43 ms instead of 1.09 secs).
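If you want to measure the effect on your own link, a simple before/after throughput test is enough. A minimal sketch with iperf (192.0.2.1 below is a placeholder for your server's address):
$ sudo apt-get install -y iperf
On the receiving host, start a server:
$ iperf -s
On the sending host (BBR only needs to be enabled on the sender), run a test with the current algorithm, switch to bbr as described above, and run it again:
$ iperf -c 192.0.2.1 -t 30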
AWS CloudFront
During March and April 2019, AWS CloudFront deployed BBR. From the AWS blog post TCP BBR Congestion Control with Amazon CloudFront:
Using BBR in CloudFront has been favorable overall, with performance gains of up to 22% improvement on aggregate throughput across several networks and regions.
Shadowsocks
I have a Shadowsocks server running on a Raspberry Pi. Without BBR, the client download speed is around 450 KB/s; with BBR, it improves to 3.6 MB/s, roughly 8 times the default.
BBR v2
There is ongoing work on BBR v2 (still in the alpha phase).
Reference
- BBR Project: source code, documentation etc.
- Linux kernel commit: tcp_bbr: add BBR congestion control
- man tc-fq