I have two servers located in two different data centers. Both servers handle a lot of concurrent large file transfers, but network performance is very poor for large files and gets worse as file size grows. How do I tune TCP under Linux to solve this problem?

By default, the Linux network stack is not configured for high-speed, large file transfers across WAN links; the defaults are chosen to conserve memory. You can easily tune the Linux network stack by increasing the network buffer sizes for the high-speed networks that connect your server systems, so they can handle more network packets.

The default maximum Linux TCP buffer sizes are way too small. TCP memory is calculated automatically based on system memory; you can find the actual values by typing the following commands:
$ cat /proc/sys/net/ipv4/tcp_mem
The default and maximum amount for the receive socket memory:
$ cat /proc/sys/net/core/rmem_default
$ cat /proc/sys/net/core/rmem_max

The default and maximum amount for the send socket memory:
$ cat /proc/sys/net/core/wmem_default
$ cat /proc/sys/net/core/wmem_max

The maximum amount of option memory buffers:
$ cat /proc/sys/net/core/optmem_max
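
If you prefer one command over several cat calls, sysctl can read all of these keys at once; a quick sketch:
$ sysctl net.ipv4.tcp_mem net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max net.core.optmem_max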

Tune values

Set the maximum OS send buffer size (wmem) and receive buffer size (rmem) to 12 MB for queues on all protocols. In other words, set the amount of memory that is allocated for each TCP socket when it is opened or created while transferring files:

WARNING! The default value of rmem_max and wmem_max is about 128 KB in most Linux distributions, which may be enough for a low-latency, general-purpose network environment or for apps such as a DNS or web server. However, if the latency is large, the default size might be too small. Please note that the following settings are going to increase memory usage on your server.

# echo 'net.core.wmem_max=12582912' >> /etc/sysctl.conf
# echo 'net.core.rmem_max=12582912' >> /etc/sysctl.conf
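
The 12 MB figure is not arbitrary; a good rule of thumb is to size the maximum buffer at or above the bandwidth-delay product (BDP) of the path between the two data centers. A rough sketch, assuming a 1 Gbit/s link and about 100 ms of round-trip time (measure your own RTT with ping; 203.0.113.10 is just a placeholder address):
$ ping -c 5 203.0.113.10
$ echo $((1000000000 / 8 * 100 / 1000))
12500000
That is, roughly 12.5 MB of data can be in flight at once on such a path, which is why a 12582912-byte (12 MB) cap is a sensible starting point; adjust it for your actual link speed and latency.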

You also need to set the minimum, default (initial), and maximum sizes in bytes:
# echo 'net.ipv4.tcp_rmem = 10240 87380 12582912' >> /etc/sysctl.conf
# echo 'net.ipv4.tcp_wmem = 10240 87380 12582912' >> /etc/sysctl.conf
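
If you want to try these values on the running system before (or in addition to) editing /etc/sysctl.conf, sysctl -w applies them immediately; a quick sketch using the same values as above:
# sysctl -w net.ipv4.tcp_rmem='10240 87380 12582912'
# sysctl -w net.ipv4.tcp_wmem='10240 87380 12582912'
Note that the third (maximum) value here is the cap used by TCP autotuning, while net.core.rmem_max/wmem_max above cap buffer sizes that applications request explicitly via setsockopt().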

Turn on TCP window scaling, which allows the transfer window to grow beyond 64 KB:
# echo 'net.ipv4.tcp_window_scaling = 1' >> /etc/sysctl.conf
Enable timestamps as defined in RFC1323:
# echo 'net.ipv4.tcp_timestamps = 1' >> /etc/sysctl.conf
Enable selective acknowledgments (SACK):
# echo 'net.ipv4.tcp_sack = 1' >> /etc/sysctl.conf
By default, TCP saves various connection metrics in the route cache when a connection closes, so that connections established in the near future can use them to set their initial conditions. This usually increases overall performance, but it can sometimes cause performance degradation. If this option is set, TCP will not cache metrics from closing connections:
# echo 'net.ipv4.tcp_no_metrics_save = 1' >> /etc/sysctl.conf
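If you are curious what the kernel has cached before turning this off, iproute2 can list the per-destination metrics (a quick sketch; the output depends on your recent connections):
$ ip tcp_metrics show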
Set the maximum number of packets queued on the input side when the interface receives packets faster than the kernel can process them:
# echo 'net.core.netdev_max_backlog = 5000' >> /etc/sysctl.conf
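To see whether the input queue is actually overflowing, check /proc/net/softnet_stat; each row is one CPU, and the second hexadecimal column counts packets dropped because this backlog was full (a rough sketch):
$ cat /proc/net/softnet_stat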
Now reload the changes:
# sysctl -p
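Alternatively, instead of appending everything to /etc/sysctl.conf, you can keep the settings above in a single drop-in file under /etc/sysctl.d/; a sketch, using a hypothetical file name 90-tcp-tuning.conf:
# cat > /etc/sysctl.d/90-tcp-tuning.conf <<'EOF'
net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_no_metrics_save = 1
net.core.netdev_max_backlog = 5000
EOF
# sysctl -p /etc/sysctl.d/90-tcp-tuning.conf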
Use tcpdump to view traffic on eth0 and verify the changes:
# tcpdump -ni eth0
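To confirm that window scaling, SACK, and timestamps are actually being negotiated, capture only SYN packets and look for the wscale, sackOK, and TS options in their headers; a quick sketch (the interface name is an assumption):
# tcpdump -ni eth0 'tcp[tcpflags] & tcp-syn != 0'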


42 comments
  • max Jul 17, 2014 @ 16:42

    Big thanks. It fixed my server performance issues, such as TCP Dup Ack and TCP out-of-order problems.

  • Sepahrad Salour Mar 1, 2015 @ 16:52

    That was a great and simple article… The exact way to calculate the TCP buffer size:

    TCP Buffer Size = RTT * bandwidth
    e.g.

    20 (ms) * 100 (Mbps) = 0.02 * 100 / 8 * 1024 = 256 KB

  • Kanagavelu Jun 10, 2015 @ 6:12

    Does this require a restart of Linux?

  • Chetan Jul 8, 2015 @ 14:43

    How do I do the same for UDP?
    I am using UDP to transfer an RTSP stream from a server to a client over Wi-Fi. I do receive the stream on my client laptop, but after about a minute the whole network collapses and the server PC reports “No buffer space available”.
    As there is no network at that point, the RTSP stream becomes unavailable.
    Please suggest a solution.

  • Zohar Sabari Aug 19, 2015 @ 16:50

    There is an error regarding net.ipv4.tcp_mem: its values are measured in pages (typically 4 KB), not in bytes. I haven’t checked the UDP counterpart.

  • Cauliflower Feb 23, 2016 @ 20:26

    Useful looking article. Our situation is that we have 10 Gb links to two Linux servers, and we want them to talk to each other across a 10 Gb network; it’s all within the same AS. Our users want to send large amounts of data across the network periodically (e.g. every day). The tool they are currently using is rsync, and test results are pretty poor. There seem to be two major factors affecting this (taking contention across the shared network for granted): OpenSSH’s own windowing (for which we are looking at HPN-SSH as a solution), and spikes in traffic which cause TCP sawtoothing. Would you think your solution here would be suitable for our uses?

    We are looking to address the contention issues on the backbone with QoS rules.

    • Helge Oct 14, 2016 @ 8:18

      Use GridFTP

  • Jan Syren Feb 21, 2017 @ 7:52

    Hello and thanks for a great article.

    I wonder if you have any experience with the tc qdisc netem functionality?
    I am trying to set very specific delays in the microsecond range, and I need the spread around the set time to be quite accurate. But as it is at the moment, we have a somewhat higher percentage within the delay we have set, and then a long falloff curve with longer delays. So it seems as if there is a buffer that doesn’t empty properly. Is there a way to actually monitor the buffers, send usage statistics to a file, etc.?

    We are currently using an Emulex oce14102, and the delay is set on a bridge between the two ports of the card. Link speed is 10 Gbps. If we don’t use netem, the bridge itself has good statistics, and when we are sending in just one direction it is also better, but you start noticing the falloff curve.
