
TCP Tuning

How to tune TCP connections on your network

NOTE: These methods are best reserved for dedicated servers running few other applications, such as a dedicated Syncrify Server.

In this blog, we're going to explore and optimize some of the factors that affect network performance.  You will learn the three factors that affect TCP and network performance for a specific application, as well as how to tune the setup for faster speeds.

The first thing you need to understand before tuning anything is where the most common problem areas are.  Sometimes there is one problem in the chain of communication that can easily be fixed; other times, several areas need attention.

Because TCP is known to be a "reliable" protocol, some issues 'heal themselves' during the communication process.  These 'self-healing' or 'reliable' characteristics of TCP can make it difficult to troubleshoot.  So, the first step in understanding the process is to understand the elements involved, namely:

  • The Computer: Where an application runs, in the context of an operating system
  • The Application:  The software itself
  • The Network:  The steps (hops) the data takes to get to the destination from the source

Not all of the interrelations between TCP and the elements described above are obvious or even intuitive. For example, did you know that a TCP connection performs best when the stream is completely full of data?  While you may think that a stream full of data would slow the network down, the complete opposite is true.

With that being said, here's an explanation of how this may relate to each element in the aforementioned list:

  • The Application:  Some software performs below par on network paths that are moderately to extremely long, because on such paths the application must keep enough data in flight to overlap the speed-of-light delay.  If the software limits the amount of data on the network, performance is impeded.
  • The Network:  Any network path will have flaws, and TCP compensates for them as they are encountered.  The time this recovery takes is proportional to the RTT, or round-trip time, of the connection.  This can add up as the scale increases.

The key is to know your network path.  From there we can calculate the Bandwidth Delay Product (the bandwidth of the slowest link, or bottleneck, in the chain multiplied by the round-trip time) and derive the settings for OS optimization.

The calculation of Bandwidth Delay Product is as follows:

  • Issue a ping command to the target and take the LONGEST time in the last field, e.g. time=11.7065 ms, then convert it to seconds (0.0117065 s)
  • Multiply by the bandwidth of the bottleneck link in bytes per second.  For a 1 Gbit/s link, that is 1,000,000,000 / 8 = 125,000,000 bytes per second
  • This will give you the total potential amount of data that can be in transit on the network at a given time.
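The steps above can be sketched as a quick calculation (assuming, for illustration, a 1 Gbit/s bottleneck link and the 11.7065 ms ping time mentioned above):

```python
# Bandwidth Delay Product = bottleneck bandwidth (bytes/s) x round-trip time (s).
rtt_seconds = 11.7065 / 1000           # the "time=" field from ping, in seconds
bandwidth_bytes = 1_000_000_000 / 8    # 1 Gbit/s link = 125,000,000 bytes/s

bdp = bandwidth_bytes * rtt_seconds    # bytes that can be in flight at once
print(f"BDP: {bdp:,.0f} bytes")        # roughly 1.46 MB for this path
```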

The result is almost always greater than 64 KB.

The question now is: why is comparing 64 KB to the Bandwidth Delay Product significant?  Because, by default, TCP employs a 64 KB buffer, and that can limit performance.
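To see why, consider the ceiling a 64 KB buffer puts on a single connection: TCP can never have more than one buffer's worth of unacknowledged data in flight, so throughput tops out at the buffer size divided by the round-trip time (again using the example 11.7065 ms RTT):

```python
# Maximum throughput of one TCP connection = window (buffer) size / round-trip time.
buffer_bytes = 64 * 1024               # the 64 KB default buffer
rtt_seconds = 11.7065 / 1000           # example RTT from the ping above

max_bytes_per_sec = buffer_bytes / rtt_seconds
print(f"ceiling: {max_bytes_per_sec * 8 / 1e6:.0f} Mbit/s")  # ~45 Mbit/s
```

On a 1 Gbit/s path, that leaves over 95% of the capacity unused, which is exactly what raising the buffer sizes addresses.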

Here's how to change some parameters based on OS:

BSD:

Read and change values:

Read a value:   sysctl [parameter]

Change a value: sysctl -w [parameter]=[value]

Raise the max socket buffer size:

sysctl -w kern.ipc.maxsockbuf=4000000


Increase max send and receive buffer sizes:

sysctl -w net.inet.tcp.sendbuf_max=4000000
sysctl -w net.inet.tcp.recvbuf_max=4000000

(Parameter names vary by BSD flavor; sendbuf_max and recvbuf_max are the FreeBSD names.)
Set default UDP and TCP buffer sizes (in /etc/sysctl.conf):

net.inet.tcp.sendspace=4000000
net.inet.tcp.recvspace=4000000
net.inet.udp.recvspace=4000000
Make sure the TCP Window Scaling option is on when buffer sizes are increased by:

verifying 'tcp_extensions="YES"' in /etc/rc.conf

Verify the window scaling option like this in /etc/sysctl.conf:

net.inet.tcp.rfc1323=1
Turn off 'inflight limiting' by:

sysctl -w net.inet.tcp.inflight_enable=0

Confirm SACK (Selective Acknowledgements) is enabled:

sysctl net.inet.tcp.sack.enable

(A value of 1 means SACK is on; enable it with sysctl -w net.inet.tcp.sack.enable=1 if not.)
Turn off MTU discovery if not needed with:

sysctl -w net.inet.tcp.path_mtu_discovery=0
Set max socket buffer size (at least 2x the Bandwidth Delay Product):

sysctl -w kern.ipc.maxsockbuf=8000000 

Set the send and receive buffer sizes:

sysctl -w net.inet.tcp.sendspace=4000000
sysctl -w net.inet.tcp.recvspace=4000000

Created on: Oct 15, 2014
Last updated on: Jun 23, 2022
