How to tune TCP connections on your network
NOTE: These methods are best reserved for dedicated servers running few other applications. For example, a dedicated Syncrify Server.
In this blog, we're going to explore some of the factors that affect network performance. You will come away understanding the three factors that affect TCP and network performance in a given application, as well as how to tune your setup for faster speeds.
Before tuning anything, you first need to understand the most common problem areas. Sometimes there is a single problem in the chain of communication that can easily be fixed; sometimes other problems need more attention.
Because TCP is a "reliable" protocol, some issues 'heal themselves' during the communication process. These 'self-healing' or 'reliable' characteristics of TCP can make it difficult to troubleshoot. So, the first step in understanding the process is to understand the elements involved, mainly:
Not all of the interrelations between TCP and the elements described above are obvious or even intuitive. For example, is it true that a TCP connection performs best when the stream is completely full of data? While you might expect a stream full of data to slow the network down, the complete opposite is true.
With that being said, here's an explanation of how this may relate to each element in the aforementioned list:
The key is to know your network path. From there we can calculate some helpful network information: the Bandwidth Delay Product (the product of the bandwidth of the slowest bottleneck link in the chain and the round-trip time) and the appropriate settings for OS-level optimization.
The calculation of the Bandwidth Delay Product is as follows:
BDP (bytes) = bandwidth (bytes/sec) x round-trip time (sec)
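As a worked example (the link speed and latency here are purely illustrative), a 100 Mbit/s path with a 50 ms round-trip time gives:

```shell
# BDP = bandwidth (bits/sec) * RTT (sec), converted to bytes
BANDWIDTH_BPS=100000000   # 100 Mbit/s bottleneck link
RTT_MS=50                 # 50 ms round-trip time
BDP_BYTES=$(( BANDWIDTH_BPS * RTT_MS / 1000 / 8 ))
echo "$BDP_BYTES"         # prints 625000 (bytes), roughly ten times the 64 KB default
```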
The result is almost always greater than 64 KB.
Why is the comparison to 64 KB significant? Because, by default, TCP uses a 64 KB buffer, and that buffer can limit performance.
Here's how to change some parameters based on OS:
Read and change values:
read val: sysctl [parameter]
change val: sysctl -w [parameter]=[value]
FOR FREEBSD
Raise the max socket buffer size:
sysctl -w kern.ipc.maxsockbuf=4000000
Increase max send and receive buffer sizes:
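A sketch of raising those maximums (the parameter names assume FreeBSD 7 or later, where the kernel auto-tunes socket buffers up to these ceilings; the values are illustrative):

```shell
# Raise the ceilings for TCP send/receive buffer auto-tuning (bytes)
sysctl -w net.inet.tcp.sendbuf_max=4000000
sysctl -w net.inet.tcp.recvbuf_max=4000000
```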
Set default TCP and UDP buffer sizes (in /etc/sysctl.conf):
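A sketch of what those defaults might look like in /etc/sysctl.conf (the parameter names are the FreeBSD ones; the 256 KB values are illustrative, so size them against your own Bandwidth Delay Product):

```shell
# /etc/sysctl.conf -- default socket buffer sizes, in bytes
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
net.inet.udp.recvspace=262144
```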
When buffer sizes are increased, enable the TCP Window Scaling option by verifying that 'tcp_extensions="YES"' is set in /etc/rc.conf.
Verify the window scaling option like this in /etc/sysctl.conf:
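On FreeBSD, window scaling is part of the RFC 1323 TCP extensions, so the entry to look for (assuming the classic parameter name) is:

```shell
# /etc/sysctl.conf -- 1 enables RFC 1323 extensions (window scaling and timestamps)
net.inet.tcp.rfc1323=1
```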
Turn off 'inflight limiting' by:
sysctl -w net.inet.tcp.inflight_enable=0
Confirm SACK (Selective Acknowledgements) is enabled:
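Assuming the FreeBSD parameter name, you can check SACK and, if needed, enable it like this:

```shell
# 1 means SACK is enabled
sysctl net.inet.tcp.sack.enable
# Enable it if the value printed was 0
sysctl -w net.inet.tcp.sack.enable=1
```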
Turn off MTU discovery if not needed with:
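A sketch, assuming the FreeBSD parameter name (only disable this if you know the path MTU is stable end to end):

```shell
# 0 disables TCP path MTU discovery
sysctl -w net.inet.tcp.path_mtu_discovery=0
```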
FOR MAC OSX
Set max socket buffer size (at least 2x the Bandwidth Delay Product)
sysctl -w kern.ipc.maxsockbuf=8000000
Set the send receive buffer sizes:
sysctl -w net.inet.tcp.sendspace=4000000
sysctl -w net.inet.tcp.recvspace=4000000
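Note that values set with sysctl -w are lost on reboot. OS X has historically read /etc/sysctl.conf at boot, so (assuming your version still supports that file) the same settings can be made persistent like this:

```shell
# /etc/sysctl.conf -- persist the buffer sizes set above
kern.ipc.maxsockbuf=8000000
net.inet.tcp.sendspace=4000000
net.inet.tcp.recvspace=4000000
```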