
TCP Tuning

How to tune TCP connections on your network


NOTE: These methods are best suited to dedicated servers that run few other applications, for example a dedicated Syncrify Server.



In this edition, we're exploring some of the factors that affect network performance.  By the end, you will understand the three elements that affect TCP and network performance for a specific application, along with a few ways to tune your setup for faster speeds.


The first thing we need to understand, before we tune anything, is where the most common problem areas are.  Sometimes there is a single problem in the chain of communication that is easily fixed; sometimes there are many that need more attention.  Because TCP is designed to be a "reliable" protocol, some issues 'heal themselves' during the communication process.  These 'self-healing' or 'reliable' characteristics of TCP can make it difficult to troubleshoot.  So, the first step is to understand the elements involved, mainly:


  • The Computer: The machine where the application runs, in the context of its operating system
  • The Application: The software itself
  • The Network: The steps (hops) the data takes to get from the source to the destination

Not all of the interrelations between TCP and the elements described above are obvious or even intuitive. For example, a TCP connection performs best when the stream is kept completely full of data.  While you might assume that a stream full of data would slow the network down, the opposite is true.  With that said, here's how this relates to each element in the list above:


  • The Application:  Some software performs poorly on moderately to extremely long network paths because it must keep enough data in flight to cover the speed-of-light (propagation) delay.  If the software limits the amount of data it puts on the network, performance suffers.
  • The Network: Flaws on the network path (such as lost or reordered packets) are compensated for by TCP as they are encountered.  The time it takes to recover from each flaw is proportional to the RTT, or round-trip time, of the connection, and this adds up as the scale increases.



The key is to know your network path.  From that we can calculate the Bandwidth Delay Product (the bandwidth of the slowest link in the chain multiplied by the round-trip time), which in turn tells us what the OS settings should be.


To calculate the Bandwidth Delay Product, do the following:


  • Issue a ping command to the target host and take the LONGEST round-trip time reported in the last field, e.g. time=11.7065 ms

  • Convert that RTT to seconds and multiply it by the bandwidth of the slowest link in the path, expressed in bytes per second (a 1 Gbps link is 1,000,000,000 bits per second, or 125,000,000 bytes per second)

  • The result is the total potential amount of data that can be in transit on the network at any given time (see the worked example below)
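
As a rough worked example (assuming the 11.7 ms ping above and a 1 Gbps bottleneck link; your numbers will differ):

0.0117 s x 125,000,000 bytes/s = 1,462,500 bytes, or roughly 1.4 MB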


What's the result?  Greater than 64 KB?  I can almost guarantee that it is.


Why is comparing 64 KB to the Bandwidth Delay Product significant?  Because by default TCP uses a 64 KB buffer, and a sender can only keep about one buffer's worth of unacknowledged data in flight per round trip.  If that buffer is smaller than the Bandwidth Delay Product, the link can never be kept full and throughput is capped.
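
To put a number on the impact (again assuming the 11.7 ms RTT from the example above):

65,536 bytes / 0.0117 s ≈ 5.6 MB/s, or roughly 45 Mbps, no matter how fast the underlying link is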


Here's how to change some parameters based on OS:


FreeBSD / BSD:


Read and change values:

read val: sysctl [parameter]

change val: sysctl -w [parameter]=[value]
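
For example, to read the current maximum socket buffer size:

sysctl kern.ipc.maxsockbuf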



Raise the max socket buffer size:


sysctl -w kern.ipc.maxsockbuf=4000000


 

Increase max send and receive buffer sizes:

net.inet.tcp.sendbuf_max=16777216

net.inet.tcp.recvbuf_max=16777216
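
These lines are in /etc/sysctl.conf format; to apply the same values at runtime, use sysctl -w:

sysctl -w net.inet.tcp.sendbuf_max=16777216

sysctl -w net.inet.tcp.recvbuf_max=16777216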


Set the default TCP and UDP buffer sizes (in /etc/sysctl.conf):


net.inet.tcp.sendspace

net.inet.tcp.recvspace

net.inet.udp.recvspace
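
For example, with illustrative values (the right numbers depend on your Bandwidth Delay Product):

net.inet.tcp.sendspace=262144

net.inet.tcp.recvspace=262144

net.inet.udp.recvspace=65536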



When buffer sizes are increased, make sure TCP window scaling is in effect by:

verifying tcp_extensions="YES" in /etc/rc.conf

Also verify the window scaling option in /etc/sysctl.conf:

net.inet.tcp.rfc1323
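
A quick runtime check (a value of 1 means window scaling is on; if it reports 0 it can be enabled as shown):

sysctl net.inet.tcp.rfc1323

sysctl -w net.inet.tcp.rfc1323=1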


Turn off 'inflight limiting' with:


sysctl -w net.inet.tcp.inflight_enable=0


Confirm SACK (Selective Acknowledgements) is enabled:


net.inet.tcp.sack.enable
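
To check it (a value of 1 means SACK is on; enable it if needed):

sysctl net.inet.tcp.sack.enable

sysctl -w net.inet.tcp.sack.enable=1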



Turn off MTU discovery if it is not needed:

sysctl -w net.inet.tcp.path_mtu_discovery=0







Mac OS X:


Set the max socket buffer size (to at least 2x the Bandwidth Delay Product):


sysctl -w kern.ipc.maxsockbuf=8000000 



Set the send and receive buffer sizes:


sysctl -w net.inet.tcp.sendspace=4000000

sysctl -w net.inet.tcp.recvspace=4000000
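
To make these survive a reboot, OS X of this era typically reads /etc/sysctl.conf at boot, so the same settings can usually be added there (worth confirming on your particular release):

kern.ipc.maxsockbuf=8000000

net.inet.tcp.sendspace=4000000

net.inet.tcp.recvspace=4000000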




That concludes this post on tuning your TCP connections.  You can find third-party tools, both paid and free, that handle this optimization and analysis for you.  Thanks for stopping by.




