MultiPath TCP

I have used MultiPath TCP (MPTCP) to improve internet access at my parents’ home (and our family business). In short, they have two internet connections, from two different Wireless ISPs, both of which are slow and unreliable. Using two servers running Linux with MPTCP, I’m able to intercept all outgoing TCP connections, convert them to MPTCP, route the sub-flows over both ISPs, intercept them again at a high-bandwidth location, and finally connect to the originally intended internet server. This setup allows all internet-bound TCP connections from my parents’ network to utilize both ISPs without either the client or the server supporting MPTCP.
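The interception step above can be sketched with iptables TPROXY rules. This is a sketch, not my exact setup: the interface name, listen port, firewall mark, and routing table number are all assumptions to adapt.

```shell
# On the gateway: divert LAN-originated TCP to a local transparent
# interceptor (e.g. tcp-intercept) listening on port 5000.
# eth0, port 5000, mark 0x1, and table 100 are placeholders.
iptables -t mangle -A PREROUTING -i eth0 -p tcp \
    -j TPROXY --on-port 5000 --on-ip 127.0.0.1 --tproxy-mark 0x1/0x1

# Route marked packets to the local machine so the interceptor receives them.
ip rule add fwmark 0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```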

This page is currently incomplete, and has some random notes that I still need to explain.

Installation

  1. Install Ubuntu Server
  2. Install the MPTCP kernel from the apt repository
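As a sanity check after rebooting into the new kernel, the out-of-tree MPTCP kernel exposes sysctls under net.mptcp; something like the following should confirm it is active (the exact kernel version string will vary):

```shell
# The running kernel should carry an mptcp suffix
uname -r

# 1 means MPTCP is enabled (sysctl from the out-of-tree MPTCP kernel)
sysctl net.mptcp.mptcp_enabled
```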


Linux TCP/IP tuning

/etc/sysctl.d/60-mptcp.conf

net.ipv4.ip_local_port_range = 10000    65535
net.nf_conntrack_max = 50000
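These settings can be applied without a reboot:

```shell
# Load the new file (or run `sudo sysctl --system` to re-apply everything)
sudo sysctl -p /etc/sysctl.d/60-mptcp.conf

# Confirm the widened ephemeral port range took effect
sysctl net.ipv4.ip_local_port_range
```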

/etc/security/limits.d/999999_files.conf

*	soft	nofile	999999
*	hard	nofile	999999

/etc/init/tcp-intercept.conf

Add the following:

limit nofile 100000 100000
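For context, a minimal upstart job carrying that stanza might look like the following; only the limit line is from my notes, and the binary path is a placeholder:

```
description "transparent TCP interceptor"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
limit nofile 100000 100000
exec /usr/local/bin/tcp-intercept
```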


The initial approach: using a SOCKS proxy

Redsocks is a tool that, like tcp-intercept, transparently intercepts TCP connections, but then connects to a SOCKS proxy to complete the TCP connection.
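A minimal redsocks configuration for this setup might look like the following sketch; the addresses and ports are assumptions:

```
base {
    log_info = on;
    daemon = on;
    redirector = iptables;
}

redsocks {
    // where redsocks itself listens for iptables-redirected connections
    local_ip = 127.0.0.1;
    local_port = 12345;
    // the SOCKS server that completes the outbound connection
    ip = 127.0.0.1;
    port = 1080;
    type = socks5;
}
```

Traffic is then steered to redsocks with a NAT rule such as `iptables -t nat -A PREROUTING -i eth0 -p tcp -j REDIRECT --to-ports 12345` (interface and port again being placeholders).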

Using Redsocks and SSH tunneling

My first approach to using MPTCP was to create a persistent SSH tunnel between the two MPTCP-enabled nodes. This tunnel could be established with or without a VPN, and used autossh to detect failures and reconnect. This approach actually worked quite well, minus one small issue: using the SSH tunnel meant that every TCP connection was multiplexed through the single TCP connection of SSH. So if one application saturated the link, by downloading a large file for example, the TCP buffers would fill and drastically hurt the responsiveness of all other TCP connections. For example, if a person tried to load a webpage over the MPTCP link, it took a long time (2 to 3 seconds) just to connect to the server, because the request had to wait in the TCP queues of the SSH tunnel. It also hurt the interactivity of SSH sessions over the WAN link, because it took seconds to receive character echoes.
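The tunnel itself was the standard autossh pattern; the hostname and port below are placeholders:

```shell
# Persistent SOCKS tunnel to the remote MPTCP node; reconnects on failure.
# -M 0 disables autossh's monitor port in favor of SSH's own keepalives.
autossh -M 0 -N -D 1080 \
    -o ServerAliveInterval=15 -o ServerAliveCountMax=3 \
    user@remote-mptcp-node
```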

Using Redsocks and Dante

In trying to improve upon the issues of the SSH tunnel, the logical next step was to create a new MPTCP connection over the WAN for each intercepted TCP connection. So instead of configuring redsocks to connect to SSH's locally-listening SOCKS server, I simply configured it to connect to a SOCKS server running on the remote MPTCP node. Dante is the server I used for this, although I found that simply SSHing to localhost to create a SOCKS server was often the simplest approach. This resolved the queue-blocking issue I had with SSH, but now all TCP connections took longer to establish, because the SOCKS protocol has to make at least one extra round trip over the WAN to tell the remote MPTCP node which destination server to connect to.
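The SSH-to-localhost trick mentioned above is just a one-liner run on the remote MPTCP node (the port and bind address are assumptions):

```shell
# A throwaway SOCKS5 server on port 1080; the 0.0.0.0 bind address makes
# it reachable from other hosts, not just loopback.
ssh -N -D 0.0.0.0:1080 localhost
```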

Links