Why thirteen is lucky for broadband speed tweaking
Net nostalgia: One of my early columns for PC World explained how you could speed up your internet connection by tweaking some low-level network settings within the operating system. This was thirteen years ago (gulp), just as DSL arrived in New Zealand.
Few people could afford "D$L" back then, thanks to Telecom’s outrageously high pricing, and internet connectivity was primarily via 56k modem.
There was quite a bit of mileage to be had from tweaking, provided you were willing to dig through obscure Control Panel and Registry settings. Kiwis were keen tweakers mainly because much of the internet is such a long way from New Zealand: data packets take a while to reach their destinations from here, and that inevitably affects performance.
DSL made things easier, and we no longer worry about modem UART buffers and similar tech-arcana. In fact, today’s broadband is dead simple to operate in comparison.
The big difference between now and then is that line speeds have increased for most people. I was rapt to see speeds of 1.8Mbit/sec from local servers in November 1999, even though the first-generation Nokia M10 ADSL modem actually connected at 5.3Mbit/sec.
Fast forward to 2012, and ADSL2+ means 15 to 20Mbit/sec downloads for many people (but only 1Mbit/sec up). My VDSL2 connection is set to 70Mbit/sec down and 10Mbit/sec up.
Thank goodness, New Zealand ISPs have built out their networks and hugely improved quality and performance to match the higher speeds - I don’t think anyone backhauls DSL traffic over high-latency satellite links as they did back then.
We still have that distance problem to deal with in New Zealand, however. Ping tells me the US West Coast is 140ms away, and round trip times for data packets to Europe are over 300ms.
The protocol that carries the vast majority of internet data, Transmission Control Protocol (TCP), requires acknowledgements confirming that the packets it has sent were actually received before it will send more. While it waits for those acknowledgements, TCP simply stops sending. This isn’t much of a problem on a low-latency local network, but once round trip times hit 50-150ms, TCP can’t keep the data pipe full unless it compensates for the latency somehow: the receive window has to be big enough to hold a full round trip’s worth of data in flight. For optimum performance with 200ms of packet latency, I would have to set a receive window for my connection of well over a megabyte - far more than TCP’s original 65,535-byte maximum.
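If you want to see where that megabyte-plus figure comes from, here’s a rough bandwidth-delay product calculation in Python; the link speeds and round trip times are illustrative numbers, not measurements.

    # Bandwidth-delay product: how much data must be "in flight" to keep a
    # link busy at a given speed and round trip time. Illustrative figures only.

    def window_needed_bytes(link_mbit_per_s: float, rtt_ms: float) -> float:
        """Receive window (in bytes) needed to keep the pipe full."""
        bits_in_flight = link_mbit_per_s * 1_000_000 * (rtt_ms / 1000.0)
        return bits_in_flight / 8

    for speed, rtt in [(70, 200), (70, 140), (20, 300)]:
        needed = window_needed_bytes(speed, rtt)
        print(f"{speed} Mbit/s at {rtt} ms RTT needs a ~{needed / 1024:.0f} KB window "
              f"({needed / 65_535:.1f}x the classic 64 KB maximum)")

At 70Mbit/sec and 200ms, that works out to roughly 1.7 megabytes in flight - hence the need for something far beyond the old 64KB ceiling.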
Making sure TCP worked well on high-latency networks used to be a manual process that involved poking around in the Registry on Windows or setting sysctls in *BSD. Such ju-ju is, I’m pleased to report, no longer necessary. Windows 7 has an auto-tuning TCP/IP stack that identifies high-latency connections and enables the right settings, ditto Apple OS X.
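If you’re curious what your own machine starts with, a few lines of Python will show the default receive buffer the operating system hands a brand-new TCP socket. Treat it as a starting point rather than a ceiling - an auto-tuning stack will grow the effective window well beyond this figure on a long, high-latency connection.

    import socket

    # Peek at the default receive buffer for a fresh TCP socket.
    # Auto-tuning stacks grow the effective window past this on demand,
    # so this is the floor, not the limit.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    default_rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    s.close()
    print(f"Default receive buffer: {default_rcvbuf} bytes (~{default_rcvbuf / 1024:.0f} KB)")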
Testing is easier too, thanks to local and overseas Measurementlab.net nodes. These are a joint effort by Google, Skype, Amazon, Bittorrent and others, and they provide some very handy open source tools for diagnosing your internet connection.
The good news: with a new TCP stack, you’ll get good performance despite high latency.
I tested with DiffProbe and other tools against servers in Sweden (305ms) and elsewhere overseas and hit 56 to 60Mbit/sec down, and 9.5Mbit/sec up. Pipe full, in other words.
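You don’t need anything fancy to run a rough version of the same check yourself. The sketch below simply times a large download and works out the average rate; the URL is a placeholder, so point it at a big test file hosted in whichever faraway place you want to measure against.

    import time
    import urllib.request

    # Crude throughput check: time a large download and report the average rate.
    # TEST_FILE_URL is a placeholder - substitute a large test file on a server
    # in the region you care about.
    TEST_FILE_URL = "https://example.com/100MB.bin"

    start = time.monotonic()
    total_bytes = 0
    with urllib.request.urlopen(TEST_FILE_URL) as response:
        while True:
            chunk = response.read(64 * 1024)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.monotonic() - start

    print(f"Fetched {total_bytes / 1_000_000:.1f} MB in {elapsed:.1f} s "
          f"= {total_bytes * 8 / elapsed / 1_000_000:.1f} Mbit/s")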
Modern TCP seems to work and won’t - as I’ve heard a surprising number of people state - limit your throughput from the US to around 9Mbit/s. If you don’t get full speed from your broadband connection, the cause is more likely to be other choke points than TCP. ISPs rate-limit certain traffic, webservers often can’t dish stuff up fast enough, and older routers don’t understand newer TCP features such as window scaling and ignore them, putting everything into go-slow mode.
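To see why a middlebox that ignores window scaling hurts so much, run the earlier arithmetic the other way: the window size divided by the round trip time puts a hard ceiling on throughput. The figures below are illustrative, assuming the 140ms round trip to the US West Coast mentioned above.

    def ceiling_mbit_per_s(window_bytes: int, rtt_ms: float) -> float:
        """Best-case throughput when one window of data can be in flight per round trip."""
        return window_bytes * 8 / (rtt_ms / 1000.0) / 1_000_000

    RTT_MS = 140  # roughly the round trip from New Zealand to the US West Coast

    for window in (65_535, 1_048_576):  # unscaled maximum vs. a scaled 1 MB window
        print(f"{window:>9}-byte window at {RTT_MS} ms: "
              f"~{ceiling_mbit_per_s(window, RTT_MS):.1f} Mbit/s ceiling")

Stuck at an unscaled 64KB window, the best you can hope for over that distance is under 4Mbit/s, no matter how fast the line is.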
All of which means we should fix the broken stuff at our end of the internet so we’re not disappointed when Ultra-Fast Broadband connections become available. This could mean anything from a router upgrade to a new version of your operating system. Don’t expect superfast broadband through Windows XP and early-generation Wi-Fi, for instance.
To some extent, you can work around the above issues with caches and content delivery networks (CDNs), which shorten paths and reduce latency. That way your box isn’t talking to the actual server overseas, but fetching the content from a local copy instead.
However, these break the end-to-end principle of the internet and create their own set of headaches. For example, Vodafone and TelstraClear’s transparent caches prevent their customers from using anti-geoblocking services such as unblock-us.com.
Telco and ISP gear vendors will take out a contract on me for saying this, but blithely spending hundreds of millions of dollars on caches and content delivery networks around the country to deal with what’s essentially a maintenance issue makes less sense than getting to the root of the performance problem. UFB providers may wish to think about this.