Recently, I’ve had to deal with files in the region of 50-100GB, whereas normally a big file would be 2-5GB or so. The problems you’d expect with files this size are in opening them, compressing and decompressing them, and moving them around.
The Xeon W3550 CPU and 12GB of DDR-3 RAM in my HP Z400 workstation crunch through those gigabytes of data without breaking a sweat. Ditto the new SATA-3 OCZ Vertex 3 solid state drive with its 550Mbyte/s reads and writes. Compressing and decompressing even massive files is fast thanks to the quick I/O and multi-core processor ripping through the files in no time at all.
Moving or backing up the data to other systems or external drives over the network or an external bus is another story. ‘Painfully slow’ best describes the situation. What’s more, there doesn’t seem to be anything affordable on the horizon that’ll make things go faster, not for a while at least.
For example, the 802.11n Wi-Fi connection in the computer reckons it’s talking to the router at 270Mbit/s, which sounds amazingly good. Unfortunately, that translates into real-life throughput of 95-110Mbit/s, which is horribly inefficient. Move away from the Wi-Fi access point, or have other devices join in and share the bandwidth, and the speed halves.
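The gap between the quoted link rate and real throughput translates directly into waiting time. As a rough sketch (a hypothetical helper; the rates are the ballpark figures above, not measurements):

```python
# Rough transfer-time estimate for a big file over a given link.
# Decimal units throughout; rates are illustrative assumptions.

def transfer_hours(file_gb: float, rate_mbit_s: float) -> float:
    """Hours to move file_gb gigabytes at a sustained rate in Mbit/s."""
    megabits = file_gb * 1000 * 8  # GB -> Mbit (decimal units)
    return megabits / rate_mbit_s / 3600

# A 100GB file at the ~100Mbit/s real-world Wi-Fi rate:
print(f"{transfer_hours(100, 100):.1f} hours")  # about 2.2 hours
```

Even at the full 270Mbit/s link rate it would still take the better part of an hour, which is why those 50-100GB files hurt so much over wireless.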
The spotty performance of Wi-Fi today doesn’t fill me with confidence that new wireless technologies such as 60GHz WiGig will deliver anything near the promised 6-7Gbit/s. Besides, with a reach of a mere ten to fifteen metres and a high signal frequency that penetrates walls and objects poorly, it’ll be interesting to see how useful the likes of WiGig turn out to be in reality.
Going cabled via a USB 2.0 external drive isn’t much better. The optimistic 480Mbit/s the technology promises includes protocol overhead, and shrinks to 25-35Mbyte/s in practice, and that’s a best-case scenario as well. The new backwards-compatible USB 3.0 standard promises a dizzying 5Gbit/s peak speed, and you can expect something like 200Mbyte/s of throughput with fast devices under ideal conditions.
This is good stuff, as it’s almost twice as fast as 1Gbit/s Ethernet. However, it’s not as quick as even a 3Gbit/s SATA port, let alone the new 6Gbit/s ones, and USB 3.0 drives are still fairly rare and small (I don’t have one).
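Converting the nominal bit rates into byte rates makes that comparison concrete. A quick sketch (the ‘effective’ figures are the rough numbers quoted above, assumed rather than benchmarked):

```python
# Nominal link rate vs. rough effective throughput for the interfaces
# discussed. Effective figures are ballpark assumptions, not benchmarks.

links = {
    # name: (nominal Mbit/s, rough effective Mbyte/s)
    "USB 2.0":     (480,    30),
    "1G Ethernet": (1000,  110),
    "SATA 3Gbit":  (3000,  280),
    "USB 3.0":     (5000,  200),
}

for name, (nominal_mbit, effective_mbyte) in links.items():
    raw_mbyte = nominal_mbit / 8  # bits -> bytes, ignoring protocol overhead
    print(f"{name:12s} raw {raw_mbyte:6.0f} Mbyte/s, ~{effective_mbyte} Mbyte/s in practice")
```

The table shows why USB 3.0 beats gigabit Ethernet comfortably but still can’t saturate what a fast SATA drive can deliver.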
I suspect some of the above were the reasons behind Apple’s decision to use Intel’s new Thunderbolt interface, which provides 10Gbit/s speeds. Not just in one direction, but both up and down, for a total of 20Gbit/s. Plus, you can run displays off a Thunderbolt cable. That requires active cabling with transceivers at each end, however, and so far the only major manufacturer using Thunderbolt is Apple. Nobody else seems interested, which is a shame, as Thunderbolt speeds are what we should be expecting right now.
While I am totally impressed that Intel’s getting 20Gbit/s out of copper over three metres, I was more interested in Thunderbolt’s first incarnation, Light Peak. At the Intel Developer Forum demo some two years ago, Light Peak was shown off pushing data and high-definition video streams over thirty metres of optical connection.
In optical form, Light Peak is supposed to scale up to 100Gbit/s. Couple that with the possibility of running long cable lengths and you have, in my mind at least, a winner. Maybe an expensive one but dammit, would you say no to 100Gbit/s transfer rates?
Ideally though, I’d like to keep everything networked and not futz around with external devices. My LAN’s been gigabit-ised since 2004, and everyone’s network should be at least that fast in 2011. When I first installed gigabit Ethernet NICs, motherboard chipsets simply weren’t fast enough to drive them, and neither were hard drives. Only lately, with gigabit Ethernet interfaces hanging off PCI Express buses and paired with fast drives, am I able to hit 110-120Mbyte/s.
Now, if I could upgrade to 10Gbit Ethernet, and scale performance accordingly, I’d be happy and have some margin for the future. In fact, I expected to be at 10GE by now, but that technology still isn’t at affordable levels even though the specification was fixed in 2006.
Installing 10GE would give the same reach as 1GE, around 100 metres over unshielded twisted-pair copper, albeit with new Category 6A cabling that looks like a pain to install. Cat 6A cable also happens to be really expensive, costing up to three times as much as the Cat 5E I normally use. Switches and NICs are also very pricey, and Intel is doggedly sticking with 1GE on its roadmaps, possibly because it’d be difficult to furnish the full 1.25Gbyte/s that 10GE can deliver at consumer price levels.
What this means is that 10GE isn’t likely to come out of the data centre any time soon, leaving our LANs as the bottleneck. Given that we store more and more data on our terabyte and bigger drives and want to share that, this is something vendors should remedy sooner rather than later.
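To put that LAN bottleneck in perspective for those terabyte drives, a final back-of-the-envelope sketch (the 1GE rate is the rough real-world figure from above; the 10GE rate is a hypothetical assuming it scaled accordingly):

```python
# Hours to copy a full 1TB drive at a sustained throughput in Mbyte/s.
# Rates are assumptions based on the rough figures discussed above.

def copy_hours(size_tb: float, rate_mbyte_s: float) -> float:
    megabytes = size_tb * 1_000_000  # TB -> Mbyte (decimal units)
    return megabytes / rate_mbyte_s / 3600

print(f"1GE  at ~110 Mbyte/s:  {copy_hours(1, 110):.1f} h")   # about 2.5 h
print(f"10GE at ~1100 Mbyte/s: {copy_hours(1, 1100):.2f} h")  # about 0.25 h
```

A two-and-a-half-hour backup shrinking to fifteen minutes is the kind of difference that would make 10GE worth paying for outside the data centre.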