Coded TCP boosts wireless connection speed, eliminates packet loss

Shawn Knight


Researchers at Caltech, Harvard and MIT have been working with colleagues in Europe on a method of boosting wireless network performance. The result is a network roughly 10 times faster than a typical modern network, without adding any transmission power, base stations or wireless spectrum.

Known as coded TCP, the protocol essentially makes packet loss a thing of the past. As you may already know, packet loss is hardly a concern over a wired network but once you cut the cord and transmit over the airwaves, it becomes a serious problem.

Whenever a packet is lost in transmission, the receiver has to contact the host to let it know the packet went missing. The receiver can’t do anything until the host resends the missing packet, ultimately creating longer ping times and a lower bandwidth connection.

With coded TCP, packet loss doesn’t affect transmission rates. The receiver doesn’t have to wait for the sender to resubmit the lost packet. Exactly how this works remains under wraps, as the team has already licensed the solution to multiple companies, but the following explanation should give you a general idea of what’s going on.

[Image: countering increasing Wi-Fi packet loss with algebra]

Building on the description of packet loss above, a standard TCP link sends a constant stream of packets to the destination. Each packet contains a header with the destination IP address embedded. When the router receives a packet, it checks the IP address and forwards it to the correct location. Once at the destination, the packets are reassembled to rebuild the original file. As you can see, if a single packet is missing, it disrupts the entire process.

Coded TCP bundles packets together and transmits them as an algebraic equation. In the event that a portion of a packet is lost, the receiver can simply “do the math” to determine what is missing and rebuild it, thus eliminating the need to wait for the sender to resend the missing bits.
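The actual coding scheme is licensed and not public, but the idea can be sketched with a deliberately simplified toy example: instead of the random linear combinations over a finite field that network coding typically uses, assume the sender transmits one extra XOR "parity" packet. If any single data packet is lost, the receiver can reconstruct it from the packets that did arrive, with no retransmission:

```python
from functools import reduce

def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Three equal-length data packets (toy example; real coded TCP is
# believed to use random linear combinations, not a single XOR).
packets = [b"ALPH", b"BETA", b"GAMM"]

# Sender transmits the data packets plus one coded (parity) packet.
parity = reduce(xor_bytes, packets)

# Suppose packet 1 is lost in transit; the receiver keeps what arrived.
received = [packets[0], None, packets[2]]

# The receiver "does the math": XOR the surviving packets into the
# parity packet to reconstruct the missing one.
recovered = reduce(xor_bytes, [p for p in received if p is not None], parity)
print(recovered)  # b'BETA'
```

A single XOR can only repair one loss per group; richer algebraic codes can recover several missing packets from the same batch, at the cost of extra coded packets on the wire.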

MIT tested the solution on its campus and on a high-speed train, with dramatic results. Campus Wi-Fi connection speed increased from 1Mbps to 16Mbps with a typical two percent packet loss. On the train, with five percent loss, speeds went from 0.5Mbps to 13.5Mbps.
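Those baseline numbers are less surprising than they look: standard TCP throughput collapses as loss rises because every loss triggers congestion backoff. A rough illustration, using the well-known Mathis approximation (throughput ≈ MSS/RTT · √(3/2)/√p) with assumed, illustrative link parameters (1460-byte segments, 50 ms round-trip time):

```python
import math

def mathis_throughput_mbps(mss_bytes, rtt_s, loss_rate):
    """Approximate steady-state TCP throughput (Mathis et al. model):
    throughput ≈ (MSS / RTT) * (sqrt(3/2) / sqrt(p))."""
    c = math.sqrt(3.0 / 2.0)
    bps = (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))
    return bps / 1e6

# Assumed parameters: 1460-byte MSS, 50 ms RTT
print(mathis_throughput_mbps(1460, 0.05, 0.02))    # ~2.0 Mbps at 2% loss
print(mathis_throughput_mbps(1460, 0.05, 0.0001))  # ~28.6 Mbps at 0.01% loss
```

The model is only a back-of-the-envelope estimate, but it shows why shaving loss from percents down to near zero buys an order-of-magnitude improvement.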

With any luck, we’ll see this implemented in the real world sooner rather than later.


 
Wow, different worlds. South African ISPs manage to make packet loss and high latency a way of life on wired networks. It's very rare to see a wired connection with under 3% loss (in data centres, never mind home connections).
 
Very interesting tech, but the description "eliminates packet loss" is very misleading.
Sure it does, but it adds overhead via parity data instead: more data will need to be transferred, and the load on the receiving client will be higher as well (proportional to the packet loss).

It's like comparing RAID levels that have parity data vs those that do not.
RAID-0 gives 1TB usable space with 2x 500GB harddrives (But if one fails all data is lost)
RAID-5 gives 1TB usable space with 3x 500GB harddrives (But allows one drive to fail)

The RAID-5 example has an overhead of 33% and also needs a lot of CPU cycles to calculate the parity data when reading or writing. (Parity calculations can be offloaded to a dedicated processor as well, but then it costs a lot of money compared with a software-based solution.)
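The capacity arithmetic behind that analogy is easy to check. A quick sketch, using the drive counts from the comparison above (a simplified model that ignores controller and filesystem overhead):

```python
def raid_usable_tb(drives, drive_tb, parity_drives):
    """Usable capacity (TB) and parity overhead fraction for a striped array."""
    usable = (drives - parity_drives) * drive_tb
    overhead = parity_drives / drives
    return usable, overhead

# RAID-0: two 0.5 TB drives, no parity -> 1.0 TB usable, 0% overhead
print(raid_usable_tb(2, 0.5, 0))

# RAID-5: three 0.5 TB drives, one drive's worth of parity
# -> 1.0 TB usable, ~33% overhead
print(raid_usable_tb(3, 0.5, 1))
```

Same usable capacity either way; the RAID-5 array pays a third of its raw space (and the parity computation) for the ability to survive a failure, much as coded TCP pays bandwidth and CPU for loss recovery.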
 
I was thinking of emulation of hardware ECC, but that would certainly be CPU intensive to implement.
 
Very interesting tech, but the description "eliminates packet loss" is very misleading.
Sure it does, but it adds overhead via parity data instead: more data will need to be transferred, and the load on the receiving client will be higher as well (proportional to the packet loss).
I wonder how significant the extra load can be, given the remarkable improvement in data throughput - campus wifi data rate up by a factor of 16, high-speed train data rate by 27 times. The advantages have got to outweigh the additional overhead on any but the most overloaded server, I'd think.
 
@TJGeezer
The overhead is probably not that bad; I just wanted to highlight that there is no way to have your cake and eat it too :)
Software RAID-5 can be done by modern computers, and they can handle throughputs at least a hundred times faster than 54Mbps 802.11g Wi-Fi ;)
But as I said, it would also be possible to implement this in hardware, and then there won't be any CPU overhead, only cost overhead when purchasing your next Wi-Fi card :)
 