MIT develops automated TCP algorithms, tripling internet speeds

David Tom


Without the Transmission Control Protocol (TCP), the web would cease to function properly. Acting much like a crossing guard, TCP regulates internet traffic to ensure that online congestion is kept to a minimum. A number of different algorithms have been put to use over the years, but ultimately they all share the same limitation: they are designed by humans and base their decisions on built-in assumptions about the network.
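For context, the kind of human-designed rule the article is talking about boils down to something like the classic additive-increase/multiplicative-decrease scheme. Here is a minimal sketch in Python (the constants and the loss signal are illustrative assumptions, not any particular TCP implementation):

    # AIMD: the hard-coded rule at the heart of classic TCP congestion control.
    def aimd_update(cwnd, packet_lost, increase=1.0, decrease_factor=0.5):
        """Return the new congestion window (in segments) after one round trip."""
        if packet_lost:
            # Built-in assumption: loss means congestion, so back off hard.
            return max(1.0, cwnd * decrease_factor)
        # Otherwise probe for more bandwidth, one segment per round trip.
        return cwnd + increase

    # Example: a flow grows steadily until a single loss halves its window.
    cwnd = 1.0
    for rtt in range(10):
        cwnd = aimd_update(cwnd, packet_lost=(rtt == 6))
        print(f"RTT {rtt}: cwnd = {cwnd}")

That hard-coded "loss means congestion" judgment is exactly the kind of built-in assumption the MIT work tries to replace.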

According to PopSci, researchers at MIT have developed Remy, a computer program designed to produce congestion-control algorithms that adapt to the current usage scenario. The end result is an internet service two to three times faster than what is available today. To churn out such impressive speeds, the system relies on several user-fed inputs: for example, Remy needs estimates of the required bandwidth, the number of simultaneous users, and how traffic-intensive the programs will be.

For the sake of time, Remy focuses on the most important network tweaks. Even with the algorithm-generation process simplified and certain characteristics prioritized, the selection program can take anywhere from four to twelve hours to run, and it spits out more than 150 if-x-then-y rules for operating.
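To make "if-x-then-y rules" concrete, a machine-generated controller can be pictured as a lookup table from observed network state to window actions. The sketch below is a toy Python illustration; the thresholds, state variables, and actions are invented for this example, and Remy's actual rules are learned over a much richer state space:

    # Toy rule table: each rule maps a condition on observed network state
    # to a congestion-window action. Remy emits 150+ such rules.
    rules = [
        (lambda s: s["rtt_ratio"] > 2.0, lambda cwnd: cwnd * 0.5),  # queues building: back off
        (lambda s: s["ack_rate"] > 0.9,  lambda cwnd: cwnd + 2),    # link looks idle: probe fast
        (lambda s: True,                 lambda cwnd: cwnd + 0.5),  # default: probe gently
    ]

    def apply_rules(state, cwnd):
        """Fire the first rule whose condition matches the observed state."""
        for condition, action in rules:
            if condition(state):
                return action(cwnd)
        return cwnd

    # Example: recent RTTs look inflated, so the first rule halves the window.
    print(apply_rules({"rtt_ratio": 2.4, "ack_rate": 0.7}, cwnd=20))  # -> 10.0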

Hari Balakrishnan, an author of the MIT paper, explained the motivation behind the computer-driven system: “When you have even a handful of connections, or more, and a slightly more complicated network, where the workload is not a constant—a single file being sent, or 10 files being sent—that’s very hard for human beings to reason about. And computers seem to be a lot better about navigating that search space.”

So how effective is this MIT brainchild? Early studies show that the automated crossing guard increases cell-network throughput by 30 percent and cuts delays by 25 to 40 percent. Although these figures are encouraging, we must remember that these tests have only been conducted in a lab setting.

Until Remy demonstrates success on the real, open internet, the concept is little more than an intriguing hypothesis. Only time will tell its true importance to the scientific community, as well as to the general public.


 
Now that is what I am talking about! It is crazy how one algorithm can change something so drastically.
 
Yep, and when you're getting shot at in a game, Remy says, "Wait, I've got to make the connection more efficient for a user who is downloading something on The Pirate Bay, and he told me to make you wait."
 
Amstech has a point: with Xfinity it's not about your connection speed but your time to download, when they throttle you.
 
TCP parameters have always been set to defaults, not to the max. Out of the box, all Windows versions have been this way.

Android is also based on older Linux TCP defaults. So what are you supposed to do? Well, I've fixed the Android side in a ROM called PowerCode that I created, and on the Windows side we have JASPER Script for Windows 7 and 8.

Right now you can tune your internet connection for the download speed you're getting by using a free app called TCP Optimizer. JetClean can also trigger the extra buffering needed.

If these guys from MIT have overhauled the TCP standards, that's great, but it has to be proven to work in all areas of networking.

Right now, the way things are, 10Mb was always really 6Mb, just as 100Mb is really 60Mb and 1000Mb is 600 to 900Mb. 10Gb roughly works out to 7Gb.
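Part of that gap is plain protocol overhead, which you can estimate yourself. A rough back-of-the-envelope in Python (header sizes are typical minimums; real links lose more to retransmits, ACK traffic, and WiFi contention, which is where the 60-90% figures above come from):

    # Rough TCP goodput on Ethernet: headers eat a fixed share of every frame.
    MTU = 1500                 # bytes of IP payload per frame (typical)
    IP_HDR, TCP_HDR = 20, 20   # minimum header sizes, no options
    ETH_OVERHEAD = 38          # Ethernet header/CRC + preamble + inter-frame gap

    payload = MTU - IP_HDR - TCP_HDR
    efficiency = payload / (MTU + ETH_OVERHEAD)   # about 0.95

    for line_rate in (10, 100, 1000):
        print(f"{line_rate} Mb/s line -> ~{line_rate * efficiency:.0f} Mb/s of TCP payload")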

WiFi is a real mess across 802.11b, 802.11g, and 802.11n on 2.4GHz, and 5.0GHz is another area entirely. Now the draft release of 802.11ac, which is called VHT (very high throughput), opens the door for both Windows and Android in TCP/IP. Right now 802.11n, called HT (high throughput), is what's currently available on all smart TVs, network media players, desktops (with WiFi), laptops, netbooks, and tablets.

I would welcome some changes in this area. Windows' TCPIP.sys limit on session connections, set to 10, makes matters worse, but there are ways around that to match the max session connections (MSC) your current router can handle. Most routers can do 200 and above. You would set your TCPIP.sys limit using special software (the software backs up the current TCPIP.sys) before changing the setting to 200 or higher. Note that the other PCs on the network will suffer when one PC has its TCPIP.sys set that much higher.

Most users can get away with a setting of 50.
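If you're curious what tools like TCP Optimizer are actually touching, you can peek at the registry yourself. Here's a read-only Python sketch for Windows (the value names are the classic Tcpip parameters; on Vista and later many of them are simply absent because receive-window autotuning took over):

    # Read-only look at classic TCP tuning values; TCP Optimizer and
    # similar tools write to these same registry locations.
    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        for name in ("TcpWindowSize", "Tcp1323Opts", "DefaultTTL"):
            try:
                value, _ = winreg.QueryValueEx(key, name)
                print(f"{name} = {value}")
            except FileNotFoundError:
                # Unset means Windows falls back to its built-in default.
                print(f"{name} not set (Windows default in effect)")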

Also, keeping your system clean (browser cache, registry, and other areas) will help keep your system and network speedy.

Been tweaking networks since back in the old dial-up days, right up through what we have today.
 
As long as TimeWarner has complete control of our county,
TW is not within the scope of the original article :sigh:
Taking your argument (ISPs impact user-perceived bandwidth) at face value, ALL ISPs find it economically in their interest to control, throttle, and use QoS to manage the user.

However, the article is still relevant to the backbone and all intermediate layers above the ISP: how to get more from a fixed resource.

If you have an issue with TW, call customer service :)
 
ISPs do control throttling, but they're trying to increase what we get down. With Comcast, if you have a 1 to 20Mb download tier, there's a 250GB cap; 50Mb down has a 350GB cap; and 105 to 210Mb isn't supposed to be capped. But I haven't run into any issues.

Looking at my account, it said enforcement of the 250GB data consumption threshold is currently suspended.
 