The internet is evolving: HTTP will no longer use TCP

mongeese

Why it matters: HyperText Transfer Protocol (HTTP) is the system that web browsers use to talk to servers, and it’s built on top of Transmission Control Protocol (TCP). TCP has many features that make it attractive for HTTP, but it also carries a lot of overhead that HTTP doesn’t need. By ditching it for the simpler User Datagram Protocol (UDP) and then adding back what HTTP needs, transmission can be smoother and faster.

HTTP v1, v1.1 and v2 have all used TCP because it has been the most efficient way to incorporate reliability, ordering and error-checking into Internet Protocol (IP). In this case, reliability refers to the receiver’s ability to detect whether any data was lost in the transfer, ordering refers to whether the data is received in the order it was sent, and error-checking means corruption that occurred during transmission can be detected.

As Ars Technica notes, UDP is substantially simpler than TCP but doesn’t provide reliability or ordering. TCP isn’t perfect either: it is a one-size-fits-all solution for data transfer and therefore includes things HTTP doesn’t need. Google has managed to remedy this by developing Quick UDP Internet Connections (QUIC), a protocol base for HTTP that maintains the simplicity of UDP but adds back the few things HTTP needs, such as reliability and ordering.

This should, theoretically, improve stability and speed. For example, when establishing a secure connection between client and server, TCP has to complete multiple round trips just to set up the connection, and only then can the Transport Layer Security (TLS) protocol make its own round trips to establish encryption. QUIC can do both of these simultaneously, reducing the total number of messages.
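As a rough illustration, here is a minimal Python sketch of that sequential TCP-then-TLS setup (example.com and the request shown are stand-ins for illustration, not anything from the spec); QUIC’s advantage is that it folds the transport and encryption handshakes into a single exchange, which ordinary sockets cannot express:

```python
import socket
import ssl

# Stand-in host used purely for illustration.
HOST, PORT = "example.com", 443

# Phase 1: the TCP three-way handshake (SYN, SYN-ACK, ACK) must
# complete before any application data can flow.
tcp_sock = socket.create_connection((HOST, PORT))

# Phase 2: only now can TLS run its own handshake round trips to
# negotiate keys and set up the encrypted channel.
ctx = ssl.create_default_context()
tls_sock = ctx.wrap_socket(tcp_sock, server_hostname=HOST)

# Only after both handshakes finish can the HTTP request go out.
tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
print(tls_sock.recv(1024).decode(errors="replace"))
tls_sock.close()
```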

The Internet Engineering Task Force (IETF), which is responsible for standardizing internet protocols, recently agreed to rename HTTP-over-QUIC to HTTP/3. A standardized version is currently being finalized, and HTTP-over-QUIC is already supported by Google and Facebook servers.


 
Hmm; in lightly loaded networks this may be fine, but when tracert or pathping shows lost packets on heavily used servers, there will be consequences from those lost packets that TCP would have ensured were resent.

This will be interesting to watch ...
 
One other thought:

All those connections via TLS are a one-time connection 'cost', and the larger the file being transferred, the less significant that cost becomes. As many websites deliver large HTML content, it becomes almost meaningless.

A far better optimization would be to concatenate (aka glue together at runtime) all the CSS into one package and the JS scripts into another, and thus push three files (html, css, js) rather than many individual small files -- but that requires skill on the part of the webmaster.
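For illustration, here is roughly what that concatenation step could look like in Python -- the directory and file names are made up for the example, not taken from any real site:

```python
from pathlib import Path

# Hypothetical asset directories; a real site would take these from
# its own build configuration.
CSS_DIR, JS_DIR, OUT_DIR = Path("css"), Path("js"), Path("dist")
OUT_DIR.mkdir(exist_ok=True)

def bundle(src_dir: Path, pattern: str, out_file: Path) -> None:
    """Glue every matching file into one bundle so the page needs a
    single request per asset type instead of many small ones."""
    parts = [p.read_text() for p in sorted(src_dir.glob(pattern))]
    out_file.write_text("\n".join(parts))

bundle(CSS_DIR, "*.css", OUT_DIR / "site.css")
bundle(JS_DIR, "*.js", OUT_DIR / "site.js")
```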
 
Wasn't there something recently about a botnet that exploits UDP in routers? Now UDP is going to replace TCP/IP?
 
I bet it will be less secure due to its newness; it's gonna be the new hacker craze. Why can't they announce this stuff privately?

I'll read the spec before I judge. You would have to assume it is going to be secured traffic, since Google has been an early adopter in dropping insecure protocols from Chrome.
 
One other thought:

All those connections via TLS are a one-time connection 'cost', and the larger the file being transferred, the less significant that cost becomes. As many websites deliver large HTML content, it becomes almost meaningless.

A far better optimization would be to concatenate (aka glue together at runtime) all the CSS into one package and the JS scripts into another, and thus push three files (html, css, js) rather than many individual small files -- but that requires skill on the part of the webmaster.
Have a look at HTTP v2 multiplexing, or in other words: get up to speed on existing technologies before trying to understand new stuff.
 
Wasn't there something recently about a botnet that exploits UDP in routers? Now UDP is going to replace TCP/IP?
No, HTTP over QUIC (running on UDP/IP) is going to replace HTTP over TCP/IP. It has nothing to do with router UDP exploits; QUIC is a fully fledged protocol with security built in.
 
Hmm; in lightly loaded networks this may be fine, but when tracert or pathping shows lost packets on heavily used servers, there will be consequences from those lost packets that TCP would have ensured were resent.

I don't think that applies here.
QUIC, if I am reading it correctly, is just cutting down on the redirections/response time; it's not making it any less reliable.
Ever traced packets before? I used to do it in college with Etherpeek; it's crazy where some things go, even across the world and back, when that primary end-result server might only be 10 miles away.
 
I don't think that applies here.
QUIC, if I am reading it correctly, is just cutting down on the redirections/response time; it's not making it any less reliable.
Ever traced packets before? I used to do it in college with Etherpeek; it's crazy where some things go, even across the world and back, when that primary end-result server might only be 10 miles away.
UDP, by its very nature, does not ensure delivery, as there is no ACK response -- the sender just spews packets at the recipient. For some data types this is just fine.
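A minimal Python sketch of that fire-and-forget behavior (127.0.0.1:9999 is an arbitrary example address, and nothing needs to be listening there):

```python
import socket

# UDP socket: no connection setup, no ACKs, no retransmission.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Arbitrary example destination chosen for the sketch.
DEST = ("127.0.0.1", 9999)

for i in range(5):
    # sendto() returns as soon as the datagram is handed to the OS;
    # whether it ever arrives, or arrives in order, is unknown here.
    sock.sendto(f"packet {i}".encode(), DEST)

sock.close()
```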

Yes, tracert (the Windows version of the Linux traceroute) shows lots of interesting pathing.
 
It's about time someone streamlined an IP protocol for HTTP. TCP is old and, while reliable, far too overdeveloped for HTTP. I always wanted to see some modified form of UDP instead. This will be like UDP with packet loss protection and TLS. Can't wait to use it more.

Honestly guys, if you have any knowledge about networking and how the stack works, you'd know this is a very good thing. TCP has too much overhead right now for HTTP, and considering how content-rich websites have become with modern CSS and HTML5 streaming video, we need this.

This will be good for security too. Less overhead means more CPU cycles for better encryption.
 
@slamscaper ::

"Honestly guys, if you have any knowledge about HTTP and how a web server works", you would realize web response times is all about how the webmaster organizes the content for efficient delivery. Today a great many webmasters allow each resource (html, css, js, jpeg, ... ) to be independently sent to the demise of the site responsiveness. It is possible to make four replies, for for each type, and reduce response time by more than 80%. The issue is to reduce the number of connections required to deliver ALL the content.

For example, it takes 69 requests totaling 1.95 MB to deliver just this page.
(used FF's Web Developer tools to trace all objects)
 
This is about the transport layer rather than the application layer. Web admins will see benefits whether they are well organized or not.
 
That's absolutely correct -- but the optimization there is minute compared to what the application can do by just reducing the number of objects being delivered.
 
You can do the math: 4 optimized web deliveries vs. 69 raw is a (69 - 4) / 69 = ~94% reduction in requests, even at the transport layer :grin:
 
Wasn't there something recently about a botnet that exploits UDP in routers? Now UDP is going to replace TCP/IP?
It's a new protocol based on UDP, called QUIC -- essentially an improved UDP. It's likely it will have its own set of weaknesses.
 
"Just because you can, doesn't mean you should"
This is a proposal for the use of HTTP/3 -- the question will be the rate of adoption. UDP by its very nature imposes extra work on the application (in this case the browsers). Reassembly and retries for missing packets will far overshadow the gain from the improved initial connection.
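To make that concrete, here is a toy Python sketch of what sequencing and retries look like once they move up into the application; the stop-and-wait scheme, packet format and timeout are invented for illustration and are far simpler than what QUIC actually does:

```python
import socket

def send_reliably(sock: socket.socket, dest: tuple, payloads: list,
                  timeout: float = 0.5, max_retries: int = 3) -> None:
    """Toy stop-and-wait reliability on top of UDP: number each
    datagram and resend it until the receiver echoes the number back."""
    sock.settimeout(timeout)
    for seq, payload in enumerate(payloads):
        packet = seq.to_bytes(4, "big") + payload
        for _ in range(max_retries):
            sock.sendto(packet, dest)
            try:
                ack, _ = sock.recvfrom(4)
                if int.from_bytes(ack, "big") == seq:
                    break  # acknowledged; move on to the next datagram
            except socket.timeout:
                continue  # no ACK in time; resend the same datagram
        else:
            raise RuntimeError(f"datagram {seq} was never acknowledged")
```

(The sketch assumes a cooperating receiver that echoes back each 4-byte sequence number as an acknowledgement.)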
 