
The internet is evolving: HTTP will no longer use TCP

By mongeese · 18 replies
Nov 18, 2018
  1. HTTP v1, v1.1 and v2 have all used TCP because it’s been the most efficient way to add reliability, order and error-checking on top of the Internet Protocol (IP). In this case, reliability refers to the ability to detect whether any data was lost in the transfer and have it resent, order means the data is received in the same order it was sent, and error-checking means the receiver can detect corruption that occurred during transmission.

    As Ars Technica notes, UDP is substantially simpler than TCP but provides neither reliability nor order. TCP isn’t perfect either: it’s a one-size-fits-all solution for data transfer and therefore carries features HTTP doesn’t need. Google has addressed this by developing Quick UDP Internet Connections (QUIC), a transport for HTTP that keeps the simplicity of UDP but adds the few things HTTP does need, such as reliability and order.
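    To give a rough sense of what "reliability and order on top of UDP" means, here is a minimal Python sketch (not QUIC itself; real QUIC also handles retransmission, congestion control and TLS 1.3 encryption) in which every datagram carries a sequence number so the receiver can put packets back in order and spot gaps. The port and payloads are placeholders.

        import socket

        PORT = 9999  # arbitrary port for this illustration

        def send(messages, addr=("127.0.0.1", PORT)):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            for seq, payload in enumerate(messages):
                # Prefix every datagram with its sequence number.
                sock.sendto(f"{seq}:{payload}".encode(), addr)
            sock.close()

        def receive(expected, port=PORT):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.bind(("127.0.0.1", port))
            sock.settimeout(1.0)              # give up waiting after a second
            got = {}
            try:
                while len(got) < expected:
                    data, _ = sock.recvfrom(2048)
                    seq, payload = data.decode().split(":", 1)
                    got[int(seq)] = payload   # reorder by sequence number
            except socket.timeout:
                pass                          # anything still absent was lost
            sock.close()
            missing = set(range(expected)) - got.keys()
            return [got[i] for i in sorted(got)], missing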

    This should, in theory, improve both stability and speed. For example, when establishing a secure connection between client and server, TCP needs its own round trips to set up the connection, and only then can the Transport Layer Security (TLS) protocol make further round trips to establish encryption. QUIC can do both at once, reducing the total number of messages exchanged.
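    As a back-of-the-envelope illustration (the 50 ms round-trip time is an assumption, not a measurement), compare a TCP handshake plus a classic TLS 1.2 handshake against QUIC's combined transport-and-crypto setup:

        # Illustrative connection-setup latency, assuming a 50 ms round trip.
        RTT_MS = 50

        tcp_rtts   = 1  # SYN / SYN-ACK / ACK before data can flow
        tls12_rtts = 2  # full TLS 1.2 handshake on top of that
        quic_rtts  = 1  # QUIC sets up transport and crypto together

        print("TCP + TLS 1.2:", (tcp_rtts + tls12_rtts) * RTT_MS, "ms")  # 150 ms
        print("QUIC:         ", quic_rtts * RTT_MS, "ms")                # 50 ms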

    The Internet Engineering Task Force (IETF), the body responsible for internet standards, has recently approved QUIC and named the result HTTP/3. It is currently working out a standardized version of HTTP-over-QUIC, which is already supported by Google and Facebook servers.


     
  2. jobeard

    jobeard TS Ambassador Posts: 12,354   +1,386

    Hmm; in lightly loaded networks this may be fine, but when tracert or pathping shows lost packets on heavily used servers, there will be consequences from those lost packets that TCP would have ensured were resent.

    This will be interesting to watch ...
     
  3. jobeard

    jobeard TS Ambassador Posts: 12,354   +1,386

    One other thought:

    All those TLS connections are a one-time connection 'cost', and the larger the file being transferred, the less significant that cost becomes. Since many websites deliver large HTML content, it becomes almost meaningless.

    A far better optimization would be to concatenate (aka glue together at runtime) all the CSS into one package and the JS scripts into another, and thus push three files (html, css, js) rather than many individual small files -- but that requires skill on the part of the webmaster. A minimal sketch of the idea follows.
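    Something along these lines would do it as a Python build step; the css/ and js/ directory names are just placeholders for this example:

        from pathlib import Path

        def bundle(src_dir, pattern, out_file):
            # Glue every matching file into a single deliverable.
            parts = [p.read_text() for p in sorted(Path(src_dir).glob(pattern))]
            Path(out_file).write_text("\n".join(parts))

        bundle("css", "*.css", "bundle.css")  # one stylesheet instead of many
        bundle("js", "*.js", "bundle.js")     # one script instead of many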
     
  4. S1lence

    S1lence TS Rookie

    Wasn't there something recently about a botnet that exploits UDP in routers? Now UDP is going to replace TCP/IP?
     
  5. Right side bob

    Right side bob TS Booster Posts: 102   +24

    I bet it will be less secure due to its newness; it's going to be the new hacker craze. Why can't they announce this stuff privately?
     
  6. waterytowers

    waterytowers TS Booster Posts: 112   +17

    I'll read the spec before I judge. You'd have to assume it will be secured traffic, since Google has been an early adopter of dropping insecure protocols from Chrome.
     
  7. Badvok

    Badvok TS Maniac Posts: 261   +120

    Have a look at HTTP v2 multiplexing, or in other words: get up to speed on existing technologies before trying to understand new stuff.
     
    slamscaper and Plutoisaplanet like this.
  8. Badvok

    Badvok TS Maniac Posts: 261   +120

    Security through obscurity isn't.
     
  9. Badvok

    Badvok TS Maniac Posts: 261   +120

    No, QUIC over UDP/IP is going to replace HTTP over TCP/IP. It has nothing to do with router UDP exploits, it is a fully fledged protocol with security built in.
     
  10. amstech

    amstech IT Overlord Posts: 1,991   +1,174

    I don't think that applies here.
    QUIC, if I'm reading it correctly, is just cutting down on the redirections/response time; it's not making anything less reliable.
    Ever traced packets before? I used to do it in college with EtherPeek; it's crazy where some things go, even across the world and back, when the end server might only be 10 miles away.
     
  11. jobeard

    jobeard TS Ambassador Posts: 12,354   +1,386

    UDP, by its very nature, does not ensure delivery, as there is no ACK response -- the sender just spews packets at the recipient. For some data types this is just fine.

    Yes, tracert (the Windows version of the Linux traceroute) shows lots of interesting pathing.
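    A minimal Python sketch of that fire-and-forget behavior (address and payload are placeholders): sendto() returns as soon as the datagram is handed to the network stack, and no ACK ever comes back.

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(b"hello", ("192.0.2.1", 5000))  # no handshake, no ACK
        sock.close()                                # sender never learns if it arrived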
     
  12. jobeard

    jobeard TS Ambassador Posts: 12,354   +1,386

    @amstech btw, PATHPING is a very handy Windows command that will pinpoint WHERE along the path packets are getting lost.
     
    amstech likes this.
  13. slamscaper

    slamscaper TS Addict Posts: 218   +44

    It's about time someone streamlined a transport protocol for HTTP. TCP is old and, while reliable, far heavier than HTTP needs. I always wanted to see some modified form of UDP used instead. This will be like UDP with packet-loss protection and TLS. Can't wait to use it more.

    Honestly guys, if you have any knowledge of networking and how the stack works, you'd know this is a very good thing. TCP carries too much overhead for HTTP, and considering how content-rich websites have become with modern CSS and HTML5 streaming video, we need this.

    This will be good for security too. Less overhead means more CPU cycles for better encryption.
     
    Robinson Ochoa likes this.
  14. jobeard

    jobeard TS Ambassador Posts: 12,354   +1,386

    @slamscaper ::

    "Honestly guys, if you have any knowledge about HTTP and how a web server works", you would realize web response times is all about how the webmaster organizes the content for efficient delivery. Today a great many webmasters allow each resource (html, css, js, jpeg, ... ) to be independently sent to the demise of the site responsiveness. It is possible to make four replies, for for each type, and reduce response time by more than 80%. The issue is to reduce the number of connections required to deliver ALL the content.

    For example, it takes 69 requests totally 1.95mb to deliver just this page.
    (used FF Web Development tools to trace all objects)
     
    Last edited: Nov 20, 2018
  15. slamscaper

    slamscaper TS Addict Posts: 218   +44

    This is about the transport layer rather than the application layer. Web admins will see benefits whether their content is well organized or not.
     
  16. jobeard

    jobeard TS Ambassador Posts: 12,354   +1,386

    That's absolutely correct -- but the optimization there is minute compared to what the application can do simply by reducing the number of objects being delivered.
     
  17. jobeard

    jobeard TS Ambassador Posts: 12,354   +1,386

    You can do the math: 4 optimized web deliveries vs. 69 raw requests = roughly a 94% reduction, even at the transport layer :grin:
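    (Quick check of that figure, using the page's own numbers from above:)

        raw, bundled = 69, 4
        print(f"{(raw - bundled) / raw:.1%} fewer requests")  # 94.2%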
     
    Last edited: Nov 22, 2018
  18. takemaru

    takemaru TS Member

    It's a new protocol based on UDP, called QUIC -- essentially an improved UDP. It's likely it will have its own set of weaknesses.
     
  19. jobeard

    jobeard TS Ambassador Posts: 12,354   +1,386

    "Just because you can, doesn't mean you should"
    This is a proposal for the use of HTTP/3 - - the question will be the rate of adoption. UDP by it's vary nature imposes extra work for the application (in this case the browsers). Reassembly and retry for missing packets will far outshaddow the gain from the initial connection improvement.
     
    Last edited: Nov 26, 2018
