For 25 years, the USB port has been a faithful old friend, connecting and powering our everyday gadgets and peripherals. All we've ever needed to do was plug them in and watch it all magically work. The sockets have changed over time, but no matter what you plug in, the host computer always seems to know what the device is.
But how exactly does that happen? How does it know when a mouse has been connected, and not a printer? What's the difference between USB 2.0 and USB 3.2 SuperSpeed?
Welcome to our explainer on the inner workings of USB, and a look at how it's managed to survive for so long when others have come and gone.
Plug and Pray
To begin our story, we need to go back to the early 1990s, just before USB appeared. This was a time when Pentium was the buzzword of choice, Windows was 3.1, and personal computers were boring beige boxes. Wireless connections and cloud services weren't yet available, so printing, copying photos, or using external storage all required the device to be physically hooked up to a computer.
Unlike PCs today, machines from 30 years ago sported a glut of wildly different sockets and communication systems. Connecting peripherals and devices to such computers was often a frustrating experience, thanks to the oddities and limitations of each interface.
Mice and keyboards almost always used the serial PS/2 port, with each one having a dedicated 6-pin socket. Printers and scanners hooked up to a parallel port, via a 25-pin connector, and everything else via the classic serial port.
What if you accidentally stuck a mouse into the socket for the keyboard? It just wouldn't work, as the PC wouldn't know that the wrong device had been plugged in. In fact, none of these interfaces could identify what the device was: essentially, you'd have to tell the computer what it was and manually install the right drivers for it.
If all went well, with a bit of luck and a quick reboot after the driver install, that was all you needed to get things working. More often than not, though, PC users were required to delve into the depths of Windows' Control Panel or the motherboard BIOS, to get it all running smoothly.
Naturally, consumers wanted something better: 'one port to rule them all,' so to speak. A socket that you could plug devices in and out of without having to restart the machine, and that would instantly recognize and configure those devices for you.
System vendors wanted something more universal as well, to replace the need for lots of different sockets, and be cheaper to produce. It would also need to have the scope to be developed and improved over the years, all while retaining backwards compatibility.
So, not asking for much then.
A Rare Moment of Unity
Occasionally, in the world of computing, the planets align and set in motion a period of harmonious productivity, to the benefit of everyone. In 1994, such an event took place when Intel, Microsoft, IBM, Compaq, DEC, and Nortel formed a consortium, agreeing that the time was right to create a new connection system that would meet everyone's desires and needs.
It was Intel who took the lead with the technical development, with Ajay Bhatt becoming the primary architect of the project -- he would go on to do the same for AGP (Accelerated Graphics Port) and PCI Express. Within the space of just two years, a full specification was published, along with the chips to control it all.
And thus was born the Universal Serial Bus -- a replacement for the serial, parallel, and PS/2 ports. It boasted a clean, simple design and offered plenty of performance for the time. The uptake of the new system was slow at the beginning though, and it wasn't until version 1.1 was released in 1998 that things really took off.
The changes in the revision were fairly minor, mostly concerning power management and device compatibility, but that wasn't what kickstarted USB adoption. Instead, it was Microsoft adding USB 1.1 support into Windows 95, via an update in the Fall of 1997.
There was also Microsoft's heavy marketing of the phrase "Plug and Play" -- a design philosophy and set of system requirements for PCs, aimed at removing the complexity of setting up computers and peripherals. While Plug and Play wasn't the most robust of systems, USB was a perfect poster child for it.
But the biggest advert for USB came about through Apple's decision to wholeheartedly jump on board, with the release of a product that would shake up the whole PC industry.
Launched in August 1998, the original iMac was bright and bold, and one of the first so-called 'legacy-free' PCs. This term was used to indicate that the machine eschewed all of the old ports and devices: everything in it would be the latest hardware. Although it wasn't a hit with the critics to begin with, it went on to sell in huge numbers -- its popularity put USB well and truly on the map, although it would be quite a few years before Windows-based computers were sold without any concessions to the ports of the past.
The USB specification went on to have several revisions, with the major ones being 2.0 in 2000, 3.0 in 2008, and the very latest (USB4) released last year. But we'll come back to that later on. For now, let's take a look at how the Universal Serial Bus actually works and what makes it so much better than the systems it replaced.
It's Only Simple on the Outside
Let's start by having a look at the overall layout for the connections in a typical PC.
The image below shows how various devices in an Intel X299 Skylake-X system communicate with each other:
You can see the USB sockets in the lower left section of the diagram, and they're connected directly to what Intel calls a PCH: the Platform Controller Hub. In the days when USB first appeared, this chip was typically called the Southbridge, and it managed the flow of instructions and data to components such as hard drives, network adapters, audio chips, and so on.
The PCH still performs the same role, although now it has more things to take care of. As a quick aside, AMD Ryzen CPUs actually handle these tasks directly: they don't need a PCH/Southbridge, although most Zen motherboards come with an extra controller, to offer more ports and sockets.
Deep inside the silicon guts of the X299 chip is a section called the USB host and it contains two key elements: a USB controller and a root hub. The former is a small processor that issues all of the instructions, manages power delivery, and so on. Like all such integrated circuits, it needs drivers to function, but these are nearly always built into the operating system.
The root hub is the primary stage for connecting USB devices to the computer, but not every system is set up this way. Sometimes devices are attached to other hubs, which in turn daisy-chain their way back up to the USB host (the green box at the top of the image).
The latest specification allows chains of up to 5 hubs, and while this might not sound like much, the same standards also state that a single USB controller must support up to 127 devices. Need more? Then just add in another controller -- something which is actually a default requirement in the USB 3.0 standard.
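That 127-device ceiling isn't arbitrary: it falls out of the protocol's 7-bit device addressing, with address 0 reserved for freshly attached devices that haven't yet been configured. A quick sanity check in Python (the function name is just for illustration):

```python
# USB gives each device on a bus a 7-bit address; address 0 is
# reserved for devices that have just been attached and not yet
# assigned an address during enumeration.
ADDRESS_BITS = 7

def max_devices_per_controller() -> int:
    """Usable addresses: 2^7 minus the reserved default address 0."""
    return (1 << ADDRESS_BITS) - 1

print(max_devices_per_controller())  # 127
```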
Hubs and devices talk to each other through a set of logic pipes, with each attached peripheral having a maximum of 32 communication channels (16 upstream, 16 downstream). Most just use a handful, though, and they're enabled as and when they're required.
Pipes can be simply classified by what they're doing: sending/receiving instructions or transmitting data. Data pipes only send in one direction, whereas instruction pipes are always two-way.
A USB scanner, for example, would only be sending data to a hub, whereas a printer would only ever receive it. Hard drives, webcams, and other multi-function devices do both, and so will have more active pipelines working away.
So how is all of this information transmitted?
In the case of USB 1.0 through 2.0, it's done using just 2 wires, which is notably fewer than the likes of the old parallel port.
Connectors of this specification contain 4 pins: one for 5 volt power, two for data, and a ground. The 5 V pin supplies all of the current needed to operate the electronics in the connector and the device itself, up to the following limits:
- USB 2.0 = 2.5 W
- USB 3.0/3.1 = 4.5 W
- USB 3.2/4 = 7.5 W
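Because these base power levels are all delivered over the 5 V rail, each wattage limit maps directly onto a current budget via I = P / V. A small sketch of that arithmetic (the dictionary is just for illustration):

```python
BUS_VOLTAGE = 5.0  # volts on the VBUS pin

# Base power limits from the list above, in watts
POWER_LIMITS_W = {
    "USB 2.0": 2.5,
    "USB 3.0/3.1": 4.5,
    "USB 3.2/4": 7.5,
}

# Convert each power limit into the current the port can supply
for version, watts in POWER_LIMITS_W.items():
    amps = watts / BUS_VOLTAGE
    print(f"{version}: {watts} W -> {amps:.1f} A")
```

Running this shows the familiar 0.5 A, 0.9 A, and 1.5 A current limits quoted for each generation.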
These limits can be bypassed with USB 2.0 or higher, via the Battery Charging or Power Delivery specifications. In the simplest of these charging modes, no data is transferred, but significantly more power can be supplied -- something that the old ports could never do.
The data lines work as a differential pair -- the pattern of voltages across them provides the host controller with the flow of bits. When a device is plugged into a USB socket, the controller picks up a change in voltage across one of the data pins and this starts a process called device enumeration. This begins by resetting the peripheral, to prevent it from being in an incorrect state, then all of the relevant information (type of device and maximum data speed, for example) is read by the controller.
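That enumeration flow can be sketched as a simple walk-through. The helper function and dictionary-based descriptor below are hypothetical, though the field names (idVendor, idProduct, bMaxPacketSize0) match those used in the real USB device descriptor:

```python
from dataclasses import dataclass

@dataclass
class Device:
    vendor_id: int
    product_id: int
    max_packet_size: int

def enumerate_device(raw_descriptor: dict) -> Device:
    """Hypothetical sketch of the host's enumeration steps."""
    # 1. Reset the port so the device starts from a known state
    # 2. Read the device descriptor to learn its capabilities
    # 3. Assign a unique bus address, then read the full configuration
    return Device(
        vendor_id=raw_descriptor["idVendor"],
        product_id=raw_descriptor["idProduct"],
        max_packet_size=raw_descriptor["bMaxPacketSize0"],
    )

# Example descriptor values (made up for illustration)
dev = enumerate_device(
    {"idVendor": 0x8086, "idProduct": 0x1234, "bMaxPacketSize0": 64}
)
print(hex(dev.vendor_id))  # 0x8086
```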
USB devices fall into one of many categories, and each one has a set code -- Bluetooth adapters, for example, fall into the Wireless Adapter category, whereas a steering wheel with force feedback is a Physical Interface Device.
One very important group is the Mass Storage class. Initially set up for external hard drives and the likes of CD burners, it has been expanded over the years to include flash memory sticks, digital cameras, and smartphones -- the latter has seen a huge growth in storage capacity and typically use a USB connection to transfer files to a computer.
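Each class is identified by a single-byte code stored in the device's descriptors. A few of the well-known values can be looked up like so -- the helper is illustrative, but the codes themselves come from the USB-IF's defined class code list:

```python
# A few well-known USB class codes (bDeviceClass / bInterfaceClass values)
USB_CLASSES = {
    0x03: "Human Interface Device (keyboards, mice)",
    0x05: "Physical Interface Device (force feedback)",
    0x07: "Printer",
    0x08: "Mass Storage",
    0x09: "Hub",
    0x0E: "Video (webcams)",
    0xE0: "Wireless Controller (Bluetooth adapters)",
}

def class_name(code: int) -> str:
    """Return a human-readable name for a class code."""
    return USB_CLASSES.get(code, "Unknown/vendor-specific")

print(class_name(0x08))  # Mass Storage
```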
Only one device can be managed at a time (hence the 'serial' in the name), but the controllers can switch between them very quickly, giving the impression that they're all being handled simultaneously. And while the bus is not as fast as the SATA interface, for example, computers can boot from USB drives, as well as run portable applications straight off the device, without ever needing to install them.
And speaking of speed, let's dig into that aspect of the communication system.
Ever Evolving, Ever More Confusing
In the early drafts of the USB 1.0 specification, the data lines in the interface were designed to operate at just one speed: 5 MHz. Since the lines work as a pair, the bus itself is 1 bit wide, giving a maximum bandwidth of 5 Mbits per second (or 625 kB/s).
This was a vast improvement on the venerable serial port, but less than what could be achieved with the parallel port, when configured in ECP mode (20 Mbits/s). However, at the time, this speed would have excluded a lot of very simple devices, such as mice and keyboards, so the spec was expanded to work at two clock rates, giving data rates of 1.5 Mbits/s or 12 Mbits/s. Sparing no measure of artistic licence, the designers labelled these as Low Speed and Full Speed.
When USB 2.0 was finalized in 2000, the bus offered a much needed higher clock rate, giving a peak of 480 Mbits per second of bandwidth -- and what's faster than 'full speed'? High Speed, of course. This naming confusion reached its zenith when version 3.0 appeared eight years later.
The two data lines of old had reached their maximum capability, and the only way to continue to improve the bandwidth was to add more pins. The original USB design had such changes in mind, which is why the socket is relatively roomy and free of clutter.
These extra pins allowed data to flow both ways at the same time (i.e. duplex mode) and gave a theoretical peak bandwidth of 5 Gbits per second -- over 400 times more than USB 1.1's Full Speed. And since these lanes sat in the space above the old ones, USB 3.0 retained full backwards compatibility.
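That raw line rate isn't quite what you get in practice, either: USB 3.0's 5 Gbit/s link uses 8b/10b encoding, so only 8 of every 10 transmitted bits are payload. A rough sketch of the resulting ceiling, before any protocol overhead:

```python
def effective_throughput_mb_s(line_rate_gbit: float,
                              data_bits: int,
                              total_bits: int) -> float:
    """Usable bytes per second after line encoding, in MB/s."""
    usable_bits_per_s = line_rate_gbit * 1e9 * data_bits / total_bits
    return usable_bits_per_s / 8 / 1e6  # bits -> bytes -> MB

# USB 3.0 'SuperSpeed': 5 Gbit/s line rate with 8b/10b encoding
print(f"{effective_throughput_mb_s(5.0, 8, 10):.0f} MB/s")  # 500 MB/s
```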
Then things started to get rather silly...
Version 3.1 rolled out in 2013, boasting faster data lanes (10 Gbits/s), but for some reason, this revision was labelled USB 3.1 Gen 2. Why 2nd generation? Because 3.0 was renamed to 3.1 Gen 1.
When the USB 3.2 specification arrived 5 years later, the organization that sets and agrees on USB standards decided that 3.2's even greater capability (up to 20 Gbits/s) required another renaming:
- USB 3.1 Gen 1 --> USB 3.2 Gen 1x1
- USB 3.1 Gen 2 --> USB 3.2 Gen 2x1
And the new system had two versions on top of all this: USB 3.2 Gen 1x2 and Gen 2x2, where two sets of data lines are used in parallel. With so many different specifications and speeds available, you'd think that there would be a fixed standard to help identify things. But you'd think wrong -- take a look at this backplate on a Gigabyte motherboard:
There's a total of 10 USB ports, covering two different versions of the 3.2 specification and two types of connectors (more about this shortly). Neither the color coding nor Gigabyte's own website tells you exactly which revision it is -- they're all marked as being USB 3.2, but why are some blue and others red?
There are official logos that manufacturers can use to indicate which version it is, but since their use isn't enforced in any way, they rarely get used. And another renaming exercise last year (where manufacturers were recommended to use SuperSpeed USB 5 Gbps, SuperSpeed USB 10 Gbps, and so on) only highlighted just how confusing USB had become.
When USB4 (that's not a typo, it's honestly not USB 4.0) was launched in 2019, there was hope that matters would be made a lot clearer. Sadly the lack of clarity about speed ratings and labels continued. If anything, it actually got mildly more confusing, as it was quickly announced that Thunderbolt 3 would be integrated into USB4 -- effectively becoming the same thing (barring a few additional tweaks for the latter).
The first products on the market to openly support USB4 (and at the same time handling Thunderbolt 3 and USB 3.1 Gen 2) were, naturally, from Apple. Namely the first Macs to be powered by its in-house M1 SoC: the MacBook Pro 13, MacBook Air, and Mac mini. All three products sport two Type-C sockets that will automatically configure to the correct system, depending on what's attached to them.
And where Apple has led in the world of USB, others have rapidly followed.
Easy as A, B, C?
When USB was being designed, the engineers wanted to make the system as fool-proof as possible, removing the need to waste time trying to configure everything. This notion was carried through into the format for the sockets -- one shape was for the USB host and another for the device to be connected. They ultimately became known as the Type A and Type B connectors.
The idea behind this is that it would be clear to the user which end of a cable goes where. Unfortunately, the designers also wanted the system to be as cheap as possible to implement, and Type A's design makes it notoriously easy to try plugging it in the wrong way up.
Another issue with the very first generation of USB is that the Type B plug was too bulky for small devices, such as media players and mobile phones. So when version 1.1 was released in 1998, shrunken versions were introduced, known as Mini-A and Mini-B. These were rapidly adopted by phones and tablets, although they also gained a reputation for being rather flimsy.
But even these were too big, once smartphone manufacturers began their quest for ever slimmer devices. USB 2.0 resolved this by not only offering faster speeds, but also giving us the Micro-A and Micro-B connectors.
USB 2.0 also offered the Micro-AB socket, which accepts both Micro-A and Micro-B plugs. And while USB 3.0's Type A remained backwards compatible with USB 2.0, its Type B didn't -- the new plug physically couldn't fit into a 2.0 Type B socket, although older cables could still be plugged into USB 3.0 Type B connectors.
And for good measure, the same specification also had the somewhat bulky Micro-B SuperSpeed connector, defeating the whole purpose of it being 'micro.'
All of these changes came about in the hunt for ever more performance (you can clearly see the extra data pins in USB 3.0) and to appease the growing family of members in the steering group, known as the USB Implementers Forum (USB-IF).
The need for something better was obvious...
Manufacturers and consumers alike wanted a connector that was small, identical for host and device, and offered scope for ever-better performance. And so along with USB 3.1 (which was developed separately), the USB-C plug was born.
Not only did it replace the requirement for distinct A/B sockets, it can also be inserted in any orientation, and be used for connection systems other than USB (such as DisplayPort, HDMI, and Thunderbolt).
The USB-C connector has considerably more data lines than USB 3.0 Type A (sorry, USB 3.2 SuperSpeed) -- two are dedicated entirely to USB 2.0 support, and four other sets of differential pairs provide two-way communication. These changes provide up to 40 Gbits/s of bandwidth in the most current specification.
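That 40 Gbit/s headline figure comes from bonding two of those lane pairs together, each signalling at 20 Gbit/s in USB4's fastest mode -- a trivial sanity check:

```python
LANES = 2                  # USB4 bonds two high-speed lane pairs
RATE_PER_LANE_GBIT = 20.0  # per-lane signalling rate in the fastest mode

total_gbit = LANES * RATE_PER_LANE_GBIT
print(f"{total_gbit:.0f} Gbit/s")  # 40 Gbit/s
```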
With USB4, the ties to the old sockets were abandoned for good -- it's USB-C or nothing -- but it will be many more years before we say goodbye to Type A sockets on PCs and other devices.
Hello USB, My Old Friend
Next year, USB will turn 25 years old and while the latest version bears only a few similarities to the original design, its fundamental premise still applies: plug it in and the device will just work. Each specification revision has provided greater performance (version 4 is over 3,000 times faster than 1.1) and been able to supply more power to the devices (currently up to 100 watts, when used in power delivery mode).
But why or how has USB lasted so long? Is there nothing better, that could offer more bandwidth or power? The simple answer is not really, or at least, not anymore.
Ten years ago, Intel released Thunderbolt. At the time it seemed more appealing than USB 3.0, sporting more bandwidth and greater flexibility. As already mentioned, the latest version, Thunderbolt 3, now runs over the USB-C connector (dropping its original Mini DisplayPort plug) and sports the same maximum bandwidth as USB4. It offers more features, such as being able to supply more power to a device, but instead of displacing USB, it's essentially being integrated into USB4.
There was also FireWire, which for a time offered better performance than USB 2.0 and supported full-duplex data transfers. But once USB 3.0 arrived with improvements across the board, FireWire no longer held any clear advantage, nor was it widely adopted enough to displace USB.
Part of the appeal of USB for system vendors and manufacturers lies in its relatively open specification. Unlike Thunderbolt and FireWire, it's possible to make a 'USB 3.2' cable and sell it as such, but not fully comply with all the details in the specs. For example, it might not support the full bandwidth or supply the maximum power available.
While this makes such products cheap to make and buy, it does mean that it's a potential minefield when it comes to getting the cable you actually need. The problem is further compounded by the fact that USB offers multiple transfer speeds and power modes -- something that's going to be the case for the foreseeable future.
But for all its flaws -- loose standards, confusing naming schemes, and multiple socket types -- USB remains as ubiquitous as ever. Just about every computer peripheral uses it to hook up to the host machine -- even if it's wireless, it will almost certainly use a USB dongle.
One day, USB may ultimately go the way of its predecessors, but its affordable, simple appeal and continued evolution will keep it going for now. A faithful old friend, indeed.