Even hard drives will get to use the super-fast NVMe 2.0 interface

Molematt

Forward-looking: It won't make the average hard drive any faster, but adding hard drive support to the NVMe specification paves the way for speedier ones coming down the pipeline, and for storage as a whole to finally move on from SATA after two decades.

The 2.0 revision of the NVMe (non-volatile memory express) standard brings new functionality and improved performance as expected, but it also adds support for the humble hard drive.

The SATA III interface currently used by all HDDs and many SSDs is showing its age more by the day. Last significantly updated in 2008, its maximum throughput of 600 MB/s has become a performance bottleneck for SSDs, while the NVMe spec has allowed them to reach their full speeds over the high-bandwidth PCIe interface.
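For a rough sense of scale (back-of-the-envelope figures, not numbers from the spec itself), here's a small Python sketch comparing SATA III's ceiling against the approximate per-direction bandwidth of the PCIe links NVMe drives typically use; the HDD figure is an assumed value for a fast modern drive:

```python
# Rough theoretical link bandwidths (per direction, before protocol overhead).
# These are approximate figures for comparison, not real-world drive speeds.
links = {
    "SATA III (6 Gb/s, 8b/10b encoding)": 600,    # MB/s
    "PCIe 3.0 x4 (typical NVMe SSD)":     3940,   # MB/s, ~985 MB/s per lane
    "PCIe 4.0 x4":                        7880,   # MB/s, ~1970 MB/s per lane
}

hdd_sequential = 280  # MB/s, assumed figure for a fast 7200 RPM drive

for name, mbps in links.items():
    print(f"{name:38s} {mbps:>5} MB/s  "
          f"(~{mbps / hdd_sequential:.1f}x a fast HDD's sequential rate)")
```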

Now, NVMe is adding support for "rotational media" (or hard drives, to you and me) as well. Current HDDs are still limited by the mechanics of the platters and actuator arms within the drive itself -- most are still far from saturating a SATA III interface the way an SSD can. Then again, some, like Seagate's new dual-actuator Mach.2, come close: up to 524 MB/s of sequential transfers is impressive for so-called "spinning rust," and even treads on the toes of budget SATA SSDs.

As HDD sizes continue to balloon in response to server and datacenter demands, dual-actuator drives like the Mach.2 could grow more common, but for consumers, the most tangible benefit is going to be the simplification of storage into a single solution. Between the 2.0 revision adding support for hard drives and its overhaul into a modular specification, the clear intention is for NVMe to become the universal interface for storage drives, perhaps freeing up room on ever-crowded consumer motherboards.

Then again, as much as the NVMe standard is preparing for "Life After SATA," it's likely to be a while until HDDs bearing the interface start shipping and selling in volume, and longer still until they start fully replacing their SATA counterparts in the consumer space.

The NVMe 2.0 revision also introduces a number of SSD-specific features aimed primarily at improving control, endurance, and overhead. Of particular note is the introduction of Zoned Namespaces (ZNS), which lets the drive and host cooperate on the physical placement of data to help increase capacity and performance. And, as expected, the revision remains backwards-compatible with earlier generations of the specification.
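For the curious, here's a minimal conceptual sketch of the zoned-namespace idea (hypothetical Python, not the actual NVMe command set): the drive is divided into zones that can only be written sequentially at a write pointer and reclaimed by resetting a whole zone, which is what lets the host take part in deciding where data physically lands:

```python
# Conceptual model of a zoned namespace (illustrative only; real ZNS drives
# expose zones through NVMe commands, not a Python class like this).

class Zone:
    def __init__(self, size_blocks: int):
        self.size = size_blocks
        self.write_pointer = 0               # next block that may be written
        self.data = [None] * size_blocks

    def append(self, block) -> int:
        """Writes are only allowed at the write pointer (sequential-only)."""
        if self.write_pointer >= self.size:
            raise IOError("zone full")
        lba = self.write_pointer
        self.data[lba] = block
        self.write_pointer += 1
        return lba                           # host learns where the block landed

    def reset(self):
        """Resetting the whole zone is the only way to reclaim space."""
        self.write_pointer = 0
        self.data = [None] * self.size

zone = Zone(size_blocks=4)
print(zone.append(b"log entry 1"))           # 0
print(zone.append(b"log entry 2"))           # 1
zone.reset()                                 # host garbage-collects the zone as a unit
```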


 
I'd be curious to know if this would offer any advantages over a typical SAS setup with an array of drives. It'd be nice not to maintain separate connectors, but I'm guessing that to move drives away from SAS there would have to be some key advantage to this? Or maybe I legit missed it in the article, idk.
 
That, to me, is akin to "processors from the 90s will be able to use DDR5 RAM". Sure, great, but to what end?!
 
What's the point? HDDs can't max out SATA speeds, so why waste NVMe ports and PCIe lanes on them? Those that come close use clever tricks and can only do it for short bursts.

If anything this just adds unneeded complexity to the NVMe spec. "Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away."
 
If anything this just adds unneeded complexity to the NVMe spec. "Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away."
My biggest thing isn't so much what you're saying, although that's a very good point. I feel that with the way desktop CPUs limit PCIe lanes, HDDs are just a waste of PCIe, especially when they have no need for it.

As I was thinking about this later, it occurred to me that NVMe also supplies power. Simply from an engineering standpoint, I don't like having all this power running through the motherboard. The motherboard is becoming a single point of failure for so many things. And it's not just that, it's the amount of power. On a laptop or where space is limited the argument could be made, but as far as desktops are concerned I'd much rather a hands-off approach. Let the motherboard do its thing and let the power supply do its thing. Which is, you know, supplying power.
 
I don't like having all this power running through the motherboard. The motherboard is becoming a single point of failure for so many things. And it's not just that, it's the amount of power.

This. I already feel pretty uncomfortable having three case fans (plus the CPU fan) drawing power from the motherboard on the rig I recently built (I've tried two different PWM hubs that draw power from the PSU, but none of them worked well; what I found out is that case fan controls lack well-defined standards). I'd say this is something that should never have existed in the first place - fan connections on motherboards should exist only for monitoring and controlling fan speeds, without any power draw, which should always be provided by a separate connector to the PSU. I'd say even CPU fans should have always worked like that by standard.
 
This is not for any of you self-centered consumers. This is for the servers everything on the internet runs on, which still use HDDs for cold data. Now they can use PCIe and NVMe to access their entire arrays of Optane drives, SSDs, and HDDs.
 
This is not for any of you self-centered consumers. This is for the servers everything on the internet runs on, which still use HDDs for cold data. Now they can use PCIe and NVMe to access their entire arrays of Optane drives, SSDs, and HDDs.
They already use the SAS interface in servers. I guess they could replace SAS with NVMe if they wanted to but that's also pointless.
 
Unless manufacturers like AMD and Intel stop product segmentation with PCIe lanes, it won't matter. But of course they won't.

I know many people are not solely on flash-powered storage, but adding ancient-history devices to NVMe is not an improvement. It's a regression. It doesn't matter for a physical spinning-rust hard drive whether it is connected directly to the CPU via NVMe, via DMI and the chipset, or via an external RAID card in a PCIe slot.

I would gladly see x2-lane NVMe adopted. With Gen 4 and soon Gen 5, wasting x4 lanes is criminal. Yes, servers and supercomputers can use that, but nobody at home can achieve and sustain those illusory benchmark numbers at QD32 or higher. x2 Gen 4 or Gen 5 is ridiculously fast. Even the old Intel 750 could record multiple 4K streams and broadcast at the same time.

The biggest problem with NVMe is that it creates a push for worse and worse NAND types, not the specification itself. In that respect I'm happy that all of the SATA SSDs I have are MLC-based and will live for years. I recently had to migrate my system drive to an old, indestructible 950 Pro because the 970 EVO's controller (not the NAND) crapped out with CRC checksum errors. It wouldn't be a big story, but I had so many very expensive licenses and activations that installing everything from scratch was never an option on my main work machine. Fortunately everything ended just fine. Screw TLC, QLC and everything that comes afterward. Period.
 
I wonder how those drives are going to be connected to those NVMe ports?
Oh, that's easy. You'll need a $50.00 adapter for each drive.

And/or, CPU and mobo manufacturers will have to put in enough PCIe lanes to replace the standard six-drive SATA array, while making provisions for attaching HDDs directly to the board in a vertical mounting arrangement.
 
The biggest problem with NVMe is that it creates a push for worse and worse NAND types, not the specification itself. In that respect I'm happy that all of the SATA SSDs I have are MLC-based and will live for years. I recently had to migrate my system drive to an old, indestructible 950 Pro because the 970 EVO's controller (not the NAND) crapped out with CRC checksum errors. It wouldn't be a big story, but I had so many very expensive licenses and activations that installing everything from scratch...
TBH, if you haven't installed all your licensed software on the original C: drive and then migrated it to the new SSD, you haven't thought the process out thoroughly enough.

Because when something screws up, all you have to do, is slap the old drive back into the machine.
 
So....does this mean an NVMe cable is coming? I fail to see how NVMe can replace SATA unless there is some cable method of connecting the drives; motherboards are not big enough to mount HDDs directly to the board....
 
Oh, that's easy. You'll need a $50.00 adapter for each drive.

And/or, CPU and mobo manufacturers will have to put in enough PCIe lanes to replace the standard six-drive SATA array, while making provisions for attaching HDDs directly to the board in a vertical mounting arrangement.
That's one gigantic and expensive mess then.
A ribbon cable.

You thought you were safe when SATA deprecated IDE, bwahahahaha!
You joke, but it looks like it will be that way!

Man, I remember how good I got at folding those ribbon cables, making them "thin" and manageable.

Good ol'days.
 
That's one gigantic and expensive mess then.

You joke, but it looks like it will be that way!

Man, I remember how good I got at folding those ribbon cables, making them "thin" and manageable.

Good ol'days.
It figures that the one thing I disliked most from 90s tech - giant ribbon cables - is going to be the one thing making a comeback.

As for NVMe, I can appreciate that there may be games coming down the pipe that do utilize the kind of bandwidth it provides, but besides that, video editing, and Chia coin plotting, it just seems excessive. Then again, I guess I do have patience born of the days when the boot process took minutes and OS installs took hours.
 
To everyone asking "why do this?":

This lets you delete the SATA controllers from the motherboards, reducing the complexity and cost of the boards, and freeing up both real estate and PCIe bandwidth for other uses.

Rust spinners aren't going anywhere anytime soon, so we might as well get everything onto the same standard. Even if that newer standard is overkill in this case.
 