Backblaze data shows SSDs have lower failure rates than HDDs

Daniel Sims

Posts: 1,365   +43
Staff
The big picture: A key selling point of solid state drives is that they are less failure-prone than HDDs. Data from recent years had begun to cast doubt on that assumption, but at least one new long-term analysis suggests the claim may hold up as time marches on.

Backblaze's 2022 mid-year review on storage failure rates shows that SSDs may indeed be more reliable than HDDs over time. The new report paints a better picture for SSDs than the same company's analysis from last year.

As a cloud storage and backup provider, Backblaze relies heavily on both SSDs and HDDs, and it has made a custom of recording and reporting on the drives' failure rates, which are always interesting to follow. The periodic findings put solid-state technology's claims of superior durability over traditional disk drives to the test.

Last year's report showed similar failure rates for both types of storage, adjusted for age. Backblaze's HDDs and SSDs all saw failure rates increase by around one percent after a year of operation before leveling off in the subsequent two years. That's where the data for SSDs ended because Backblaze did not incorporate them into its analyses until 2018, while failure rate data for HDDs extends back to 2014.

The last data point for SSDs was interesting because year five is where HDD failure rates take a steep climb, reaching over six percent by year seven. Another year of findings would reveal whether SSDs would see the same increase in failures.

But so far, Q1 and Q2 2022 data shows SSDs holding strong during their fifth year of service, opening a significant gap from HDDs. More data in the coming years will give us a clearer picture of SSD reliability advantages over hard disks, but this new report certainly suggests solid-state tech is more durable in the long run.
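Backblaze expresses these figures as an annualized failure rate (AFR), computed from failures per drive-day of service. A minimal sketch of that calculation in Python, using illustrative numbers rather than Backblaze's actual counts:

```python
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """Backblaze-style AFR: failures per drive-year of service, as a percentage."""
    drive_years = drive_days / 365
    return failures / drive_years * 100

# Illustrative numbers only: 1,000 drives running a full year, 12 failures.
afr = annualized_failure_rate(failures=12, drive_days=1000 * 365)
print(round(afr, 2))  # 1.2
```

Normalizing by drive-days rather than drive count matters because drives enter and leave the fleet mid-year; a raw "failures divided by drives" ratio would understate the rate for drives deployed late in the period.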

SSDs may not be uniformly better than HDDs, however. Last month, the University of Wisconsin-Madison and the University of British Columbia published a report claiming that SSDs could produce twice as many CO2 emissions as HDDs.

The findings have almost nothing to do with how consumers use their storage, as SSDs likely use less energy than HDDs. Rather, manufacturing SSDs is more energy-intensive than manufacturing hard drives. The study suggested increasing SSD longevity to solve the problem, so that manufacturers won't need to build as many drives. HDDs also certainly maintain their price-per-GB advantage over SSDs.


 
Good to know, I decided to switch only to SSD(s) 3 years ago, so it was worth investing in them.
 
While it would appear obvious that a device with no moving parts would outlast one with moving parts, the only way to firmly establish that would be to ensure both devices were of equal quality at the start of the study. I have several HDDs in a server that are just over 10 years old, but these were "high end". I would like to see an honest evaluation of all SSDs with a rating on quality, put to the test to see which would fail and which would not.....
 
Would appreciate more info on the testing and the definition of "failure". Is an HDD that works but is noisy classed as a failure?
 
The characteristics of failure with SSDs and HDDs are pretty distinct, though. As near as I can tell, when an SSD fails it fails with pretty much no warning - the difference between "working" and "not working" is basically 1 and 0. Forensic recovery from an SSD is more difficult even in comparison to the difficulties of recovering from a mechanical drive.

I've rarely seen an HDD, by contrast, that just goes from perfectly functional to complete paperweight. There is much more of a failure curve that you can observe with the device - mechanical and audio feedback and degrading performance that lets you know the drive is having issues. Likewise the process for forensic recovery, while by no means foolproof or easy, is very well understood.

There are ways to get some forewarning for both - CrystalDiskInfo or like apps come to mind - but for long-term storage I'd still prefer an HDD which is generally going to show signs of failure to me before it becomes unrecoverable, versus an SSD which is just going to be there one minute and gone the next.
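Those early-warning signs surface through a drive's SMART attributes, which tools like CrystalDiskInfo read. A hypothetical sketch of the kind of threshold check such a tool might perform; the attribute names match common SMART labels, but the watch list and thresholds here are illustrative, not any tool's actual logic:

```python
# Illustrative SMART attributes often watched on HDDs; the zero thresholds
# are arbitrary examples, not vendor-defined limits.
WATCHED = {
    "Reallocated_Sector_Ct": 0,   # sectors remapped to spares
    "Current_Pending_Sector": 0,  # sectors waiting to be remapped
    "Offline_Uncorrectable": 0,   # sectors that failed offline scans
}

def smart_warnings(attributes: dict) -> list:
    """Return the watched attributes whose raw value exceeds its threshold."""
    return [name for name, limit in WATCHED.items()
            if attributes.get(name, 0) > limit]

# Example reading from a drive that has started remapping sectors:
sample = {"Reallocated_Sector_Ct": 8, "Current_Pending_Sector": 0}
print(smart_warnings(sample))  # ['Reallocated_Sector_Ct']
```

A rising reallocated-sector count is exactly the kind of gradual degradation signal the comment describes for HDDs; an SSD controller failure, by contrast, gives no such attribute drift before the drive disappears.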
 
Good to know, I decided to switch only to SSD(s) 3 years ago, so it was worth investing in them.

Damn you only made that switch 3 years ago?

I bought my first SSD, an Intel G2 160GB drive, in winter 2009. It cost me $500.

The characteristics of failure with SSDs and HDDs are pretty distinct, though. As near as I can tell, when an SSD fails it fails with pretty much no warning - the difference between "working" and "not working" is basically 1 and 0. Forensic recovery from an SSD is more difficult even in comparison to the difficulties of recovering from a mechanical drive.

I've rarely seen an HDD, by contrast, that just goes from perfectly functional to complete paperweight. There is much more of a failure curve that you can observe with the device - mechanical and audio feedback and degrading performance that lets you know the drive is having issues. Likewise the process for forensic recovery, while by no means foolproof or easy, is very well understood.

There are ways to get some forewarning for both - CrystalDiskInfo or like apps come to mind - but for long-term storage I'd still prefer an HDD which is generally going to show signs of failure to me before it becomes unrecoverable, versus an SSD which is just going to be there one minute and gone the next.

When SSDs fail they usually go into a read-only mode. However, if the controller fails, that's when it gets more complicated.
 
Damn you only made that switch 3 years ago?

I bought my first SSD, an Intel G2 160GB drive, in winter 2009. It cost me $500.



When SSDs fail they usually go into a read-only mode. However, if the controller fails, that's when it gets more complicated.
Well, to clarify, I completely switched to SSDs 3 years ago; since then I haven't had any HDD in my comp. It was quite expensive then and worth every "penny".
Nowadays I'm thinking of waiting 1 more year before buying a PCIe 5.0 SSD, due to prices.
 
Well, to clarify, I completely switched to SSDs 3 years ago; since then I haven't had any HDD in my comp. It was quite expensive then and worth every "penny".
Nowadays I'm thinking of waiting 1 more year before buying a PCIe 5.0 SSD, due to prices.
ahh I get ya.

I've been primary drive / OS SSD since 2009 and still kept 1 HDD in the rig for storage. Since about 2019 I've gone all M.2 and SSD, and now have a NAS for the HDDs.

I won't be going PCIe 5.0 anytime soon.
 
ahh I get ya.

I've been primary drive / OS SSD since 2009 and still kept 1 HDD in the rig for storage. Since about 2019 I've gone all M.2 and SSD, and now have a NAS for the HDDs.

I won't be going PCIe 5.0 anytime soon.
Yes, best to use HDDs in a NAS, externally. I have a couple of external HDDs for data backup. When I mixed SSD with HDD in my comp I noticed I had to wait 2-5 sec for the HDD to spin up every time I wanted to access it, so I switched completely to SSDs.
 
Well, to clarify, I completely switched to SSDs 3 years ago; since then I haven't had any HDD in my comp. It was quite expensive then and worth every "penny".
Nowadays I'm thinking of waiting 1 more year before buying a PCIe 5.0 SSD, due to prices.
Do you have a motherboard with PCIe 5.0 slots?
 
Yes, best to use HDDs in a NAS, externally. I have a couple of external HDDs for data backup. When I mixed SSD with HDD in my comp I noticed I had to wait 2-5 sec for the HDD to spin up every time I wanted to access it, so I switched completely to SSDs.
Turning off the power saving features in Windows would have avoided that, so the drive isn't trying to park the heads while idle. I used to do that because the delay when trying to access data on the drive would annoy the crap out of me.
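One way to do this on recent Windows versions is the "turn off hard disk after" timer, which can be set from an elevated prompt with powercfg. A sketch (0 means never):

```shell
# Set "turn off hard disk after" to never so an idle HDD is not
# spun down, avoiding the multi-second wake-up delay.
# Run from an elevated PowerShell or Command Prompt.
powercfg /change disk-timeout-ac 0
powercfg /change disk-timeout-dc 0
```

The `-ac` setting applies while on mains power and `-dc` while on battery; laptop users may prefer to leave the battery timer alone to save power.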
 
Not yet. I'm preparing to buy a Ryzen 7950X, an AMD X670E MB and DDR5 when they are released, and only after Steve's review vets them :)
Ok, I was just wondering what the value of PCI 5.0 would be on an old motherboard.

What kind of total budget and timeframe are you expecting for your build? Will you be getting the new GPUs coming out this winter or is this not for gaming at all?
 
Ok, I was just wondering what the value of PCI 5.0 would be on an old motherboard.

What kind of total budget and timeframe are you expecting for your build? Will you be getting the new GPUs coming out this winter or is this not for gaming at all?
Productivity and gaming :)
Yes, new CPU and GPU.
I plan to build a new comp on the AM5 platform with a new Ryzen and Radeon.
I think Zen4 will be on par with its Raptor Lake counterparts, based on the many "benchmark" leaks, but I consider the AM5 platform better and supported for longer than Intel's platform (4-5 years).
And I will buy a Radeon 79xx because the Nvidia counterparts will be like 1000 Euro more. Better to use that 1000 Euro for the Ryzen, MB and DDR5 RAM.
Thus the budget will be around 3000 Euro for CPU, MB, RAM and GPU. Though I will sell my current Ryzen, MB and RAM, so I will invest approx. 2300 Euro in the new rig.
 
Last edited:
Productivity and gaming :)
Yes, new CPU and GPU.
I plan to build a new comp on the AM5 platform with a new Ryzen and Radeon.
I think Zen4 will be on par with its Raptor Lake counterparts, based on the many "benchmark" leaks, but I consider the AM5 platform better and supported for longer than Intel's platform (4-5 years).
And I will buy a Radeon 79xx because the Nvidia counterparts will be like 1000 Euro more. Better to use that 1000 Euro for the Ryzen, MB and DDR5 RAM.
Thus the budget will be around 3000 Euro for CPU, MB, RAM and GPU. Though I will sell my current Ryzen, MB and RAM, so I will invest approx. 2300 Euro in the new rig.
DDR5 and AM5 mobos will come at a premium. I saw that MSI has released pricing for their AM5 boards; they seem to be $50-100 more expensive than current AM4 or Intel counterparts. DDR5 is well above DDR4 in cost right now. You won't have a DDR4 option with AM5.

There's also power consumption to consider and from what I've seen the AM5 CPUs are power hungry. Raptor Lake is supposed to have lower TDP, but we won't know for sure until we see them in the wild. With higher TDP that could mean larger power supplies and better cooling will be required for AM5. That may negate or reduce any pricing advantages.

From what I'm seeing right now, there's not a clear winner here. It could go either way.

I assume your GPU costing is based on past pricing. I don't see where Nvidia cards are 1000 Euro (about $997 USD) over their Radeon counterparts. Current pricing puts 3090 Ti cards at just over $1,000 USD, $20 cheaper than the RX 6950 XT. The 3080 and 6900 XT are about $40 apart (Nvidia more expensive), and the 3070 Ti is about $35 more expensive than the 6800 XT.

I think in about 1-2 weeks we may see some pricing adjustments on current CPU and GPU models. It's frustrating because I'm ready to build my next rig but waiting to see how things shake out.

Now that may all change when new GPUs come out.
 
I have a massive Raid 5 array with 15TB spinners in it. I do look forward to the day they can be SSD's at a decent price though! Being a Data packrat can be expensive!
 
Until 10+TB SSDs become a financial reality for me as a consumer I will stick with spinning rust with exception to my boot drives.
Indeed, one of the shortcomings of switching only to SSDs is that I often have to manage the available free space and delete files that aren't strictly necessary.
 
Indeed, one of the shortcomings of switching only to SSDs is that I often have to manage the available free space and delete files that aren't strictly necessary.
Or purchase many more SSDs to add to the array (if you are using an array).
 
I've had at least two SSDs fail on me in the past three years, but not a single HDD.

I've had HDDs that lasted 8 years before I retired them simply b/c I needed more space.
 
IMHO it's really more of a comparison between failure rates due to manufacturing defects and failure rates due to mechanical wear. Most solid state hardware will fail in the first 90 days if it's going to fail, with a few going as long as a year or so. But any mechanical hardware will eventually wear out; it might take a really long time, but it's bound to happen eventually. So comparing SSD failure rates to HDD rates is an apples-to-oranges comparison to me.
 
IMHO it's really more of a comparison between failure rates due to manufacturing defects and failure rates due to mechanical wear. Most solid state hardware will fail in the first 90 days if it's going to fail, with a few going as long as a year or so. But any mechanical hardware will eventually wear out; it might take a really long time, but it's bound to happen eventually. So comparing SSD failure rates to HDD rates is an apples-to-oranges comparison to me.
Mechanical drives are subject to "manufacturing defects" as well, and I've had more SSDs fail due to manufacturing defects than HDDs.

These companies need to improve their manufacturing process. This rate of failure is unacceptable.
 
A Crucial MX 250GB (2017) failed on me a few weeks ago. Lasted 5 years. A couple of things may have caused the failure. A couple of months ago it fell on a tile floor; I thought I lost the computer then, but it kept on working. I've been pushing an 80-90% fill rate, which I imagine couldn't be good for it. When I first got it, my first SSD, I did a lot of OS installs over 2 computers trying to find a place for it. The first computer was an old (2013) machine, but the APU was too underpowered to even take advantage of it. It booted up fine and everything, but web browsing was no faster, denoting a processor deficit. I moved it to a (2016) computer and it was fine. 3 years of hard use. I'm replacing it with a 500GB version so I shouldn't have high utilization issues. Fewer installs. And I will be more careful. I may not install Windows this time. I may go ahead and install Linux before it's too slow to run it. Still new to Linux, and it's an HP Notebook with an AMD APU. I'm thinking Linux Mint. Any suggestions?
 
Last edited: