Backblaze's Q3 report shows large-capacity drives are more reliable

midian182

In brief: Cloud storage provider Backblaze has released its latest quarterly hard drive stats. For Q3 2018, Western Digital's 6TB WD60EFRX had the highest annualized failure rate (AFR) at 4.46 percent, though only five of the 383 drives in service failed last quarter. But the main takeaway was that large-capacity drives are more reliable than their smaller cousins.

Backblaze, which was the pick of storage providers in our 'Back to school' feature, had 99,636 spinning hard drives during Q3, of which 1,866 were boot drives and 97,770 were data drives. The company removed the last of its 3TB Western Digital drives during the last quarter, replacing them with 12TB drives.

The 79 HGST 12TB drives it added in Q3 have a 0 percent failure rate, though that's not too surprising given their low numbers and fewer drive days. But 12TB HDDs are reliable in general: Backblaze runs 25,101 Seagate drives of this capacity and had only 187 drive failures last quarter, for an AFR of 1.29 percent.

According to Backblaze, large-capacity drives (8TB to 12TB) are more reliable than the smaller ones, boasting annualized failure rates of 1.21 percent or lower.

"The failure rates of all of the larger drives (8TB, 10TB, and 12TB) are very good: 1.21 percent AFR (Annualized Failure Rate) or less. In particular, the Seagate 10TB drives, which have been in operation for over 1 year now, are performing very nicely with a failure rate of 0.48 percent," writes the company.

Backblaze notes that its overall drive failure rate of 1.71 percent is now the lowest it has ever achieved, beating its previous low of 1.82 percent from Q2 2018.
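
For context on how these percentages are produced: Backblaze computes AFR from accumulated drive days rather than a simple head count, which is why a model with only five failures can still post the highest rate. A minimal sketch of that calculation in Python (the fleet figures below are illustrative, not taken from the report):

# Backblaze derives its annualized failure rate (AFR) from drive days:
# AFR = failures / (drive_days / 365) * 100

def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """Return the AFR as a percentage for a pool of drives."""
    drive_years = drive_days / 365
    return failures / drive_years * 100

# Illustrative numbers only: 25,000 drives running a full 91-day quarter.
drive_days = 25_000 * 91
print(f"AFR: {annualized_failure_rate(75, drive_days):.2f}%")  # ~1.20%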


 
I'm always amazed when a big company like Backblaze uses consumer drives and shares data on how they perform in such an intense environment.
I would like to see how enterprise/business drives perform over the years, given that non-enterprise drives can apparently be as good as their business counterparts.
 
I'm always amazed when a big company like Backblaze uses consumer drives and shares data on how they perform in such an intense environment.
I would like to see how enterprise/business drives perform over the years, given that non-enterprise drives can apparently be as good as their business counterparts.

When talking about SATA drives, the main difference is probably only the firmware.

That's how companies like HP get away with charging $300 for a simple 1TB Seagate drive.
 
I'm always amazed when a big company like Backblaze uses consumer drives and shares data on how they perform in such an intense environment.
I would like to see how enterprise/business drives perform over the years, given that non-enterprise drives can apparently be as good as their business counterparts.

I recall them actually covering that in one of their earlier reports, and their conclusion was 'not worth it for us'. The increased reliability couldn't justify the increased cost compared to just buying multiple consumer drives and mirroring them. For a high-reliability, high-uptime environment with limited drive space, enterprise-level drives make sense. For a massive data storage server farm (32.1 million PB, by my count), where you are being paid to maintain data for safekeeping, you might as well just keep a supply of cheap drives as hot and cold spares and drop them in as failures occur.
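
To put rough numbers on that tradeoff, here is a back-of-the-envelope sketch. Every price and failure rate below is an assumption for illustration, not a figure from Backblaze:

# Hypothetical yearly replacement cost, consumer vs. enterprise drives.
# All prices and AFRs here are assumptions, not Backblaze figures.

def expected_replacements_cost(fleet_size: int, afr: float, unit_price: float) -> float:
    """Expected failures per year times the cost of each replacement."""
    return fleet_size * afr * unit_price

FLEET = 100_000
consumer = expected_replacements_cost(FLEET, afr=0.017, unit_price=300)    # assumed 1.7% AFR, $300
enterprise = expected_replacements_cost(FLEET, afr=0.010, unit_price=550)  # assumed 1.0% AFR, $550

print(f"Up-front cost difference: ${FLEET * (550 - 300):,}")   # $25,000,000
print(f"Consumer replacements/yr:   ${consumer:,.0f}")         # $510,000
print(f"Enterprise replacements/yr: ${enterprise:,.0f}")       # $550,000

Under these assumed numbers, the enterprise premium never pays for itself, which lines up with the 'not worth it for us' conclusion recalled above.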
 
Very cool. There were some comments that brought up HDD failure rates. How do SSD failure rates compare? My dad's Mushkin 120GB SSD (about 8 years old) reported problems, but nothing was lost. They were fast for their time, but the SATA connector was prone to break on those. I had another of the same model that had a similar problem with the PCB on the drive. I was barely able to get the data off of it by luck, getting the PCB torqued just right for SATA to read the drive. :)
 
Very cool. There were some comments that brought up HDD failure rates. How do SSD failure rates compare? My dad's Mushkin 120GB SSD (about 8 years old) reported problems, but nothing was lost. They were fast for their time, but the SATA connector was prone to break on those. I had another of the same model that had a similar problem with the PCB on the drive. I was barely able to get the data off of it by luck, getting the PCB torqued just right for SATA to read the drive. :)

SSDs usually don't just fail. Most of them will die from writes. The issue you are describing sounds like it might have been a design defect specific to that drive.
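
Since wear from writes is the typical failure mode, an SSD's remaining life can be roughly estimated from its rated endurance. A small sketch, with the endurance rating and daily write volume both assumed for illustration:

# Rough SSD lifespan estimate from its rated write endurance (TBW).
# Both figures below are assumptions for illustration.

def years_until_write_exhaustion(tbw_rating: float, gb_written_per_day: float) -> float:
    """Years before cumulative writes reach the drive's TBW rating."""
    tb_per_day = gb_written_per_day / 1_000
    return tbw_rating / tb_per_day / 365

# e.g. a 120GB-class drive rated for 70 TBW, seeing 20 GB of writes a day
print(f"{years_until_write_exhaustion(70, 20):.1f} years")  # ~9.6 years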
 
Good to hear, but is nobody else concerned that even 1% of drives end up dying?
That's pretty high IMO, but then again they wouldn't be accessed anywhere near as heavily in a home environment, so it might be nothing to worry about.
 
SSDs usually don't just fail. Most of them will die from writes. The issue you are describing sounds like it might have been a design defect specific to that drive.

Yes, it was. A hardware design issue. That is why I was wondering about SSD failure rates (the chips), although it could be argued whether that would include the hardware side of them as well (like this model experienced). I believe it would; otherwise you have to include the PCB part of HDDs in failures as well. Of course, consumers who handle drives somewhat more often (and more recklessly vs. a datacenter) would be more susceptible.

I would venture to say that just about anything mechanical has a much higher failure rate than something purely electronic.
 
~1% failure rate is lower than Backblaze's historical failure rates, with some problematic Seagate 1.5TB and 3TB models heading upwards of 10% annualized failure. These numbers are pretty good and continue a trend of:

Improving dependability year over year
HGST has the lowest failure rate
WD has the highest failure rate
Seagate is somewhere in the middle and is the cheapest, which is why their drive counts are so high.

Remember the quote from Contact, which is very important when backing up: "Why build one when you can have two at twice the price?"
 
~1% failure rate is lower than Backblaze's historical failure rates, with some problematic Seagate 1.5TB and 3TB models heading upwards of 10% annualized failure. These numbers are pretty good and continue a trend of:

Improving dependability year over year
HGST has the lowest failure rate
WD has the highest failure rate
Seagate is somewhere in the middle and is the cheapest, which is why their drive counts are so high.

Remember the quote from Contact, which is very important when backing up: "Why build one when you can have two at twice the price?"

What's weird is that HGST is Western Digital.
 
All my data is on Seagates, WDs, HGSTs, and Samsungs. RAID0. Doesn't matter as it's all in triplicate. Basically the same thing as what Backblaze does, and yes, I've needed to replace some of the same failure-prone 1.5TB and 3TB Seagates that gave Backblaze trouble. It simply was not a big deal: with two backups, there's little worry of losing everything. And with no parity, there's no risk of a failed rebuild.
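
As a quick sanity check on the triple-copy approach: if copies fail independently, the odds of losing all of them in the same year shrink multiplicatively. A toy sketch using the article's 1.71 percent overall AFR as the assumed per-copy annual loss probability:

# Chance of losing every independent copy in the same year. Using the
# article's 1.71% overall AFR as the per-copy loss probability is a
# simplification; real copies can fail in correlated ways.

def p_total_loss(p_single: float, copies: int) -> float:
    """Probability that all copies are lost, assuming independence."""
    return p_single ** copies

for n in (1, 2, 3):
    print(f"{n} copies: {p_total_loss(0.0171, n):.8f}")
# 3 copies: 0.00000500, roughly five in a million per year

Real-world odds are somewhat worse than this toy model suggests, since it ignores the window where a failed copy hasn't been restored yet and any correlated failures (fire, theft, bad firmware).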
 