Tech companies destroy millions of reusable storage devices every year

YOU wouldn't. Your opinion does not reality make.

Your understanding of IT infrastructure needs improvement. Companies as large as Google and Amazon take recouping costs very seriously, and if they see an opportunity to recover some of that cost through reuse and recycling, they do it. The cost of wiping and repurposing/reselling drives is small in comparison to the cost of complete disposal.

That is a blatant lie. No you haven't, I've never said that. EVER. I actively promote and encourage people to buy GPUs that were used for mining. Mostly because there is little to nothing to worry about and cards used for mining are a solid value.

So do you have anything at all to share that is not based in the delusional and fanciful?


Irony (yeah, I can do that too)
If Google and Amazon take recouping costs so seriously, then why do they destroy drives instead of wiping and reselling them? I think I'm far from delusional. It isn't complete disposal either: recycling the metals is very easy, with near-100% recyclability, and it often earns carbon credits that the company can then use or sell.
 
Human error will always be present, even if the probability is low, and so will the temptation to cut corners when wiping that many drives: enterprise drives now come at 18-20 TB each, with more soon (some enterprise SSDs are already 30 TB), and secure erasing requires multiple passes, each with a different data pattern. Erasing a single hard drive will take many days or even weeks, so shredding will be more convenient...
 
Human error will always be present, even if the probability is low, and so will the temptation to cut corners when wiping that many drives: enterprise drives now come at 18-20 TB each, with more soon (some enterprise SSDs are already 30 TB), and secure erasing requires multiple passes, each with a different data pattern. Erasing a single hard drive will take many days or even weeks, so shredding will be more convenient...
Yeah, I actually forgot about HDD write times; these massive sizes make securely erasing data extremely time-consuming. Servers these days come with massive cache drives to compensate, but that doesn't help when you have to go over every sector several times.
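To put rough numbers on that (assuming ~250 MB/s of sustained sequential writes and a three-pass overwrite plus one verification read; the figures are illustrative, not from this thread), a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope multi-pass wipe time for large HDDs.
# Assumed figures (not from this thread): ~250 MB/s sustained sequential write,
# three overwrite passes plus one full read-back for verification.
TB = 1_000_000_000_000  # decimal terabyte, the way drive vendors count

def wipe_hours(capacity_tb: float, mb_per_s: float = 250,
               passes: int = 3, verify: bool = True) -> float:
    """Approximate hours to overwrite (and optionally read back) one drive."""
    total_bytes = capacity_tb * TB * (passes + (1 if verify else 0))
    return total_bytes / (mb_per_s * 1_000_000) / 3600

for size_tb in (18, 20, 30):
    print(f"{size_tb} TB drive: ~{wipe_hours(size_tb):.0f} hours")
# 18 TB -> ~80 h, 20 TB -> ~89 h, 30 TB -> ~133 h of pure sequential I/O per drive
```

Even under those optimistic assumptions, that is three to five days of continuous I/O per drive, before any retries or failed passes.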
 
How about YOU prove your claim first. Go on, prove up Mr Sunshine.
(we won't hold our breath waiting for you..)

From the article
Tech giants like Amazon, Microsoft, and Google upgrade their storage hardware every four or five years. They, along with banks, police services, and government agencies, shred an estimated tens of millions of obsolete storage devices yearly because exposing even small amounts of data can have considerable legal consequences, as a leak could anger regulators and damage consumer trust.
 
From the article
An assumptive statement in an article like this is not conclusive evidence. Don't be lazy; provide a statement directly from one of the discussed entities that shows they do not reuse the drives they retire.
 
An assumptive statement in an article like this is not conclusive evidence. Don't be lazy; provide a statement directly from one of the discussed entities that shows they do not reuse the drives they retire.
You know, it's funny that earlier in this chain you said I was the one grasping at straws. I've wasted enough time on this nonsense. Instead of trying to make me "look wrong", go find some sources and prove me wrong.

Have fun formatting hundreds of hard drives. I hope you think about this conversation next time you do it and I make you miss a few.
 
I've wasted enough time on this nonsense.
What, NOW?!? :laughing:
you said I was the one grasping at straws.
A few actually.
Instead of trying to make me "look wrong", go find some sources and prove me wrong.
I'm not the one touting dubious claims. You see, I practice what I preach.
Have fun formatting hundreds of hard drives.
It's a job. Fun is not how I would describe that part of it.
I hope you think about this conversation next time you do it
I will and that's when I will laugh, at you specifically.
and I make you miss a few
It's never happened yet and it's VERY unlikely that it ever will. Why, you ask? Simple: I'm actually GOOD at my job. I don't make mistakes like that. Ever. Thanks for the wonderful thought, though. Speaks infinitely more about you than me.
 
I worked for one of these companies, personally shredded some disks while visiting a data center, and implemented a workflow to wipe disks, so I can give you some food for thought.

It's never about not trusting the secure erasing of disks. We know it works, because we do occasionally have to send disks back to vendors for hardware troubleshooting, and obviously those can only be erased, not shredded. We only do this when the vendor's engineers can't troubleshoot onsite, or we fly them over instead of sending hardware out.

However, anyone who has worked in a large enough company knows that as long as software and humans are involved, there will be bugs, mistakes, or even internal bad actors. Not only do we securely erase every single disk before shredding it, they are all fully encrypted with a unique key to begin with. Shredding onsite is simply the last line of defense against potential mistakes in any part of the process. We lock down the entire data center if, at the end of the day, there is any mismatch between where the disks are and the inventory tracking system.

Just thinking about moving any disks out of the data center is a nightmare. The "send to vendor" case I mentioned above is handled with great care and a lot of overhead for very few disks. The cost of implementing such a process for every decommissioned disk would make whatever cost is recouped laughable, even if we assume such a process is fool-proof, which it will likely never be. And guess what? Even with all these processes, we still occasionally see eBay listings claiming to be hardware from our DCs, which we take down or just outright buy to help track down how it happened. Such incidents are always treated as major incidents to root-cause.

That's all because the consequences of a data leak are not a joke. Sometimes it's regulation for certain types of data. Other times it's the exact same media lamenting the shredding here that would jump on any leak and cause huge PR trouble. And if you store your customers' data, you don't want to risk losing their trust, which would likely cost way more than those disks are worth. Whoever wants "reuse" had better have the power to bail the companies out legally for any mistake that happens due to such a process. That likely means a standard process with certification that provides legal immunity. But even that won't solve the trust problem or the PR disaster, so I don't expect big companies to jump onboard without mandates.
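The unique-key-per-drive detail above is essentially what crypto-erase relies on: if a drive only ever holds ciphertext under its own key, destroying the key leaves nothing recoverable even before a wipe or shred. A minimal, purely illustrative sketch of the concept (a toy model, not any operator's actual tooling; the class and serial number are made up):

```python
# Toy illustration of crypto-erase: one unique key per drive, and discarding the
# key renders whatever ciphertext remains on the media unreadable.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

class Drive:
    def __init__(self, serial: str):
        self.serial = serial
        self.key = Fernet.generate_key()   # unique key generated for this drive only
        self.blocks: list[bytes] = []      # stands in for the data on the platters

    def write(self, data: bytes) -> None:
        self.blocks.append(Fernet(self.key).encrypt(data))

    def crypto_erase(self) -> None:
        self.key = None                    # destroy the key; ciphertext stays but is useless

d = Drive("SN-0001")                       # hypothetical serial number
d.write(b"customer records")
d.crypto_erase()
# d.blocks still holds ciphertext, but without the key it cannot be decrypted,
# which is why key destruction counts as one more layer before wiping and shredding.
```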
 
I worked for one of these companies, personally shredded some disks while visiting a data center, and implemented a workflow to wipe disks, so I can give you some food for thought.
Just out of curiosity, what happens to the materials after shredding? Do they get sent off for recycling? I'm sure there are lots of expensive metals in those drives that make them valuable scrap.
 
Whoever wants "reuse" had better have the power to bail the companies out legally for any mistake that happens due to such a process.
That only speaks to the level of competence existing at that company. Proper checks and verifications are all that is needed for secure data nullification.
 
Just out of curiosity, what happens to the materials after shredding? Do they get sent off for recycling? I'm sure there are lots of expensive metals in those drives that make them valuable scrap.
Yeah, it all goes to recycling. The circuit boards contain enough valuable metals that extracting them even from shredded form is still profitable, so it's not hard to find a local recycling partnership.

Edit: Thinking about it more, I am pretty sure that in most states/countries, not recycling certain kinds of electronics is illegal to begin with, regardless of the economics. Sure, some "recycling" facilities end up dumping them illegally, but for major operators to dump them directly would likely get them into big trouble.
 
That only speaks to the level of competence existing at that company. Proper checks and verifications are all that is needed for secure data nullification.
That's why I mentioned "a large enough company". Scale matters a lot. Handling a few hundred disks is very different from handling millions, especially when it only takes those same few disks to cause trouble.

A conceptually simple "proper checks and verification" will soon explode into all kinds of corner cases, with false-reporting drives, busted controllers, lockups during wiping, bogus data during read-back, etc. Our hardware teams routinely discover and report issues that result in hardware errata and firmware revisions. The more of these you know about, the less you can trust the hardware. When the stakes are high, you employ multiple layers of defense to minimize the risk.
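As a purely hypothetical sketch of why read-back verification gets messy, consider a spot-check that assumes the final pass was a zero fill: every anomaly it can hit (short reads, leftover data, a drive that hangs or drops off the bus) has to be routed to either re-wipe or physical destruction. None of the names or thresholds below come from the post:

```python
# Hypothetical spot-check of a wiped drive, assuming the last pass was a zero fill.
# It returns a disposition rather than a yes/no, because the corner cases above
# all need somewhere to go: 'pass', 'rewipe', or 'destroy'.
import random

SECTOR = 4096          # bytes per sampled block
SAMPLE_SECTORS = 1024  # how many random sectors to read back

def verify_wiped(device_path: str, capacity_bytes: int) -> str:
    try:
        with open(device_path, "rb") as dev:
            for _ in range(SAMPLE_SECTORS):
                dev.seek(random.randrange(capacity_bytes // SECTOR) * SECTOR)
                data = dev.read(SECTOR)
                if len(data) != SECTOR:   # short read: controller/firmware misbehaving
                    return "destroy"
                if any(data):             # non-zero bytes left after the zero-fill pass
                    return "rewipe"
    except OSError:                       # drive hung, dropped off the bus, or refused I/O
        return "destroy"
    return "pass"
```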
 
That's why I mentioned "a large enough company". Scale matters a lot.
This is true.
Handling a few hundred disks is very different from handling millions
Not so much. When you have IT at each of your facilities, it becomes much more manageable.
especially when it only takes those same few disks to cause trouble.
Not at all. Troublesome disks are physically destroyed. Those are in the very small minority, however.
A conceptually simple "proper checks and verification" will soon explode into all kinds of corner cases
Not at all. Those situations are easily handled. If it can't be wiped, it's destroyed.
with false-reporting drives
Triple checks remove that possibility.
busted controllers
Drive controller? The drive is destroyed. System drive array controller? That has never happened, but we have backups if it ever does. We use old server systems and racks to do the wiping in bulk, and we have spares for all three.
lockups during wiping
Reset and restart. In my experience, this rarely happens.
bogus data during read-back
That doesn't happen, and again, triple checks. We have three systems for wiping drives. Any one particular drive is wiped once, moved to the next rack, wiped again, moved to the third rack and wiped a third time. Any drive that fails along the way is sent back to the first station for reprocessing. Depending on the failure it's either sent through the process again or destroyed. Failures are not common, even for drives that have been in service for many years.

Our setup can process just over 1200 drives per work day. It's only used once a week or less because we just don't have the volume. It's efficient, guarantees data destruction and allows for drive reuse/repurposing or resale without the risk of liabilities.

As I said before, proper and safe drive wiping requires diligence and competence. Well-crafted methodologies and strict adherence to procedure ensure nothing gets past us.
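For anyone trying to picture that flow, here is a rough, hypothetical sketch of the three-station process described above; the retry limit, function names, and the wipe callback are illustrative, not the poster's actual setup:

```python
# Hypothetical model of the three-rack wipe flow: three clean passes in a row
# means the drive is eligible for reuse/resale, any failure sends it back to the
# first station, and drives that keep failing are physically destroyed.
from collections import deque

MAX_REPROCESS = 2  # illustrative retry limit, not the poster's actual policy

def process_batch(drives, wipe):
    """wipe(drive, station) -> True on a clean pass. Returns (reusable, destroyed)."""
    queue = deque((d, 0) for d in drives)        # (drive, times reprocessed)
    reusable, destroyed = [], []
    while queue:
        drive, retries = queue.popleft()
        if all(wipe(drive, station) for station in (1, 2, 3)):
            reusable.append(drive)               # three clean passes in sequence
        elif retries < MAX_REPROCESS:
            queue.append((drive, retries + 1))   # failed somewhere: back to station one
        else:
            destroyed.append(drive)              # persistent failures get shredded
    return reusable, destroyed

# Example with a wipe step that always succeeds:
reusable, destroyed = process_batch(["drive-A", "drive-B"], wipe=lambda d, s: True)
```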
 
We have three systems for wiping drives. Any one particular drive is wiped once, moved to the next rack, wiped again, moved to the third rack and wiped a third time. Any drive that fails along the way is sent back to the first station for reprocessing. Depending on the failure it's either sent through the process again or destroyed. Failures are not common, even for drives that have been in service for many years.

Our setup can process just over 1200 drives per work day. It's only used once a week or less because we just don't have the volume. It's efficient, guarantees data destruction and allows for drive reuse/repurposing or resale without the risk of liabilities.
You actually had the answer right there. This works for you, which is great, and I think small to medium operations should follow suit. However, for hyper-scalers, our process is not optimized for this kind of server work. Everything has to be done in-rack, online. No one is moving disks from one server to another just for wiping.

Our racks come in from the system integrator with servers and TOR fully connected and boot-tested. They are tugged into the DC hall and fixed in place. Then the power cable and TOR fiber are connected; that's literally two cables, and auto-discovery and provisioning happen. That's it. No human touches them afterwards other than ad hoc repairs for broken components.

Before decommissioning, automation wipes everything in the rack. Then a human disconnects the two cables and removes anything with data to shred. The entire rack, servers and all, moves directly onto the outgoing truck while new racks are moved in. Our DC operations are bound by how fast we can install new racks and decommission old ones, to the point that there are multiple generations of tug bots designed to free humans from moving the racks so they can focus on other tasks.

I know this sounds ridiculous, and it definitely is. Even though I had read through internal docs on how our DCs operate, witnessing it during a visit was quite a thing. The benefit is that the few tens of staff in each DC handle thousands of racks per quarter along with other tasks like repairs. The efficiency is mind-boggling, and you can probably see that when we operate in units of racks, anything ad hoc per server suddenly costs a whole lot more in comparison.

Perhaps "a large enough company" is too conservative. This kind of process probably only applies to the top few hyper-scalers, like the "tech giants" mentioned in the FT article. Sadly, that's also where most of the waste happens as the industry consolidates around big players, might even partially due to how efficient they operate in terms of cost.
 
It's efficient, guarantees data destruction and allows for drive reuse/repurposing or resale without the risk of liabilities.
What could be done with the drives might also be a major difference. I am curious what you actually do with them, if you don't mind sharing (obviously no business secrets or anything too specific).

For HDDs, servers don't use SATA, so consumers can't use them easily. As for DC operators, they likely procure hardware based on the same TCO and density requirements, so by the time our disks are decommissioned, they are no good for other operators, nor for internal reuse.

SSDs are even worse, since they are bought, used, and monitored based on endurance. By the time they are going out, they are pretty much burnt out. Though whatever is left is probably good enough for a normal consumer forever, the form factors popular with hyper-scale DCs aren't common in the consumer space either. Even when it's M.2, it's usually 22110, which won't fit most consumer motherboards.

The more I think about this, the more I realize our disagreement is perhaps simply hyper-scale vs. normal DCs. Form factor, automation, operating mode, and even server-to-staff ratio are all optimized very differently. It doesn't help that hyper-scalers' access to huge discounts makes shredding easier to swallow, compared to overwhelming a relatively small resale market with huge quantities. Remember this? Pretty sure that was Amazon's decommissioned servers trickling into the channel, since it matched AWS' hardware SKUs. If you search eBay for any CPU on the AWS hardware page, they are all sold at absurdly low prices relative to similar commercial models, even before decommissioning happens. Trying to dump the same quantity of storage probably wouldn't fare any better or be worth the trouble.
 
@wujj123456
I'm not willing to debate this at length. I'll just end by saying you've missed a few things and don't understand the scope of scale that exists for many, but not all, business entities.
 