Snowden used basic 'web crawler' software to scrape data from NSA

February 9, 2014, 9:15 AM
nsa, top secret, spider, classified documents, edward snowden, web crawler

Edward Snowden’s treasure trove of classified National Security Agency documents has forever changed how the world views privacy. But as everyone got caught up in the ongoing leaks, nobody publicly questioned exactly how Snowden got his hands on the information – until now.

One could be forgiven for thinking that Snowden used some elaborate hacking software to siphon top secret documents during his time as a contractor for the agency. But truth be told, his methods were far more rudimentary.

According to a senior intelligence official, Snowden used ordinary “web crawler” software of the kind typically used to search, index and back up a website. The software ran in the background while he worked, scraping the agency’s systems and ultimately accessing an estimated 1.7 million files. Although he appears to have set specific search parameters, such as which subjects to look for and how deep to follow links, the activity should have been easy to detect.
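For readers unfamiliar with the term, the basic idea behind such a tool is simple. The sketch below is purely illustrative – Snowden’s actual software has never been published – and shows a minimal depth-limited crawler in Python: it fetches a page, keeps a local copy, extracts the links, and queues them for further crawling. The seed URL, depth and page limits here are made-up placeholders.

```python
# Illustrative sketch of a depth-limited web crawler; not Snowden's actual tool.
# Uses only the Python standard library; the seed URL and limits are hypothetical.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags on a fetched page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_depth=2, max_pages=50):
    """Breadth-first crawl: fetch a page, save a copy, queue its links up to max_depth."""
    seen = {seed_url}
    queue = deque([(seed_url, 0)])
    pages = {}

    while queue and len(pages) < max_pages:
        url, depth = queue.popleft()
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable pages

        pages[url] = html  # "back up" the page by keeping a local copy

        if depth < max_depth:
            parser = LinkExtractor()
            parser.feed(html)
            for link in parser.links:
                absolute = urljoin(url, link)
                if absolute.startswith("http") and absolute not in seen:
                    seen.add(absolute)
                    queue.append((absolute, depth + 1))

    return pages


if __name__ == "__main__":
    # Hypothetical starting point; a well-behaved crawler would also honor
    # robots.txt and rate limits.
    results = crawl("https://example.com", max_depth=1)
    print(f"Fetched {len(results)} pages")
```

The point is that nothing here is exotic: a background process that follows links and saves what it finds is exactly the kind of automated, high-volume activity that monitoring systems are designed to flag.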

Key to Snowden’s success was the fact that he was working for the agency in Hawaii, a location that had not yet been outfitted with the latest security measures.

Had he been at the NSA’s headquarters in Fort Meade, Maryland, he almost certainly would have been caught. Agency officials said systems at that location are constantly monitored, and accessing and downloading such a large volume of data would have been noticed.

Snowden’s behavior did draw attention a few times, but officials say he was able to deflect scrutiny by offering legitimate-sounding explanations for his activity.
