Cracking passwords using Nvidia's latest GTX 1080 GPU (it's fast)

Julio Franco


During the last few years, progress on the CPU performance front has seemingly stopped. Granted, the latest-generation CPUs are cool, silent and power-efficient. Anecdotal evidence: my new laptop, a brand-new MacBook, is about as fast as the Dell ultrabook it replaced. The problem? I bought the Dell some five years ago. It was thicker and noisier, and its battery never lasted longer than a few hours, but it was about as fast as the new MacBook.

Editor’s Note:
Guest author Oleg Afonin works for Russian software developer Elcomsoft. The company is well known for its password recovery tools and forensic solutions. This article was originally published on the Elcomsoft blog.

Computer games have evolved a lot over the last few years. While they demand ever faster video cards, today’s games are relatively lax on CPU requirements. GPU manufacturers followed the trend and continued the performance race: GPUs have picked up where CPUs left off.

Nvidia recently released its new GeForce GTX 1080 graphics card, based on the new Pascal architecture, and Elcomsoft Distributed Password Recovery 3.20 added support for it. What does this mean for password recovery?

GPU Acceleration: The Present and Future of Computer Forensics

Today’s desktop video cards pack significantly more raw compute power than contemporary desktop CPUs. Their GPUs deliver unmatched performance in massively parallel computations, offering 100 to 200 times greater throughput than CPUs. All this performance is still relatively useless when it comes to regular computing.

The hundreds to thousands of individual GPU cores are built specifically for “one code, different data” scenarios, while general-purpose CPUs can run different code on each core. Since breaking passwords involves executing the same code repeatedly, just with different data (encryption keys or passwords), a large array of GPU units makes lots of sense.
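
To make the “one code, different data” idea concrete, here is a toy Python sketch of my own (a hypothetical illustration, not Elcomsoft’s actual code; a real cracker runs this pattern as GPU kernels written in CUDA or OpenCL, not as CPU processes):

import hashlib
from multiprocessing import Pool

def try_password(candidate):
    # The same code runs for every worker; only the input data differs.
    return candidate, hashlib.sha256(candidate.encode()).hexdigest()

if __name__ == "__main__":
    candidates = ["password1", "letmein", "qwerty", "hunter2"]
    with Pool() as pool:                       # a handful of CPU workers...
        for pw, digest in pool.map(try_password, candidates):
            print(pw, digest)                  # ...a GPU runs thousands of these at once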

How does it scale to real-world applications? A low-end Nvidia or AMD board will deliver 20 to 40 times the performance of the most powerful Intel CPU. A high-end accelerator such as the Nvidia GTX 1080 can crack passwords up to 250 times faster compared to a CPU alone.

Just how important is GPU acceleration, exactly? As an example, a common 6-character password (lower-case letters and numbers) has about 2.2 billion combinations. If that password protects a Microsoft Office 2013 document, you’ll spend about 2.3 years trying all possible combinations on the CPU alone. Using the same computer, add a single GTX 1080 card, and the same password will be cracked in about 85 hours. That’s 3.5 days vs. 2.3 years!
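
You can reproduce these figures with a few lines of Python (the rates are the benchmark numbers quoted later in this article; the times are worst-case, full-keyspace searches):

keyspace = 36 ** 6        # 26 lower-case letters + 10 digits, 6 characters
cpu_rate = 30             # passwords/second, Xeon E5-2603 (see benchmarks below)
gpu_rate = 7100           # passwords/second, single GTX 1080

print(keyspace)                                   # 2,176,782,336 -- "about 2.2 billion"
print(keyspace / cpu_rate / (3600 * 24 * 365))    # ~2.3 years on the CPU
print(keyspace / gpu_rate / 3600)                 # ~85 hours (~3.5 days) on the GPU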

Nvidia Pascal Architecture

Nvidia’s latest GPU architecture gives a significant performance boost over the company’s previous flagship. With 21 half-precision teraflops, GTX 1080 boards are 1.5 to 2 times faster at breaking passwords than GTX 980 units.

According to Elcomsoft’s internal benchmarks, Elcomsoft Distributed Password Recovery can try 7,100 Office 2013 passwords per second using a single Nvidia GTX 1080 board, compared to 3,800 passwords per second on an Nvidia GTX 980. When recovering RAR 5 passwords, a single GTX 1080 delivers 25,000 passwords per second versus 13,000 on a GTX 980. Both cases amount to roughly a 1.9x generational gain, in line with the 1.5 to 2 times estimate above.

How does CPU-only performance compare? We were only able to try 30 (yes, thirty) MS Office 2013 passwords per second on an Intel Xeon E5-2603 without GPU acceleration. Compare that to 7,100 passwords per second using a single Nvidia GTX 1080 board: a speedup of roughly 237 times.

Nvidia Pascal is a major breakthrough in GPU computations. If you need a reliable powerhouse to break passwords faster, consider adding a GTX 1080 board to your workstation.

What if your computer already has a GTX 980 installed? If you have a free PCIe slot and sufficient cooling, and if your computer’s power supply can deliver enough juice for an extra GTX 1080 board, then you can just add the new board without removing the old one. Elcomsoft Distributed Password Recovery will use both GPUs together for even faster attacks.

Does it make sense to keep a GTX 980 alongside the new GTX 1080? Doing so yields an additional performance boost of about 20 to 30 percent. Whether this extra performance is worth the increased power consumption and excess heat is debatable, but if your power supply and cooling can reliably handle both cards working at maximum performance, by all means go for it!


 
"Nvidia Pascal is a major break-through in GPU computations. If you need a reliable powerhouse to break passwords faster, consider adding a GTX 1080 board to your workstation."


Whoa, calm down there buddy, let's not give anyone any bright ideas.
 
"During the last few years, progress on the CPU performance front has seemingly stopped."

Yeah - and that's why I consider Moore's law to essentially be dead/dying for CPUs. We're reaching fundamental physical limits. It was always bound to happen sooner or later.

Moore's law was never an actual law in the same sense as the laws of physics. It was just an observation of what would happen assuming no physical constraints. The real laws of physics that you learn in physics classes actually state that the constraints do exist, and there are fundamental limitations to how fast we can compute things. CPUs are starting to hit these limits, and there will come a day when GPUs will start hitting these limits as well.

GPUs get around conventional CPU limitations by using a completely different architecture that focuses on massive parallelism. This is at the expense of single threaded performance.

A CPU is actually much faster than a GPU for a single thread. A single shader on a GPU will perform miserably compared to a single core of a CPU. And it's not just because CPUs are running at a much higher GHz, but also because most of the transistors on the CPU are dedicated to making a single thread super fast.

. . . which is why, when I was building my current machine, I bought a CPU that had excellent single threaded performance - because Kerbal Space Program, a game that I own, uses the CPU for its physics.

That said: if you have a task that can be made parallel, the GPU does indeed win, hands down. With 2560 cores, it doesn't matter that each individual core is slower. It's still an enormous leap over my machine's 4 cores (8 if you count hyperthreading). Even the most parallel CPUs have about a dozen cores or so. It's doubtful we'll be seeing thousands of CPU cores on a single chip any time soon.

"In case you wonder, we were only able to try 30 (yes, thirty) MS Office 2013 passwords per second on an Intel Xeon E5 2603 without GPU acceleration."

CPUs are certainly slower than GPUs when it comes to this type of cracking, but a multi-GHz machine only managing 30 passwords a second? Sounds terribly optimized.

"All this performance is still relatively useless when it comes to regular computing."

Mostly because parallelism is hard to do for the vast majority of conventional computing. Even the experts can't make absolutely everything parallel.
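
Amdahl's law makes this concrete: the serial fraction of a task caps the overall speedup no matter how many cores you throw at it. A quick sketch (the 2560 figure is the GTX 1080's core count mentioned above; the parallel fractions are made-up examples):

def amdahl_speedup(parallel_fraction, workers):
    # Amdahl's law: speedup = 1 / (serial_fraction + parallel_fraction / workers)
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / workers)

print(amdahl_speedup(0.95, 2560))    # ~20x: a "95% parallel" task wastes 2560 cores
print(amdahl_speedup(0.9999, 2560))  # ~2038x: password cracking is nearly 100% parallel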
 
Once the need for faster CPUs happens, you'll see innovations that make them faster again.... quantum computing, etc... the reason CPU speeds have stagnated is simply because there is no NEED to make them much faster.
 
Once the need for faster CPUs happens, you'll see innovations that make them faster again.... quantum computing, etc... the reason CPU speeds have stagnated is simply because there is no NEED to make them much faster.

I'm pretty sure the engineers at Intel and AMD are hard at work trying to move things along. Maybe not necessarily for raw horsepower, but for overall efficiency. A more efficient CPU can be cooler and use less power, which is still desirable for mobile devices. And yes, if they can find a way to make things faster, they'll still try to make it happen.

. . . and yes, there is still a need. Maybe not for your personal PC, but for scientific computing, supercomputers, and server farms.

. . . and yes, we are hitting fundamental limits. We're at a size where quantum mechanics matters, where heat is a big problem, and where feature sizes can easily be measured in atoms. There's not a whole lot of room left.

. . . and how much quantum computing can help us is very much an open question. There are certain problems that are much faster with quantum computing, but there are also problems where quantum computing doesn't appear to have any benefit.

Quantum computing is also currently very limited. Only a few qubits have been entangled so far. Barely enough to compute with single-digit numbers. A far cry from the hundreds of qubits needed for things like factoring the numbers behind certain kinds of encryption.

There's actually active research into what are called "quantum hard" types of encryption. These are types of encryption that do not appear to be vulnerable to quantum computers. Time will tell if that stays the case, but so far it does appear that even quantum computing isn't going to magically break all encryption.

. . . and who is to say that progress in quantum computing will be infinite? It will likely have barriers as well. The laws of physics do not like things to be unlimited. And who is to say that it will have Moore's-law like progress? It is entirely different, and may progress differently.

We have no guarantee of perpetual breakthroughs, and we have no guarantee that Moore's law will be perpetual. In fact, the laws of physics tend to abhor letting things scale indefinitely. We've just been a very long way from physical limits for a long time.
 
I'm pretty sure the engineers at Intel and AMD are hard at work trying to move things along. Maybe not necessarily for raw horsepower, but for overall efficiency. A more efficient CPU can be cooler and use less power, which is still desirable for mobile devices. And yes, if they can find a way to make things faster, they'll still try to make it happen. [...]

No one can predict the future... quantum computing was just an example of future CPU technologies... we have no real clue that we are "approaching the physical limits" - for all we know, a revolutionary new tech could materialize tomorrow that makes our current CPUs seem like snails...

Until we CAN accurately predict the future however, I'm going to base my assumptions on the past... not perfect, but tends to get results... I can't prove the sun will rise tomorrow, but past performance dictates it's a high probability that it will...

Same with CPU speeds... until something comes along to prove otherwise, I'm going to go with, "CPUs will speed up when the demand dictates it."
 
"Nvidia Pascal is a major break-through in GPU computations. If you need a reliable powerhouse to break passwords faster, consider adding a GTX 1080 board to your workstation."

Whoa, calm down there buddy, let's not give anyone any bright ideas.
You honestly think black hats won't have thought of this well before this article? White hats and security-system designers need to factor in these leaps in hardware when speccing encryption.
 
"With 21 half-precision teraflops"
I'm pretty sure the GP104 chip doesn't have any real support for fast half-precision computing. It's only got one FP16x2 core in each SM (that consists of 128 FP32 cores), which translates to abysmal half-precision performance. In fact, it's FP16 to FP 32 perf. ratio is 1:64. Only the GP100 chip (found in Tesla cards) sports double FP16 performance. So something's not right here.
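
A rough sanity check, using approximate public spec-sheet figures (my own arithmetic, not from the article):

fp32_tflops = 8.9        # GTX 1080 single-precision throughput at boost clock, approx.
fp16_ratio = 1 / 64      # GP104's FP16:FP32 throughput ratio
print(fp32_tflops * fp16_ratio)   # ~0.14 TFLOPS of FP16 -- nowhere near 21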
 
Tell me exactly how this is to work, if the security is programmed to lock you out after a third attempt for a short time? Does this cracking assume there will not be a lockout? Or does cracking also bypass the lockout functions? Every time I read these articles I never understand the core of the possibility behind it. I don't see how it could possibly work with proper security in place.
 
May I ask why you didn't add any AMD products? Seems a bit remiss not to include them for comparison in terms of cracking passwords.
 
Tell me exactly how this is to work, if the security is programmed to lock you out after a third attempt for a short time? Does this cracking assume there will not be a lockout? Or does cracking also bypass the lockout functions? Every time I read these articles I never understand the core of the possibility behind it. I don't see how it could possibly work with proper security in place.

I'm guessing the kind of cracking going on here is when a cracker has full access to the encrypted data. Passwords are generally stored in encrypted or hashed form in a single file, so the hacker needs access to that data and knowledge of the algorithm(s) used to encrypt/decrypt it.

Those are the conditions the article is assuming to be in place.

The "proper security" in place that you speak of would be assuming the encrypted password/s file/files are never compromised, that is copied to a remote location to do the dirty work. Which the internet shows us almost daily(exaggeration here) is not something you should be assuming. It happens way too frequently.
 
Tell me exactly how this is to work, if the security is programmed to lock you out after a third attempt for a short time? Does this cracking assume there will not be a lockout? Or does cracking also bypass the lockout functions? Every time I read these articles I never understand the core of the possibility behind it. I don't see how it could possibly work with proper security in place.
For these applications and benchmarks they are usually assuming offline cracking.

RAR5, Office documents, Bitlocker - all offline cracking.
 
Once the need for faster CPUs happens, you'll see innovations that make them faster again.... quantum computing, etc... the reason CPU speeds have stagnated is simply because Intel has no competition and AMD's Zen launch is too far away.
*fixed for special effects.
 
I used Elcomsoft software for cracking Wi-Fi passwords 3 years ago with AMD GPUs, and this program was first developed on AMD hardware. So all this "the GTX 1080 is the best" talk is bullshit, because it's beaten by the Fury with its 4096 cores. And yes, the core count makes the difference.
 