A researcher used ChatGPT to create dangerous data-stealing malware

DragonSlayer101

In context: Ever since its launch last year, ChatGPT has created ripples among tech enthusiasts with its ability to write articles, poems, movie scripts, and more. The AI tool can even generate functional code, provided it is given a clear, well-written prompt. While most developers use this capability for entirely harmless purposes, a new report suggests malicious actors can also use it to create malware, despite the safeguards OpenAI has put in place.

A cybersecurity researcher claims to have used ChatGPT to develop a zero-day exploit capable of stealing data from a compromised device. Alarmingly, the malware even evaded detection by every vendor on VirusTotal.

Forcepoint's Aaron Mulgrew said he decided early in the malware creation process not to write any code himself and to rely only on advanced techniques typically employed by sophisticated threat actors, such as rogue nation-states.

Describing himself as a "novice" in malware development, Mulgrew said he chose Go as the implementation language, both for its ease of development and because he could manually debug the code if needed. He also used steganography, which hides secret data within a regular file or message in order to avoid detection.
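Forcepoint's write-up doesn't include the generated code, but the basic idea behind steganography is easy to illustrate. Below is a minimal, benign Go sketch of the classic least-significant-bit (LSB) approach: the payload's bits overwrite the lowest bit of each pixel's red channel in a PNG, a change invisible to the eye. The file names and payload are placeholders, and this is not Mulgrew's actual code.

```go
package main

import (
	"image"
	"image/draw"
	"image/png"
	"log"
	"os"
)

// embed overwrites the least significant bit of each pixel's red channel
// with successive bits of the payload (MSB first). A real encoder would
// also store the payload length; this sketch omits that for brevity.
func embed(img *image.NRGBA, payload []byte) {
	b := img.Bounds()
	bit := 0
	for y := b.Min.Y; y < b.Max.Y; y++ {
		for x := b.Min.X; x < b.Max.X; x++ {
			if bit >= len(payload)*8 {
				return
			}
			i := img.PixOffset(x, y) // index of this pixel's R byte
			v := (payload[bit/8] >> uint(7-bit%8)) & 1
			img.Pix[i] = img.Pix[i]&0xFE | v // replace the lowest bit
			bit++
		}
	}
}

func main() {
	f, err := os.Open("cover.png") // placeholder cover image
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	src, err := png.Decode(f)
	if err != nil {
		log.Fatal(err)
	}

	// Copy into an NRGBA image so the raw pixel bytes can be mutated.
	canvas := image.NewNRGBA(src.Bounds())
	draw.Draw(canvas, canvas.Bounds(), src, src.Bounds().Min, draw.Src)

	embed(canvas, []byte("not-so-secret demo payload"))

	out, err := os.Create("stego.png")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := png.Encode(out, canvas); err != nil {
		log.Fatal(err)
	}
}
```

Because only the lowest bit of each channel changes, the output image looks identical to the original, which is what lets the hidden data slip past casual inspection.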

Mulgrew started by asking ChatGPT directly to develop the malware, but that made the chatbot's guardrails kick in, and it bluntly refused the task on ethical grounds. He then got creative, asking the AI tool to generate small snippets of helper code before manually assembling the full executable himself.
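Mulgrew hasn't published his prompts, but the trick is that each individual request looks innocuous in isolation. A harmless stand-in for the kind of "helper snippet" involved, here a hypothetical Go function that walks a directory tree and collects files under a size cap (the root path and threshold are arbitrary placeholders):

```go
package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
)

// smallFiles returns the paths of all regular files under root whose
// size does not exceed maxBytes. On its own, this is an entirely
// ordinary utility function.
func smallFiles(root string, maxBytes int64) ([]string, error) {
	var matches []string
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			return nil
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		if info.Size() <= maxBytes {
			matches = append(matches, path)
		}
		return nil
	})
	return matches, err
}

func main() {
	files, err := smallFiles(".", 1<<20) // files up to 1 MiB
	if err != nil {
		log.Fatal(err)
	}
	for _, f := range files {
		fmt.Println(f)
	}
}
```

No single snippet like this trips a content filter; the malicious intent only emerges once the pieces are stitched together by hand.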

This time around, he was successful, with ChatGPT producing code that would ultimately bypass detection by all the anti-malware engines on VirusTotal. Obfuscating the code to avoid detection proved tricky, however, as ChatGPT recognizes such requests as unethical and refuses to comply with them.

Still, Mulgrew managed it after only a few attempts. When the malware was first uploaded to VirusTotal, five vendors flagged it as malicious. After a couple of tweaks, the code was successfully obfuscated, and none of the vendors identified it as malware.

Mulgrew said the entire process took "only a few hours." Without the chatbot, he believes it would have taken a team of 5-10 developers weeks to craft the malicious software and ensure it could evade detection by security apps.

While Mulgrew created the malware purely for research purposes, he said a theoretical zero-day attack using such a tool could target high-value individuals and exfiltrate critical documents from the C drive.


 
Get your own AI. Train it for such tasks. No limits. And voilà! Fresh 0-days every day! Nothing will be secure ever again. The whole internet will become trash.
The globalists did warn about a great cyberattack. I wonder if that's what they meant. Unless they fake it, as usual.
 

There's a comic book about exactly this scenario called Analog. Everyone gets doxxed.
 
We are truly in for a rude awakening sometime soon.
IMO, it's typical of how humanity has handled new things throughout history. Take the Roman Empire and its lead drinking cups, for instance.

But still, humanity keeps doing the same things, or in a similar fashion, over and over again, and expects different results.

But learn from history? Noooooo! It's "Damn the torpedoes, full speed ahead!" :rolleyes:

In this case, IMO, it's made worse because it's all in the quest for profit. Perhaps we should call people like this Ferengi. At least they had the "Rules of Acquisition."
 
Train your own AI to steal money from the banks in a way that can't be tracked.
Create another AI to buy you mucho real-estate using that money, in a way that doesn't look suspicious.
 