Europol warns of ChatGPT's potential criminal applications

midian182

What just happened? It's amazing how much ChatGPT can do, from writing essays and emails to creating programming code. But its abilities are easily abused. The European Union Agency for Law Enforcement Cooperation (Europol) has become the latest organization to warn that criminals will use the chatbot for the likes of phishing, fraud, disinformation, and general cybercrime.

Europol notes that Large Language Models (LLMs) are advancing rapidly and have now entered the mainstream. Numerous industries are adopting LLMs, including criminal enterprises.

"The impact these types of models might have on the work of law enforcement can already be anticipated," Europol wrote. "Criminals are typically quick to exploit new technologies and were fast seen coming up with concrete criminal exploitations, providing the first practical examples mere weeks after the public release of ChatGPT."

Europol notes that ChatGPT's ability to draft text based on a few prompts makes it ideal for phishing attacks. These emails are usually identifiable by their spelling and grammatical errors or suspicious content, tell-tale signs that ChatGPT can avoid. The tool can also write in specific styles based on the type of scam, increasing the chances of a successful social engineering play.

Additionally, ChatGPT can produce authentic-sounding text at speed and scale, making it a perfect tool for propaganda and disinformation purposes.

But possibly the most dangerous aspect of ChatGPT is that it can write malicious code for cybercriminals who have little or no knowledge of programming. Europol writes that the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing. "If prompts are broken down into individual steps, it is trivial to bypass these safety measures."

Based on previous reports, OpenAI's service is already being abused in this way. Security researchers in January discovered ChatGPT being used on cybercrime forums as both an "educational" tool and a malware-creation platform. The chatbot could also be used to answer technical queries about hacking into networks or escalating privileges.

ChatGPT's uses aren't limited to creating specific texts or code. A potential criminal could use it to learn about a particular crime area, such as terrorism or child abuse. While this information could be found on the internet, ChatGPT makes it easier to discover and understand thanks to how the query result is presented. There's also the potential for creating a filter-free language model that could be trained on harmful data and hosted on the dark web.

Finally, Europol warns of the danger that ChatGPT user data, such as sensitive queries, could be exposed. This already happened a week ago when the service was temporarily shut down after it started showing other users' chat history titles. The contents were not exposed, but it was still a major privacy incident.

Europol isn't the only agency to warn of the potential dangers posed by chatbots. The UK's National Cyber Security Centre (NCSC) issued a similar warning earlier this month.

Masthead: Emiliano Vittoriosi


 
They're in the business of selling protection, so that’s why they sow the seeds of fear about potential dangers.

But the risks are infinitesimal and don't justify everyone paying huge sums of money through taxes to maintain useless structures, so it's fraud, which is criminal behavior.

For example, when a mother is told that her child is in danger, it is very easy to imagine someone as a potential threat, but in reality it is much harder to find someone who genuinely cares about that little person, who is only good at making a fuss.

That's why it might be better for society if women didn't have political rights, because it's very easy to be fooled or frightened by a bad political actor (which is also a form of phishing).
 
> They're in the business of selling protection, so that’s why they sow the seeds of fear about potential dangers.

> But the risks are infinitesimal and don't justify everyone paying huge sums of money through taxes to maintain useless structures, so it's fraud, which is criminal behavior.

As I see it, the threat is not infinitesimal. Maybe it is ATM; give it time, though, and more and more bad actors will pile onto the boat. People are always looking for a way to make an easy buck.

> For example, when a mother is told that her child is in danger, it is very easy to imagine someone as a potential threat, but in reality it is much harder to find someone who genuinely cares about that little person, who is only good at making a fuss.

I think your argument is a straw man. There's a big difference between having a smart computer write malicious code for some Dumb A$$ who then uses it to steal identities, etc., and a mother, or father for that matter, worrying about the safety of their children.

> That's why it might be better for society if women didn't have political rights, because it's very easy to be fooled or frightened by a bad political actor (which is also a form of phishing).

Misogynist much?
 
This is not about ChatGPT; it's about the clones that will come along and have no regard for the law. I mean, you can't get anything out of ChatGPT if it would break the law. I've attempted that, though on a different topic, where such practices were perfectly legal in other countries.

The model is there. You could create an AI bot for every niche you can think of.
 
"Additionally, ChatGPT can produce authentic-sounding text at speed and scale, making it a perfect tool for propaganda and disinformation purposes."

Well, it's really awful that the Mass Media has got unfair competition, right?

So far you could expect convincing lies from state media, NGOs, and various international organizations (e.g., the WHO), which could pay professional spin doctors, and even top scientists, to lie for them. But now, on top of this professional propaganda, the ordinary Joe can produce convincing lies. Or, what's even worse... the truth. We must stop that. Especially the truth.
 