Europol warns of criminal use of ChatGPT.

Europol warns that cybercriminal organizations can take advantage of systems based on artificial intelligence, such as ChatGPT.

Andrey Plat
Apr 30, 2023

EU police body Europol warned about the potential abuse of systems based on artificial intelligence, such as the popular chatbot ChatGPT, for cybercriminal activities. Cybercriminal groups can use chatbots like ChatGPT in social engineering attacks, disinformation campaigns, and other criminal activities, such as developing malicious code.

OpenAI’s ChatGPT is becoming even more attractive to cybercriminal organizations that are evaluating how to use its enormous capabilities.

“As the capabilities of Large Language Models (LLMs) such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provide a grim outlook,” reads the alert published by Europol.

The following three crime areas are amongst the many areas of concern identified by Europol’s experts:

  • Fraud and social engineering: ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes. The ability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.
  • Disinformation: ChatGPT excels at producing authentic sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.
  • Cybercrime: In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource to produce malicious code.

According to Europol, technologies like ChatGPT can significantly speed up each phase of an attack chain.

“As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home, to terrorism, cybercrime and child sexual abuse,” states the report published by Europol. “The identified use cases that emerged from the workshops Europol carried out with its experts are by no means exhaustive. Rather, the aim is to give an idea of just how diverse and potentially dangerous LLMs such as ChatGPT can be in the hands of malicious actors.”

The chatbot can also be abused by threat actors with little or no technical knowledge to carry out fraudulent activities, such as the development of malicious code.

The European police agency also warns of expected improvements in the capabilities of generative models such as ChatGPT. GPT-4, the latest release, has already made significant improvements over previous versions, and its capabilities can provide even more effective assistance for cybercriminal activities.

“The newer model is better at understanding the context of the code, as well as at correcting error messages and fixing programming mistakes. For a potential criminal with little technical knowledge, this is an invaluable resource,” concludes the report. “At the same time, a more advanced user can exploit these improved capabilities to further refine or even automate sophisticated cybercriminal modi operandi.”

The report highlights the importance of preparing the law enforcement community for both the positive and negative AI-based applications that may affect its daily business. Read the full report at the link below.

https://www.europol.europa.eu/publications-events/publications/chatgpt-impact-of-large-language-models-law-enforcement
