by Oli Buckley and Jason R.C. Nurse, The Conversation

Artificial intelligence (AI) tools aimed at the general public, such as ChatGPT, Bard, Copilot and DALL-E, have incredible potential to be used for good.

The benefits range from helping doctors diagnose disease to expanding access to professional and academic expertise. But those with criminal intentions could also exploit and subvert these technologies, posing a threat to ordinary citizens.

Criminals are even creating their own AI chatbots to support hacking and scams.

The wide-ranging risks and threats posed by AI are underlined by the publication of the UK government's Generative AI Framework and the National Cyber Security Centre's guidance on the potential impacts of AI on online threats.

There is an increasing variety of ways that generative AI systems like ChatGPT and DALL-E can be used by criminals. Because ChatGPT can create tailored content from a few simple prompts, one way criminals could exploit it is in crafting convincing scams and phishing messages.

A scammer could, for instance, put some basic information (your name, gender and job title) into a large language model (LLM), the technology behind AI chatbots like ChatGPT, and use it to craft a phishing message tailored just for you. This has been reported to be possible, even though mechanisms have been implemented to prevent it.

LLMs also make it feasible to conduct large-scale phishing scams, targeting thousands of people in their own native language. It's not conjecture either. Analysis of underground hacking communities has uncovered a variety of instances of criminals using ChatGPT, including for fraud and for creating software to steal information. In another case, it was used to create ransomware.

Malicious chatbots

Entire malicious variants of large language models are also emerging. WormGPT and FraudGPT are two such examples that can create malware, find security vulnerabilities in systems, advise on ways to scam people, support hacking and compromise people's electronic devices.

Love-GPT is one of the newer variants and is used in romance scams. It has been used to create fake dating profiles capable of chatting to unsuspecting victims on Tinder, Bumble and other apps.

As a result of these threats, Europol has issued a press release about criminals' use of LLMs. The US Cybersecurity and Infrastructure Security Agency (CISA) has also warned about generative AI's potential effect on the upcoming US presidential election.

Privacy and trust are always at risk as we use ChatGPT, Copilot and other platforms. As more people look to take advantage of AI tools, there is a high likelihood that personal and confidential corporate information will be shared. This is a risk for two reasons: LLMs usually use any data input as part of their future training dataset, and, if they are compromised, they may share that confidential data with others.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: Cybercriminals are creating their own AI chatbots to support hacking and scam users (2024, February 9) retrieved 9 February 2024 from https://techxplore.com/news/2024-02-cybercriminals-ai-chatbots-hacking-scam.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.