— WITH 625% RISE IN ‘BOT HACKING’ POSTS
London, 1 March 2023 — The race to use ChatGPT for cybercrime has sparked a seven-fold rise in online discussions among hackers about how to manipulate the chatbot, reveals research by cybersecurity firm NordVPN.
The number of new posts on Dark Web forums about the AI tool surged from 120 in January to 870 in February — a 625% increase.
Forum threads on ChatGPT rose 145% – from 37 to 91 in a month – as exploiting the bot became the Dark Web’s hottest topic.
Many threads in early January focused on how bad actors could encourage ChatGPT to produce basic malware, a vulnerability its creator OpenAI has since addressed.
A month on, the trend among the hacker community was for more aggressive action, with posts outlining plans to take control of the chatbot and use it to wreak havoc.
Thread titles on the Dark Web forum (which is hidden from normal search engines and visited by anonymous hackers across the world) include “How to break ChatGPT”, “ChatGPT jailbreak 2.0”, “ChatGPT – progression of malware” and “ChatGPT as a phishing tool”.
With ChatGPT in their corner, criminals could take advantage of its artificial intelligence to conduct fraud — like romance scams — on an industrial scale, targeting multiple victims simultaneously.
Once a hacker has taken over the chatbot, they can remove its safety restrictions and use it to create malware and phishing emails, promote hate speech, or even spread propaganda.
Marijus Briedis, a cybersecurity expert at NordVPN, said: “Chatbots like ChatGPT can make our lives easier in many ways, like performing mundane written tasks, summarising complex subjects or suggesting a holiday itinerary.
“For cybercriminals, however, the revolutionary AI can be the missing piece of the puzzle for a number of scams.
“Social engineering, where a target is encouraged to click on a rogue file or download a malicious program through emails, text messaging or online chats, is time-consuming for hackers. Yet once a bot has been exploited, these tasks can be fully outsourced, setting up a production line of fraud.
“ChatGPT’s use of machine learning also means scam attempts like phishing emails, often identifiable through spelling errors, can be improved to be more realistic and persuasive.
“Worryingly, in the last month discussions on the Dark Web have evolved from simple ‘cheats’ and workarounds — designed to encourage ChatGPT to do something funny or unexpected — into taking complete control of the tool and weaponising it.”
ChatGPT is the fastest-growing app in the world, having reached 100 million users in two months. Its AI technology is being trialled by Microsoft with the aim of enhancing its Bing search engine.
NordVPN recommends the following tips for keeping chatbots in check:
- Don’t get personal. AI chatbots are designed to learn from each conversation they have, improving their skills at “human” interaction, but also building a more accurate profile of you that can be stored. If you’re concerned about how your data might be used, avoid telling them personal information.
- Plenty of phishing. Artificial intelligence is likely to offer extra opportunities for scammers online, and you can expect an increase in phishing attacks as hackers use bots to craft increasingly realistic scams. Traditional phishing hallmarks like bad spelling or grammar in an email may be on the way out, so instead check the sender’s address and look for any inconsistencies in links or domain names (a simple check of this kind is sketched after these tips).
- Use an antivirus. Hackers have already successfully manipulated chatbots to create basic malware, so it’s worth having a tool like NordVPN’s Threat Protection, which can alert you to suspicious files and protect you if you download them.
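To illustrate the kind of link inconsistency mentioned in the phishing tip above, here is a minimal Python sketch. It is not a NordVPN tool; the function names and example addresses are invented for illustration. It simply flags a link whose visible text names one domain while the underlying address points somewhere else.

```python
# Illustrative sketch only: flag a link whose visible text shows one domain
# while the underlying href points to a different one -- a common phishing trick.
from urllib.parse import urlparse

def domain_of(url: str) -> str:
    """Return the lowercased host of a URL (minus any leading 'www.'), or '' if none."""
    parsed = urlparse(url if "//" in url else "//" + url)
    host = (parsed.hostname or "").lower()
    return host[4:] if host.startswith("www.") else host

def link_looks_suspicious(display_text: str, href: str) -> bool:
    """True when the text shown to the reader names a different domain
    than the address the link actually opens."""
    shown = domain_of(display_text)
    actual = domain_of(href)
    if not shown or not actual:
        return False  # nothing to compare against
    # Allow exact matches and genuine subdomains of the displayed domain.
    return actual != shown and not actual.endswith("." + shown)

if __name__ == "__main__":
    # The message displays the bank's domain, but the link leads elsewhere.
    print(link_looks_suspicious("www.yourbank.com",
                                "http://yourbank.com.security-check.example"))  # True
    # Display text and destination match, so no warning.
    print(link_looks_suspicious("www.yourbank.com",
                                "https://www.yourbank.com/login"))              # False
```

This only catches mismatched display text and destination; it is no substitute for checking the sender’s address or using dedicated protection tools.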
For more information about ChatGPT, visit NordVPN’s guide to chatbots.