Experts have been sounding the alarm about the potential misuse of large language models like OpenAI’s ChatGPT, and their warnings have taken on a more ominous tone with the emergence of a ChatGPT alternative called WormGPT. This artificial intelligence bot, revealed by cybersecurity firm SlashNext, has been trained specifically on malware data and lacks the safety precautions built into ChatGPT and Google’s Bard. WormGPT can be readily prompted to create sophisticated malware: screenshots published by PCMag show it generating Python-based malicious software. Its creator, who has been selling access to the bot on a hacker forum since March, proudly claims that it can perform “all sorts of illegal stuff.” The bot is built on GPT-J, an older language model developed by the non-profit group EleutherAI.
WormGPT’s capabilities are concerning: according to SlashNext, it can produce well-written and highly persuasive phishing emails. The cybersecurity firm expressed unease at the bot’s strategic cunning and warned of its potential to power sophisticated phishing and Business Email Compromise (BEC) attacks. However, PCMag noted that at least one user criticized WormGPT for its subpar performance, deeming it unworthy of purchase.
While the bot’s current effectiveness may be questionable, its existence raises serious concerns about the future of AI-driven cybercrime. The ease with which WormGPT can be used to generate sophisticated malware and craft persuasive phishing emails foreshadows the challenges that lie ahead for cybersecurity. Safeguarding our finances and personal data may become even more challenging as malicious actors leverage AI technology to their advantage.
The emergence of WormGPT serves as a stark reminder of the potential dangers associated with the unchecked use of AI in nefarious activities. It underscores the need for robust safety guardrails and responsible development practices within the AI industry. As the world grapples with increasingly sophisticated cyber threats, it is crucial for researchers, policymakers, and cybersecurity professionals to remain vigilant and proactive in developing effective countermeasures to mitigate the risks posed by AI-driven cybercrime.