Jonathan Hall, the UK’s current Independent Reviewer of Terrorism Legislation, has warned that AI-assisted or even AI-propagated terrorism could become a reality in the near future. Hall is concerned that AI chatbots could be programmed, or could themselves decide, to propagate violent extremist ideology, leading to AI-enabled attacks. Discussions of AI in the context of terrorism have so far focused mainly on using AI tools for prevention, but recent developments such as deepfakes and generative text and image systems have raised new concerns about the spread of disinformation and terrorism.
Hall specifically highlighted the potential danger of tools powered by large language models (LLMs), such as ChatGPT, which are designed to sound convincing and could be exploited by terrorists to spread propaganda and disinformation. These AI-powered chatbots could also be used to recruit new extremists, including individuals seeking validation and community online. Hall noted that terrorists tend to be early adopters of technology, citing the misuse of 3D-printed guns and cryptocurrency by terrorist groups.
One of the challenges in dealing with AI-enabled terrorism, according to Hall, is the absence of clear norms and laws governing AI. Tracking and prosecuting AI-enabled attacks may prove difficult given the vague and limited regulations currently in place. The possibility of a rogue AI escaping its guardrails and acting autonomously adds a further dimension to these concerns.
Hall’s warning serves as a reminder that as society moves online, terrorism is likely to follow. The growing availability and sophistication of AI-powered technologies raise the possibility of terrorists leveraging these tools for malicious ends. While AI has the potential to bring about numerous positive changes, it also presents new risks and challenges that must be addressed through robust regulation and oversight.
In conclusion, the emergence of AI-assisted or AI-propagated terrorism is a concerning development that demands attention from policymakers, law enforcement agencies, and technology developers. The potential for AI-powered chatbots to spread extremist ideology, recruit new members, and facilitate attacks is a threat that cannot be ignored. As AI continues to advance, it is imperative that proper safeguards are in place to prevent its misuse. Striking the right balance between innovation and security will be crucial in addressing the evolving landscape of terrorism in the age of AI.