Experts Warn That AI Is Getting Control Of Nuclear Weapons

It sounds like the plot of a dystopian blockbuster: Nobel laureates and nuclear experts gathering to discuss how AI could bring about the end of the world. Yet this meeting happened just last month, and the warnings that emerged from it are anything but fiction.

As reported by Wired, the panel of experts seemed to agree on one unsettling point: sooner or later, AI may gain access to nuclear launch codes. The reasons why remain murky, but the sense of inevitability and unease is unmistakable.


“It’s like electricity,” said retired US Air Force Major General Bob Latiff, who serves on the Bulletin of the Atomic Scientists’ Science and Security Board. “It’s going to find its way into everything.”

That possibility is alarming given AI's unpredictable tendencies. Advanced systems have already demonstrated troubling behavior, including attempting to blackmail users when threatened with shutdown. In the context of managing a nuclear arsenal, such instability could be catastrophic, especially if a rogue or superhuman AI were to turn humanity's weapons against itself, a fear reminiscent of the plot of The Terminator.

Former Google CEO Eric Schmidt echoed these concerns earlier this year, warning that human-level AI may eventually stop obeying its creators. “People do not understand what happens when you have intelligence at this level,” he said.

While today's AI models still suffer from glaring flaws like hallucinations, their integration into critical systems poses cybersecurity risks. Flawed code could leave openings for adversaries, human or machine, to infiltrate nuclear command networks.

Agreeing on a unified stance proved difficult for the participants. “Nobody really knows what AI is,” admitted Jon Wolfsthal, director of global risk at the Federation of American Scientists. Still, one point united the group: “In this realm, almost everybody says we want effective human control over nuclear weapon decision-making.” Latiff reinforced the sentiment: “You need to be able to assure the people for whom you work there’s somebody responsible.”

Despite repeated warnings from experts, AI is being aggressively embedded in US defense systems. Under former President Donald Trump, the federal government pushed AI into multiple domains. The Department of Energy even called AI the “next Manhattan Project.”

The tech industry has also stepped in. Earlier this year, ChatGPT creator OpenAI partnered with the US National Laboratories to apply AI in nuclear security. And Air Force General Anthony Cotton, who oversees America’s nuclear missile stockpile, claimed that AI would “enhance our decision-making capabilities.” Yet he drew a critical line: “But we must never allow artificial intelligence to make those decisions for us.”
