AI is transforming industries across the globe, and one of its most controversial applications is in modern warfare. The Pentagon has openly embraced AI, touting its ability to streamline military operations and, disturbingly, speed up lethal processes.
In a recent interview with TechCrunch, Radha Plumb, the Pentagon’s chief digital and AI officer, shared her thoughts on how AI is reshaping military strategy. Plumb’s comments offered a stark glimpse into the Pentagon’s AI-driven advancements, particularly in the military’s “kill chain” — a term defined by a 2023 Mitchell Institute white paper as the series of steps militaries use to attack targets. These steps include “find, fix, track, target, engage, and assess.”
Speaking candidly, Plumb admitted, “We obviously are increasing the ways in which we can speed up the execution of kill chain, so that our commanders can respond in the right time to protect our forces.” While the stated goal of protecting military personnel may sound noble, the underlying implication is clear: AI is being used to accelerate lethal decision-making in combat scenarios.

Plumb highlighted how generative AI plays a critical role in the planning and strategizing phases of military operations. “Playing through different scenarios is something that generative AI can be helpful with,” she explained. This includes using AI to assess threats, evaluate response options, and analyze potential trade-offs.
By simulating countless possibilities, AI equips commanders with the tools to respond swiftly and decisively in high-pressure situations. However, as Plumb pointed out, humans remain involved in final decision-making processes — at least for now. “As a matter of both reliability and ethics, we’ll always have humans involved in the decision to employ force, and that includes our weapon systems,” she insisted.
The Pentagon’s transparent embrace of AI in warfare raises unsettling ethical questions. While Plumb emphasized the importance of human oversight, her admission that AI is actively used to “dream up scenarios that would require lethal force” paints a dystopian picture of the future of combat.

The reassurance of human involvement may not ease all concerns, especially given the rapid advancements in autonomous weaponry. If a future administration, such as a second Donald Trump administration, were to push for reduced oversight, the balance between ethical considerations and operational efficiency could tip dangerously toward the latter.