
The USAF Reportedly Wants $6 Billion To Build Unmanned Planes Flown By AI

With an audacious $6 billion budget proposal, the US Air Force is making a serious push into the world of unmanned aerial vehicles (UAVs). The money marks an important step in the military’s pursuit of cutting-edge technologies and would fund the research, development, and production of at least 1,000 unmanned aircraft flown by AI pilots, and possibly more.

One of the prominent contenders in this endeavor is the XQ-58A Valkyrie. Designed to function as a robotic wingman alongside human-piloted aircraft, the Valkyrie can provide strategic cover and maneuver in scenarios that might pose challenges for human pilots. Its capabilities extend to missions so dangerous that a human pilot would be unlikely to survive them.

The Valkyrie’s forthcoming tests are keenly anticipated. In a simulated mission over the Gulf of Mexico, the UAV is expected to demonstrate its autonomous decision-making by devising a plan to pursue and eliminate a target, showcasing the advancement of AI-driven combat capabilities.

The Valkyrie has extraordinary capabilities, including a top speed of 550 mph, a maximum operational altitude of 45,000 feet, and a range of 3,000 nautical miles. The aircraft builds on earlier research projects aimed at expanding the Air Force’s capabilities.

The proposed budget includes $5.8 billion in planned spending over five years for the development of collaborative combat aircraft, including systems like the Valkyrie. These efforts follow extensive test flights in which the Valkyrie served as a datalink for various aircraft and participated in the Air Force’s Skyborg program, which focuses on AI-enabled control of UAVs.

While the push for unmanned aerial vehicles promises military advantages, it also raises ethical and moral concerns. Critics, including human rights advocates and organizations like the Future of Life Institute, emphasize the potential dangers of autonomous weapons systems, dubbing them “slaughterbots.” Concerns range from rapid conflict escalation to the risk of creating weapons of mass destruction, ultimately challenging established norms of warfare.

Leaders from around the world have voiced concern over the use of this technology. António Guterres, the secretary-general of the United Nations, has previously argued that machines with the power to end human lives without human intervention are morally and politically unacceptable and that such technology should be banned internationally.

Ultimately, the US Air Force’s ambitious pursuit of AI-piloted unmanned aircraft is evidence of the military’s determination to remain at the forefront of technological advancement. However, the moral ramifications and possible consequences of these developments underscore the need for careful deliberation and global dialogue on the future of combat.
