In a groundbreaking development, scientists in China, specifically a team from the People’s Liberation Army (PLA) Strategic Support Force, have reportedly been training a military artificial intelligence (AI) system with capabilities similar to ChatGPT. The project aims to enhance the AI’s ability to predict the behavior of human adversaries. Unlike previous endeavors, this initiative involves utilizing commercial large language models (LLMs), specifically Baidu’s ERNIE and iFlyTek’s Spark, alongside extensive sensor data and reports from frontline units.
According to reports by the South China Morning Post (SCMP), the project was conducted at the PLA’s Information Engineering University under a team led by Sun Yifeng. The researchers fed the military AI vast amounts of sensor data and frontline reports, which it converted into descriptive language or images and sent to the commercial LLMs for analysis. The military AI then generated prompts for further rounds of discussion, particularly in tasks such as combat simulations. Notably, the entire process is automated, requiring no human intervention.
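The automated loop described by SCMP could be sketched roughly as follows. This is a minimal illustration only: the paper does not disclose the actual interface to ERNIE or Spark, so every function and name below is a hypothetical stand-in.

```python
# Hypothetical sketch of the reported pipeline: sensor data is turned
# into a descriptive-language prompt, sent to a commercial LLM, and
# each reply seeds the next round's prompt -- with no human in the loop.

def query_llm(prompt: str) -> str:
    """Stand-in for a commercial LLM endpoint (e.g. ERNIE); not a real API."""
    return f"analysis of: {prompt}"

def describe_sensors(readings: dict) -> str:
    """Convert raw sensor readings into a descriptive-language prompt."""
    return "; ".join(f"{k}={v}" for k, v in readings.items())

def dialogue_rounds(readings: dict, rounds: int = 3) -> list:
    """Run several automated prompt/response rounds, feeding each
    LLM reply back into the prompt for the next round."""
    history = []
    prompt = describe_sensors(readings)
    for _ in range(rounds):
        reply = query_llm(prompt)
        history.append(reply)
        prompt = f"given {reply}, what is the adversary's next move?"
    return history
```

The key design point reported by SCMP is the closed loop: the military AI both formulates the queries and consumes the answers, so the dialogue can run for several rounds unattended.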
The researchers highlight the potential benefits of this project in a peer-reviewed paper published in the Chinese academic journal Command Control & Simulation. They argue that the collaboration between humans and machines can enhance decision-making processes, refine the AI’s combat knowledge reserve, and improve its combat cognition level. The researchers emphasize the importance of making military AI more humanlike to better understand commanders’ intentions, particularly in the face of the unpredictable nature and adaptability of human adversaries.
While the team did not disclose specific details about the connection between the military AI and the commercial LLMs, they stress that the research is preliminary and conducted solely for academic purposes. In an experiment simulating a US military invasion of Libya in 2011, the military AI successfully predicted the next moves of the US military after several rounds of dialogue with ERNIE.
Despite the impressive results, the researchers acknowledge that commercial LLMs have limitations in military applications, as they are not specifically designed for warfare. To address this, the team experimented with multi-modal communication, using military AI to create a map analyzed by iFlyTek’s Spark, improving the LLMs’ performance.
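The multi-modal step, in which the military AI produces a map for Spark to analyze alongside text, might be sketched like this. The map encoding and the query function are illustrative placeholders, not the team’s actual method.

```python
# Hypothetical sketch of the multi-modal step: unit positions are
# rendered as a simple grid map, which a vision-capable LLM (Spark,
# per the report) would analyze together with a text question.

def render_map(units: dict, size: int = 5) -> str:
    """Render unit positions as a simple ASCII grid map."""
    grid = [["." for _ in range(size)] for _ in range(size)]
    for label, (x, y) in units.items():
        grid[y][x] = label[0].upper()  # mark each unit by its initial
    return "\n".join("".join(row) for row in grid)

def query_multimodal_llm(map_text: str, question: str) -> str:
    """Stand-in for a multi-modal LLM call taking a map plus a question."""
    return f"{question} | map:\n{map_text}"
```

Supplying a structured map rather than free text is, per the paper’s reported claim, what improved the commercial LLMs’ performance on these military tasks.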
The news has raised concerns among some experts, with a computer scientist from Beijing warning that military use of AI is inevitable but must be approached with caution. The fear of unintended consequences, reminiscent of scenarios depicted in science fiction such as the Terminator films, underscores the need for careful consideration and ethical guidelines in the development and application of military AI.
It is crucial to note that the researchers’ use of LLMs relies on publicly available chatbots, and there is no indication of collaboration or tailored services from the LLM providers in support of the military project. Despite the potential benefits, the intersection of AI and military applications demands a measured and thoughtful approach to prevent unintended consequences.