Meta CEO Mark Zuckerberg has long championed the idea of making artificial general intelligence (AGI) openly available to the world. AGI, broadly defined as AI capable of performing any task a human can, remains an ambitious frontier in technology. However, despite Meta’s commitment to openness, a newly released policy document outlines scenarios in which the company might withhold certain AI systems over security concerns.
Dubbed the Frontier AI Framework, this document categorizes AI systems into two risk levels: “high risk” and “critical risk.” The former includes AI models that could facilitate cyberattacks or contribute to chemical and biological threats, while the latter encompasses systems whose misuse could lead to devastating and uncontrollable consequences. Meta cites potential dangers such as the full compromise of a secure corporate network or the spread of advanced biological weapons. While these examples offer insight into the risks Meta perceives, the company acknowledges that the list is not exhaustive and is meant to capture the threats it considers most urgent.

One intriguing aspect of Meta’s approach is its method for assessing AI risk. Rather than relying on standardized empirical tests, the company bases its risk classifications on expert evaluations from internal and external researchers, overseen by high-level decision-makers. Meta defends this approach, stating that the field of AI safety lacks “sufficiently robust” scientific methods to establish precise risk metrics.
If an AI system is deemed high-risk, Meta intends to restrict access and implement safety measures before any public release. For critical-risk systems, however, the company plans to halt development altogether until sufficient safeguards can be introduced. Additionally, Meta commits to enhancing security measures to prevent unauthorized access to such sensitive AI models.
The release of the Frontier AI Framework appears to be a strategic move in response to growing concerns about Meta’s open AI development philosophy. Unlike OpenAI, which limits access to its models through controlled APIs, Meta has taken a more open approach, albeit without fully embracing open-source principles. While this strategy has fueled the rapid adoption of its AI models—the Llama family alone has racked up hundreds of millions of downloads—it has also raised concerns. Reports indicate that at least one U.S. adversary has leveraged Meta’s AI to develop a military-oriented chatbot.

Meta’s publication of its framework may also serve to distinguish its approach from that of the Chinese AI firm DeepSeek. While DeepSeek similarly makes its AI widely available, it reportedly lacks strong safeguards, making it susceptible to misuse for generating harmful content.
In its policy document, Meta emphasizes the importance of balancing innovation with safety. The company states, “We believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI, it is possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk.”