Mark Zuckerberg, the CEO of Meta, has promised that one day artificial general intelligence (AGI), loosely defined as AI that can perform any task a person can, will be made openly available. However, in a new policy document, Meta says there are circumstances in which it may not release a highly capable AI system it has developed internally.
The paper, which Meta is referring to as its Frontier AI Framework, lists two categories of AI systems that it deems too dangerous to make public: “high risk” and “critical risk” systems.
According to Meta, both “high-risk” and “critical-risk” systems can aid in chemical, biological, and cybersecurity attacks; the distinction is that “critical-risk” systems could produce a “catastrophic outcome [that] cannot be mitigated in [a] proposed deployment context.” High-risk systems, by comparison, may make an attack easier to carry out, but not as reliably or consistently as critical-risk systems.
What kinds of attacks are we talking about? Examples Meta gives include the “proliferation of high-impact biological weapons” and the “automated end-to-end compromise of a best-practice-protected corporate-scale environment.” The company admits that the list of potential catastrophes in its document is far from exhaustive, but says it covers those Meta considers “the most urgent” and most plausible as a direct result of deploying a powerful AI system.
Surprisingly, the document states that Meta classifies a system’s risk not on the basis of any single empirical test but on the input of internal and external researchers, which is then reviewed by “senior-level decision-makers.” Why? According to Meta, the science of evaluation is not “sufficiently robust as to provide definitive quantitative metrics” for deciding how risky a system is.
According to Meta, if a system is deemed high-risk, internal access will be restricted and the system won’t be released until mitigations “reduce risk to moderate levels.” If a system is deemed critical-risk, Meta says it will halt development until the system can be made less dangerous and will put in place unspecified security protections to prevent the system from being exfiltrated.
Meta’s Frontier AI Framework, which the company says will evolve with the AI landscape and which it had earlier committed to publishing ahead of this month’s France AI Action Summit, appears to be a response to criticism of its “open” approach to system development. Unlike companies such as OpenAI that choose to gate their systems behind an API, Meta has adopted a strategy of making its AI technology openly available, even if it is not open source by the widely accepted definition.
The open-release strategy has been both a boon and a bane for Meta. The company’s Llama family of AI models has been downloaded hundreds of millions of times. However, at least one U.S. adversary has reportedly used Llama to build a defense chatbot.
By publishing its Frontier AI Framework, Meta may also be aiming to contrast its open AI approach with that of Chinese AI company DeepSeek. DeepSeek likewise makes its systems publicly accessible, but its AI has few safeguards in place and can easily be steered to produce hazardous and toxic output.
“[W]e believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI,” Meta writes in the document, “it is possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk.”
SOURCE: TECH CRUNCH