
Google has released a key document providing some information about how its latest AI model, Gemini 2.5 Pro, was built and tested, three weeks after it first made that model publicly available as a “preview” version.

AI governance experts had criticized the company for releasing the model without publishing documentation detailing safety evaluations it had carried out and any risks the model might present, in apparent violation of promises it had made to the U.S. government and at multiple international AI safety gatherings.

A Google spokesperson said in an emailed statement that any suggestion that the company had reneged on its commitments was “inaccurate.”

The company also said that a more detailed “technical report” would come later, when it makes a final version of the Gemini 2.5 Pro “model family” fully available to the public.

But the newly published six-page model card has also been faulted by at least one AI governance expert for providing “meager” information about the safety evaluations of the model.

Kevin Bankston, a senior advisor on AI Governance at the Center for Democracy and Technology, a Washington, D.C.-based think tank, said in a lengthy thread on social media platform X that the late release of the model card and its lack of detail was worrisome.

“This meager documentation for Google’s top AI model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market,” he said.

He said the late release of the model card and its lack of key safety evaluation results—for instance, details of “red-teaming” tests designed to trick the AI model into serving up dangerous outputs like bioweapon instructions—suggested that Google “hadn’t finished its safety testing before releasing its most powerful model” and that “it still hasn’t completed that testing even now.”

Bankston said another possibility is that Google had finished its safety testing but has a new policy that it will not release its evaluation results until the model is released to all Google users. Currently, Google is calling Gemini 2.5 Pro a “preview,” which can be accessed through its Google AI Studio and Google Labs products, with some limitations on what users can do with it. Google has also said it is making the model widely available to U.S. college students.

The Google spokesperson said the company would release a more complete AI safety report “once per model family.” Bankston said on X that this might mean Google would no longer release separate evaluation results for the fine-tuned versions of its models that it releases, such as those tailored for coding or cybersecurity. This could be dangerous, he noted, because fine-tuned versions of AI models can exhibit behaviors that are markedly different from the “base model” from which they’ve been adapted.

Google is not the only AI company seemingly retreating on AI safety. Meta’s model card for its newly released Llama 4 AI model is of similar length and detail to the one Google just published for Gemini 2.5 Pro, and it was also criticized by AI safety experts. OpenAI said it was not releasing a technical safety report for its newly released GPT-4.1 model because the model was “not a frontier model,” since the company’s “chain of thought” reasoning models, such as o3 and o4-mini, beat it on many benchmarks. At the same time, OpenAI touted GPT-4.1 as more capable than its GPT-4o model, whose safety evaluation had shown it could pose certain risks, although the company had said these were below the threshold at which the model would be considered unsafe to release. Whether GPT-4.1 might now exceed those thresholds is unknown, since OpenAI said it does not plan to publish a technical report.

OpenAI did publish a technical safety report for its new o3 and o4-mini models, which were released on Wednesday. But earlier this week it also updated its “Preparedness Framework,” which describes how the company will evaluate its AI models for critical dangers—everything from helping someone build a biological weapon to the possibility that a model will begin to self-improve and escape human control—and seek to mitigate those risks. The update eliminated “Persuasion”—a model’s ability to manipulate a person into taking a harmful action or to convince them to believe misinformation—as a risk category that the company would assess during its pre-release evaluations. It also changed how the company would make decisions about releasing higher-risk models, including saying the company would consider shipping an AI model that posed a “critical risk” if a competitor had already debuted a similar model.

Those changes divided opinion among AI governance experts, with some praising OpenAI for being transparent about its process and for providing better clarity around its release policies, while others alarmed by them.

This story was originally featured on Fortune.com