A group of 60 U.K. lawmakers has signed an open letter accusing Google DeepMind of violating its commitments to AI safety with the release of Gemini 2.5 Pro. The letter, published by political activist group PauseAI, accuses the AI company of breaking the Frontier AI Safety Commitments it signed at an international summit in 2024 by not releasing the AI model with key safety information.
At an international summit co-hosted by the U.K. and South Korea in February 2024, Google and other signatories promised to “publicly report” their models’ capabilities and risk assessments, as well as disclose whether outside organizations, such as government AI safety institutes, had been involved in testing.
However, when Google released Gemini 2.5 Pro in March 2025, it failed to publish a model card, the document that details key information about how models are tested and built. This was despite the company’s assertions that the new model outperformed competitors on industry benchmarks by “meaningful margins.” Instead, the AI lab released a simplified six-page model card three weeks after it first made the model publicly available as a “preview” version. At the time, one AI governance expert called this report “meager” and “worrisome.”
The letter called Google’s delay a “failure to honour” the company’s commitment at the summit and “a troubling breach of trust with governments and the public.” The letter also took issue with what it called a “minimal ‘model card’” that lacked “any substantive detail about external evaluations,” as well as Google’s refusal to confirm whether government agencies like the U.K. AI Security Institute participated in testing.
A spokesperson for Google DeepMind previously told Fortune that any suggestion that the company had reneged on its commitments was “inaccurate.” The company also said in May that a more detailed “technical report” would follow once it made the final version of the Gemini 2.5 Pro “model family” fully available to the public. The company appeared to provide a longer report in late June, months after the full version was released.
The company did not immediately respond to Fortune’s request for comment about the recent letter. However, as a spokesperson told Time: “We’re fulfilling our public commitments, including the Seoul Frontier AI Safety Commitments.”
“As part of our development process, our models undergo rigorous safety checks, including by UK AISI and other third-party testers—and Gemini 2.5 is no exception,” they added.
When Google first released the preview version of Gemini 2.5 Pro, critics said that the missing system card appeared to violate several other pledges the AI company had made, including the 2023 White House Commitments and a voluntary Code of Conduct on Artificial Intelligence signed in October 2023.
Google wasn’t the only company to sign these pledges and then appear to pull back on safety disclosures. Meta’s model card for its frontier Llama 4 model was about as brief and limited in detail as the one Google released for Gemini 2.5 Pro, and it, too, drew criticism from AI safety researchers.
Earlier this year, OpenAI announced it would not publish a technical safety report for its new GPT-4.1 model. The company argued that GPT-4.1 is “not a frontier model,” since its reasoning-focused systems like o3 and o4-mini outperform it on many benchmarks.
The recent letter calls on Google to reaffirm its commitments to AI safety, asking the tech company to define deployment clearly as the point when a model becomes publicly accessible, commit to publishing safety evaluation reports on a set timeline for all future model releases, and provide full transparency for each release by naming the government agencies and independent third parties involved in testing, along with the exact testing timelines.
“If leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards,” Lord Browne of Ladyton, a Member of the House of Lords and one of the letter’s signatories, said in a statement.
By Beatrice Nolan, Fortune.