Google, Microsoft, and xAI Agree to Give US Government Early Access to AI Models

Google, Microsoft, and xAI have agreed to provide the US government with early access to their AI models for evaluation and safety enhancement before public release. This expands an existing initiative.


Google, Microsoft, and xAI have agreed to grant the US government early access to their artificial intelligence models, allowing their capabilities to be evaluated and their safety improved before public release.

These agreements expand an initiative that OpenAI and Anthropic had already joined, both of which had previously allowed preliminary reviews of their models. The evaluations are conducted by the US Department of Commerce’s Center for AI Standards and Innovation (CASI), the agency announced on Tuesday.

OpenAI and Anthropic have renewed their existing partnership agreements with the center to better align them with the priorities of President Donald Trump’s AI Action Plan. Since 2024, CASI has already evaluated over 40 AI models, including cutting-edge ones that have not yet been released, according to the statement.

CASI’s Expanding Mandate

The new agreements are being unveiled amid concerns among US officials regarding Anthropic’s Mythos system, signaling an expanded mandate for the relatively new center. The center, established in 2023 as the AI Safety Institute under President Joe Biden and renamed by the Trump administration last year, refers to itself as “the industry’s primary point of contact in the US government” for testing, collaborative research, and the development of best practices.

CASI’s existence is not yet enshrined in law, though some US lawmakers have already introduced draft legislation to grant the center permanent status. CASI Director Chris Fall, who took over the center after the sudden dismissal of Collin Burns, a former AI researcher from Anthropic, noted that “these expanded industry collaborations help scale our work in the public interest at a critical moment.”

Trump Administration’s AI Plans

The new model evaluation agreements follow reports from the New York Times and the Wall Street Journal that the Trump administration is considering an executive order to establish a government process for vetting AI tools, which the publications said would constitute a form of oversight. A White House representative dismissed discussions of potential executive orders as speculation, stating that any announcement would come directly from Trump.

Trump’s AI Action Plan, unveiled in July, stipulates that CASI should become part of a so-called “AI evaluation ecosystem” and lead national security-related AI model evaluations. The plan also adds that regulators should “explore the use of evaluations in applying existing legislation to AI systems.” A broader evaluation of models by the center could pave the way for new applications of existing laws.

The administration’s efforts to shape AI policy accelerated after Anthropic announced last month that its breakthrough Mythos model adeptly identifies cybersecurity vulnerabilities. The White House has already opposed Anthropic’s plan to expand access to its Mythos model.

Source: Bloomberg Technology
