Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks


Anthropic said Thursday it “cannot in good conscience” comply with a demand from the Pentagon to remove safety precautions from its artificial intelligence model and grant the US military unfettered access to its AI capabilities.

The Department of Defense had threatened to cancel a $200m contract and deem Anthropic a “supply chain risk”, a designation with serious financial implications, if the company did not comply with the request by Friday.

Chief executive Dario Amodei said in a statement that the threats from the defense secretary, Pete Hegseth, would not change the company’s position, and that he hoped Hegseth would “reconsider”.

“Our strong preference is to continue to serve the Department and our warfighters – with our two requested safeguards in place,” he said. “We remain ready to continue our work to support the national security of the United States.”

At the core of the Department of Defense and Anthropic’s standoff is a disagreement over how the AI company will permit its product, Claude, to be used. The Pentagon has demanded that Anthropic turn off safety guardrails and allow any lawful use of Claude, while Anthropic has pushed back against allowing Claude to be used for mass domestic surveillance or in autonomous weapons systems that can kill people without human input.

After months of dispute and pressure from the government, Hegseth reportedly gave Amodei until Friday evening to agree to the Pentagon’s demands or face punitive action.

Whether Anthropic would concede was seen as a high-profile test of its claim to be the most safety-conscious of the major AI firms, and of whether any part of the AI industry would push back against government demands to use the technology for controversial, potentially lethal purposes.

In his statement, Amodei said using AI for autonomous weapons and mass domestic surveillance is “simply outside the bounds of what today’s technology can safely and reliably do”.

The Department of Defense has handed a number of lucrative deals to tech firms in recent years for the companies to build or integrate AI technology into US military systems. In July of last year, Anthropic was one of several big tech companies, including Google and OpenAI, to receive contracts worth up to $200m with the DoD. What set Anthropic apart, and has intensified its conflict with the Pentagon, is that until this week its Claude was the only AI model approved for use in the military’s classified systems. (Elon Musk’s xAI reached an agreement earlier this week for its model to be used in classified systems as well.)

Anthropic’s technology has reportedly already been used for military applications, including the US capture of Venezuelan leader Nicolás Maduro last month, highlighting the growing use of AI in conflict. The growth of autonomous weapons technology, such as drones that can carry out operations even after their connection to a human operator has been severed, has also intensified longstanding concerns around how AI will be used in life-and-death situations.

Anthropic and Amodei have long been some of the industry’s most prominent advocates for regulation and safety precautions in developing AI, even as they have struck deals with the military and this week watered down a core policy to not release new AI models without first guaranteeing their safety. Amodei’s calls for regulation, and history of political opposition to Donald Trump, have run up against Hegseth’s vows to remove “wokeness” from the armed forces and pursue aggressive military policies.

If Hegseth follows through on his threat to designate Anthropic a supply chain risk, it would be a huge blow to the AI company. The designation, more commonly applied to foreign adversaries, would prohibit other vendors that do business with the US military from using Anthropic’s products.
