EXPLAINER

Anthropic vs the Pentagon: Why AI firm is taking on Trump administration

Anthropic was the first AI developer whose tools were used in classified operations by the US Defense Department.

Anthropic CEO Dario Amodei has been ordered to loosen terms of use for the firm’s AI products by the US government [File: Ludovic Marin/AFP]

By Sarah Shamim, Reuters and The Associated Press

Published On 25 Feb 2026


A row is simmering between the United States government and Anthropic, one of the tech companies that develops artificial intelligence (AI) tools for defence and civilian uses.

According to recent reports, Anthropic’s Claude software was used in a US military operation that resulted in the abduction of Venezuelan President Nicolas Maduro in January this year.

US Defense Secretary Pete Hegseth has given the company until Friday to loosen its rules about how its AI tools can be used by the Pentagon, or risk losing its government contract, The Associated Press and Reuters news agencies reported on Tuesday, quoting unnamed sources.

But Anthropic is refusing to back down over safeguards which prevent its technology from being used to conduct US domestic surveillance and to programme autonomous weapons which can hit targets without human intervention.

What is Anthropic?

Anthropic is an AI company founded in 2021 by former OpenAI executives.

It was the first AI developer whose tools were used in classified operations by the US Defense Department, which is headquartered at the Pentagon in Arlington, Virginia, just outside Washington, DC.

Anthropic is best known for building Claude, a popular large language model (LLM), and has rapidly become one of the most prominent AI development companies.

An LLM is a type of AI system which generates text – and, in some cases, visual or audio output – similar to content created by humans, after being trained on massive datasets such as books, archives, websites, pictures and videos.

For military and defence use, LLMs can summarise large volumes of text, analyse data, translate, transcribe and draft memos. In theory, they can also be used to support autonomous or semi-autonomous weapons systems, which can identify and hit targets without the need for human instruction. However, most AI companies have terms that prohibit this use.


Anthropic positions itself as a “responsible” developer in the AI landscape. On its website, the company describes itself as a “Public Benefit Corporation” committed to the “responsible development and maintenance of advanced AI for the long-term benefit of humanity”.

In November, the company alleged that a Chinese state-sponsored hacking group had manipulated Claude Code, its AI coding tool, in an attempt to infiltrate about 30 targets globally, including government agencies, chemical companies, financial institutions and tech giants. Some of these attempts succeeded.

Earlier this month, Mrinank Sharma, an AI safety researcher at Anthropic, resigned from his position over concerns about the use of AI.

In a statement posted on his X account on February 9, Sharma wrote: “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”

“Moreover, throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions. I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too,” he added.

Which other AI companies does the US military work with?

The Pentagon announced last summer that it was awarding defence contracts to four AI companies – Anthropic, Google, OpenAI and xAI. Each contract is worth up to $200m.

Anthropic was the first AI company to be approved for classified military networks, on which it reportedly works with partners such as US software company Palantir Technologies, which has been criticised for its links to the Israeli military. Grok, the chatbot operated by Elon Musk’s xAI, is also ready to be used in classified settings, according to an unnamed senior Pentagon official, AP reported.

But the Trump administration wants to be able to use the products of these AI companies without restrictions. Hegseth said his vision for military AI systems means that they operate “without ideological constraints that limit lawful military applications”, before adding that the Pentagon’s “AI will not be woke”.

Why is Anthropic at odds with the Pentagon?

Sources reported that at a meeting on Tuesday, Hegseth gave Anthropic CEO Dario Amodei until Friday, 5pm (22:00 GMT) to agree to provide Anthropic’s AI models for use on the Pentagon’s new internal network with fewer restrictions.

Officials at the US Defense Department warned that they could designate Anthropic a supply chain risk, or invoke the Defense Production Act to give the military broader authority to use the company’s products even if Anthropic does not approve of how they are used. That is according to a person familiar with the meeting and a senior Pentagon official, neither of whom was authorised to comment publicly and who spoke on condition of anonymity, AP reported.


Amodei has also previously raised ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent.

“A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow,” he wrote in an essay last month.

The person familiar with the meeting called its tone “cordial” but said Amodei refused to budge on two key issues – fully autonomous military targeting operations and domestic surveillance of US citizens.

In a podcast appearance on Tuesday in which he explained his refusal to give in to the Pentagon’s demands, Amodei reiterated his concerns around “autonomous drone swarms” – groups of drones that can attack targets without human input – and mass surveillance.

“The constitutional protections in our military structures depend on the idea that there are humans who would disobey illegal orders with fully autonomous weapons,” Amodei said, noting that autonomous drones would not be able to make such a distinction.

The Pentagon objects to Anthropic’s ethical restrictions because military operations require tools which do not have built-in limitations, the senior Pentagon official said. The official argued that the Pentagon has issued only lawful orders and stressed that using Anthropic’s tools legally would be the military’s responsibility.

How was Claude used in Venezuela?

On January 3, US special forces abducted Maduro, who remains in US custody and faces trial on drugs and weapons charges in New York.

US media reports revealed on February 14 that Anthropic’s Claude had been used in the operation to strike Caracas and capture Maduro.

An unnamed Anthropic official approached by The Wall Street Journal declined to comment on whether Claude, or any other AI model, was used in any operation. However, the official did say that any use of Claude in the private sector or by the government would need to be in compliance with Claude’s usage policies.

According to the usage policies listed on Anthropic’s website, Claude cannot be used for surveillance, the development of weapons or “inciting violence”.

A total of 83 people, including 47 Venezuelan soldiers, were killed during the US special operation in Venezuela.

US media have also reported that Anthropic has partnered with Palantir Technologies, whose tools are also used by the Defense Department and by federal law enforcement agencies.

It is unclear how exactly Claude was used during the raid on Caracas in January, but AI tools can be used to control drones, analyse images and summarise intercepted communications.

In July 2025, Francesca Albanese, the United Nations special rapporteur on human rights in the occupied Palestinian territory, released a report mapping the corporations aiding Israel in the displacement of Palestinians and its genocidal war on Gaza, in breach of international law.


The report found that Palantir had expanded its support to the Israeli military since the start of its genocidal war on Gaza in October 2023.