Pentagon Pressures Anthropic on AI Use

  • U.S. Defense Secretary Pete Hegseth has given Anthropic a deadline to allow unrestricted military use of its AI systems or risk losing its government contract.
  • The company has resisted demands that conflict with its ethical guidelines, particularly around autonomous targeting and domestic surveillance.
  • The dispute highlights growing tensions between national security priorities and AI safety concerns.

Military Pushes for Broader Access to AI Tools

Defense Secretary Pete Hegseth reportedly told Anthropic CEO Dario Amodei that the company must open its AI technology to unrestricted military use by Friday. The warning came during a meeting in Washington, where officials said the Pentagon could classify Anthropic as a supply‑chain risk or invoke the Defense Production Act to gain broader authority over its systems. Anthropic, which develops the Claude chatbot, is the only major AI firm that has not yet supplied its technology to a new internal military network. Amodei has repeatedly expressed concern about AI being used for autonomous weapons or mass surveillance.

Defense officials argued that military operations require tools without built‑in limitations. They said the Pentagon issues only lawful orders and that legal compliance would be the military’s responsibility. The meeting was described as cordial, but Amodei did not shift on two core boundaries: no fully autonomous targeting and no domestic surveillance of U.S. citizens. These restrictions have become a central point of contention between the company and the Department of Defense.

Anthropic’s Unique Position Among AI Contractors

The Pentagon awarded contracts last summer to Anthropic, Google, OpenAI and xAI, each worth up to $200 million. Anthropic became the first AI company approved for classified military networks, where it collaborates with partners such as Palantir. The other companies currently operate only in unclassified environments. By early 2026, Hegseth was publicly praising only Google and xAI, saying he would disregard AI models that impose constraints on military operations.

He later announced that xAI’s Grok chatbot would join the Pentagon’s GenAI.mil network. The decision came shortly after Grok faced global criticism for generating sexually explicit deepfake images without consent. OpenAI also agreed in February to provide a custom version of ChatGPT for unclassified military tasks. Anthropic, however, has maintained a more cautious stance, emphasizing safety and responsible deployment.

Ethical Commitments Create Political Tensions

Anthropic has long positioned itself as a safety‑focused AI company, founded by former OpenAI researchers who left over concerns about responsible development. The Pentagon dispute is testing that identity: analysts say the company risks losing influence if it refuses to align with military expectations. Other major AI firms, including Meta, Google and xAI, have accepted the Pentagon's policy of making models available for all lawful applications, leaving Anthropic with limited leverage as the Department of Defense accelerates its adoption of AI systems.

The company has also clashed with the Trump administration on several policy fronts. It criticized proposals to loosen export controls on AI chips and opposed certain state‑level regulatory efforts backed by the administration. Trump’s top AI adviser, David Sacks, accused Anthropic of fear‑based lobbying aimed at shaping regulation. Despite these disagreements, the company has attempted to present a bipartisan image by adding former Trump official Chris Liddell to its board.

Experts warn that the Pentagon’s rapid integration of AI highlights the need for stronger congressional oversight. Civil liberties groups have raised concerns about potential uses of AI in domestic surveillance, especially as the technology becomes more capable. The Brennan Center’s Amos Toh noted that the law is struggling to keep pace with technological change, emphasizing that the Department of Defense does not have unlimited authority. The Anthropic dispute may become a defining case in how the U.S. balances national security demands with emerging AI safety norms.


 
