Trump Moves to Cut Anthropic From Pentagon
- President Donald Trump has ordered U.S. agencies to stop working with Anthropic, with a six-month phase-out period.
- The Pentagon plans to designate the AI startup a supply-chain risk, potentially barring contractors from using its technology.
- Anthropic says it will challenge the move in court, as rival OpenAI announces a separate defense agreement.
U.S. President Donald Trump on Friday directed federal agencies to cease work with Anthropic, escalating a dispute over the role of artificial intelligence in military operations. A six-month transition period will apply to the Defense Department and other agencies currently using the company’s systems. Trump stated that failure to cooperate with the transition could prompt the use of executive authority, including potential civil and criminal consequences. The decision marks a sharp policy turn affecting one of the country’s most prominent AI developers.
The United States Department of Defense, which the administration has renamed the Department of War, said it would classify Anthropic as a supply-chain risk following months of negotiations over guardrails for military AI use. Such a designation could prevent tens of thousands of contractors from deploying Anthropic’s models in Pentagon-related projects. Legal analysts described the measure as potentially existential for the company’s government business. Financial backing from firms including Google and Amazon underscores Anthropic’s prominence within the U.S. technology sector.
Dispute Over AI Guardrails
Defense Secretary Pete Hegseth said the impasse centered on whether corporate policies should limit battlefield applications of AI. During recent meetings, Anthropic chief executive Dario Amodei reportedly advocated restrictions on autonomous weapons and mass surveillance capabilities. Pentagon officials maintained that U.S. law, rather than private contractual conditions, should define permissible military conduct. Differences over these principles ultimately triggered the supply-chain risk designation process.
Anthropic said it would challenge any formal designation in court, arguing that the move would lack a legal basis and set a precedent affecting companies that negotiate with federal agencies. Company representatives also stated that intimidation would not alter their stance on domestic surveillance or fully autonomous weapons. Observers noted that such blacklisting has previously been reserved for foreign adversaries. A comparable approach was taken against Huawei, which was removed from Pentagon supply chains beginning in 2017.
OpenAI Steps In
Later the same day, rival OpenAI announced a separate agreement to deploy its technology within the Defense Department’s classified network. Chief executive Sam Altman said on X that the contract incorporated Pentagon principles emphasizing human responsibility in weapons systems and rejecting mass domestic surveillance. Technical safeguards would be implemented to align model behavior with those commitments, according to the company. Details of how these terms compare to Anthropic’s proposed conditions remain unclear.
The Pentagon has signed agreements worth up to $200 million each with several major AI laboratories over the past year, including Anthropic, OpenAI and Google. Those arrangements signal the growing integration of advanced machine learning into national security operations. At the same time, tensions between Silicon Valley and defense agencies have persisted since at least 2018, when employees at Google protested participation in Project Maven, the military’s drone analytics initiative. Competitive bidding among cloud providers and AI firms has since intensified.
Broader Implications
Policy specialists characterized the designation as unusually severe in a domestic context. Saif Khan, a former National Security Council official under Joe Biden, described it as potentially the most restrictive AI-related action taken by a U.S. government against an American company. Concerns about automated warfare have gained visibility as conflicts in Ukraine and Gaza feature increasingly autonomous systems. Former Project Maven director Jack Shanahan warned that removing corporate guardrails could heighten anxiety about due process and civilian harm.
Anthropic has pursued defense and intelligence contracts as it competes in a crowded AI market and considers a possible initial public offering. The company has said no final IPO decision has been made. Its models are already used across parts of the U.S. intelligence community, and it was among the first AI firms to handle classified information through a cloud arrangement with Amazon. A formal supply-chain risk label could therefore reverberate beyond federal procurement, potentially affecting private-sector partnerships as well.
An additional point of interest concerns the legal framework governing AI deployment in warfare. Current U.S. statutes do not explicitly prohibit combining disparate datasets to infer sensitive personal information, an issue Anthropic leadership has publicly raised. Rapid advances in direct model deployment within secure networks may outpace legislative updates, leaving courts to interpret boundaries. How this confrontation unfolds could shape the balance between federal authority and corporate influence in the governance of advanced AI systems.
