Pentagon Faces Pushback Over Claude AI Ban
Pete Hegseth, Pentagon
- Pentagon staff and contractors are resisting an order to remove Anthropic’s Claude from military systems.
- Users argue the tool outperforms alternatives and fear productivity losses during the transition.
- The dispute highlights tensions between rapid AI adoption and political oversight.
Internal Resistance to the Claude Phase‑Out
The Pentagon’s decision to phase out Anthropic’s Claude AI tools is encountering significant resistance from personnel who rely on the system for daily operations. Defense Secretary Pete Hegseth designated Anthropic a supply‑chain risk on March 3, following disagreements over usage guardrails, triggering a six‑month removal period. Many staff members, contractors and former officials say they are reluctant to abandon Claude, which they view as more capable and reliable than competing models. Several users privately described the directive as disruptive and unnecessary.
Some IT specialists argue that the ban undermines progress made in integrating AI into military workflows. They note that Claude had become widely accepted among operators who had previously been hesitant to adopt new technologies. Others report that alternative models, including xAI’s Grok, often produce inconsistent results, making them less suitable for sensitive or time‑critical tasks. These concerns have led some users to delay compliance in hopes that the dispute will be resolved before the deadline.
Replacing Claude is expected to be a complex and time‑consuming process. Systems built around Anthropic’s tools must undergo recertification before they can operate on military networks, a procedure that can take months. Contractors warn that the transition could slow operations and introduce new risks if replacement tools are not fully vetted. Several individuals familiar with the matter say the Pentagon underestimated the scale of the disruption.
The Defense Department, Anthropic and xAI have not commented publicly on the situation. Many of those involved in the transition spoke anonymously because they were not authorized to discuss internal deliberations. Their accounts suggest that the phase‑out is progressing unevenly across departments. Some teams are complying strictly, while others are quietly preparing to revert to Claude if the ban is lifted.
Operational Impact Across Military Systems
Claude has become deeply embedded in U.S. military operations since Anthropic secured a $200 million defense contract in July 2025. The model was the first AI system approved for use on classified networks, and adoption grew rapidly as personnel integrated it into tasks ranging from intelligence analysis to software development. Officials familiar with its deployment say it played a role in U.S. operations during the conflict with Iran, underscoring its strategic importance. Despite the blacklisting, sources indicate that some units continue to use Claude in limited capacities.
The removal of Claude is already affecting day‑to‑day work. Tasks that were previously automated—such as querying large datasets—are now being performed manually with tools like Microsoft Excel. Developers who relied on Claude Code to generate and review software are particularly frustrated, as they must now rebuild workflows without the tool. One senior official acknowledged the inconvenience but argued that teams should avoid becoming dependent on a single AI system.
Replacing Claude in complex platforms will require substantial reengineering. Palantir’s Maven Smart System, which supports intelligence analysis and weapons targeting, uses multiple workflows built with Claude Code. According to individuals familiar with the system, Palantir will need to substitute Claude with another model and rebuild parts of its software. This process could take months and may temporarily reduce the platform’s efficiency.
Contractors say the Pentagon has instructed major defense firms to assess their reliance on Anthropic products and begin winding them down. Some organizations are choosing to move slowly, anticipating that the government and Anthropic may reach an agreement before the six‑month deadline. Others are evaluating whether to shift to OpenAI, Google or xAI, though each option requires its own certification process.
Strategic and Political Tensions
The dispute reflects broader tensions between rapid AI adoption and political oversight within the Defense Department. AI tools have become essential for tasks involving classified information, operational planning and data analysis. Removing a widely used system introduces uncertainty at a time when the military is increasingly dependent on automated support. Some officials worry that the transition could reduce productivity and slow decision‑making during critical operations.
Certification requirements add another layer of complexity. According to Joe Saunders, CEO of RunSafe Security, replacing an existing AI system with a new one can require 12 to 18 months of recertification. He notes that the process is costly and can significantly reduce productivity during the transition. These challenges have led some teams to delay the phase‑out in hopes of a policy reversal.
Political dynamics also play a role. Analysts say the dispute illustrates the tension between operational needs and policy decisions made at higher levels of government. Roger Zakheim, director of the Ronald Reagan Presidential Foundation and Institute, described the situation as a clash between practical adoption and political considerations. His comments reflect a broader concern that rapid policy shifts may hinder long‑term modernization efforts.
Whether the Pentagon ultimately reinstates Anthropic remains uncertain. The outcome will depend on negotiations between the company and the government, as well as the willingness of military leaders to reconsider the supply‑chain designation. For now, the transition continues unevenly, with many users hoping for a resolution that allows them to return to the tools they prefer.
Anthropic’s Claude became the first AI model approved for use on classified U.S. military networks—a milestone that set it apart from competitors and accelerated its adoption across defense agencies. This early certification helped establish Claude as a trusted tool for sensitive operations, making its removal particularly disruptive for teams that built entire workflows around it.
