Microsoft has made its position unmistakably clear by filing a court brief in support of Anthropic’s challenge to the Pentagon’s supply-chain risk designation, signaling that the technology industry will not stand aside as the military attempts to dictate the terms on which AI companies operate. Filed in federal court in San Francisco, the brief calls for a temporary restraining order to halt the effects of the Pentagon’s designation. Alongside Microsoft, Amazon, Google, Apple, and OpenAI have also formally backed Anthropic through their own court filing.
The conflict that led to this legal showdown began with a failed $200 million contract negotiation in which Anthropic insisted that its AI not be used for mass surveillance of Americans or for autonomous lethal weapons. When the Pentagon rejected these conditions and Defense Secretary Pete Hegseth applied the supply-chain risk label, Anthropic responded by filing two simultaneous lawsuits, one in California and one in Washington, DC. The company argued that the designation was both legally improper and constitutionally impermissible.
Microsoft’s involvement is grounded in practical necessity: the company integrates Anthropic’s AI into military systems it provides to the federal government and has billions of dollars in Pentagon contracts at stake. As a participant in the $9 billion Joint Warfighting Cloud Capability contract, Microsoft cannot afford to see one of its key AI suppliers effectively removed from the government market. The company publicly argued that reliable access to top AI technology and responsible governance were goals that required public-private collaboration.
Anthropic’s court filings argued that the supply-chain risk label, traditionally reserved for firms with ties to adversarial nations, was being applied as ideological punishment for the company’s publicly stated safety positions. The company revealed that it does not believe Claude is currently safe or reliable enough for lethal autonomous operations, which it said was the real basis for its contract demands. The Pentagon’s technology chief publicly foreclosed the possibility of renewed negotiations, further hardening the confrontation.
Congressional Democrats have separately sent letters to the Pentagon demanding information about whether AI played a role in a strike in Iran that reportedly killed more than 175 people at an elementary school. The questions about AI in military targeting now being asked on Capitol Hill parallel those being litigated in the federal courts. Together, these developments are forcing a reckoning with a fundamental question: how much control should AI companies have over how their technology is used in war?
Microsoft’s Legal Intervention Spotlights the Growing Conflict Between AI Ethics and Pentagon Authority