Microsoft has taken the extraordinary step of challenging the Pentagon in federal court, filing a supporting legal brief for Anthropic in the AI company's battle against a supply-chain risk designation that Anthropic says threatens both its business and its principles. The brief, submitted to a federal court in San Francisco, urged the court to issue a temporary restraining order against the Defense Department's designation. Microsoft was joined by Amazon, Google, Apple, and OpenAI, all of which backed Anthropic through coordinated court filings.
The Pentagon's designation stemmed from Anthropic's refusal, during a $200 million contract negotiation, to allow its AI to be used for mass surveillance of American citizens or to power autonomous lethal weapons. Defense Secretary Pete Hegseth labeled the company a supply-chain risk, a designation that led to the cancellation of Anthropic's existing government contracts and effectively barred it from future federal work. The company responded by filing two simultaneous lawsuits challenging the designation as unconstitutional and unprecedented.
Microsoft's stake in the legal battle is direct: it uses Anthropic's technology in military systems it provides to the federal government. As a partner in the Pentagon's $9 billion Joint Warfighting Cloud Capability contract and a holder of additional federal agreements, Microsoft is affected by any restriction on Anthropic's ability to operate in the government market. The company publicly called for government and industry collaboration to harness advanced AI responsibly while preventing its misuse for surveillance or uncontrolled warfare.
Anthropic argued in its court filings that the supply-chain risk designation, normally reserved for companies with ties to adversarial nations, was being weaponized as ideological punishment for its publicly stated views on AI safety. The company disclosed that it does not believe Claude is currently safe or reliable enough for lethal autonomous operations, which it said was the technical reality behind its contract demands. The Pentagon's technology chief publicly confirmed that renegotiation was not being considered.
House Democrats, meanwhile, have written to the Pentagon demanding information about whether AI was used in a strike in Iran that reportedly killed over 175 civilians at a school. Their inquiry asks specifically about the role of AI in targeting decisions and the extent of human oversight. This simultaneous legal and legislative pressure on the Pentagon is unprecedented, and it may ultimately force the government to establish clear public rules governing the use of artificial intelligence in military operations.