Microsoft has opened a new chapter in the relationship between the AI industry and the US military by filing a court brief in San Francisco federal court supporting Anthropic’s challenge to the Pentagon’s supply-chain risk designation. The brief called for a temporary restraining order and argued that the designation threatens technology networks critical to national defense. Amazon, Google, Apple, and OpenAI have also filed in support of Anthropic, a rare show of unity across the technology industry.
The dispute began when Anthropic refused to allow its Claude AI to be used for mass surveillance of US citizens or to power autonomous lethal weapons during negotiations over a $200 million contract with the Pentagon. Defense Secretary Pete Hegseth applied the supply-chain risk designation after talks broke down, and the Pentagon’s technology chief publicly ruled out any renegotiation. Anthropic responded with two simultaneous lawsuits, in California and Washington DC, challenging the designation.
Microsoft’s brief is grounded in its direct use of Anthropic’s technology in military systems and its participation in the Pentagon’s $9 billion cloud computing contract. Additional agreements with defense, intelligence, and civilian agencies deepen Microsoft’s stake in the dispute. Microsoft publicly argued that the government and the technology sector must work together to ensure advanced AI serves national security without crossing ethical lines on surveillance or autonomous warfare.
Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of retaliation for the company’s publicly stated AI safety positions. The company disclosed that it does not believe Claude is currently safe or reliable enough for lethal autonomous operations.
Congressional Democrats have separately asked the Pentagon whether AI was involved in a strike in Iran that reportedly killed more than 175 civilians at a school, raising questions about AI targeting and human oversight. Their formal inquiries are adding legislative pressure to an already escalating confrontation. Microsoft’s brief may ultimately be seen as the document that defined the terms of this new chapter in the history of AI and the US military.
