There is something almost paradoxical about this week’s events in the AI industry. Anthropic was blacklisted from government contracts for insisting on ethical restrictions around autonomous weapons and surveillance. OpenAI then secured a government contract that contains — by its own CEO’s account — those very same restrictions. The difference in outcome demands explanation.
Anthropic had negotiated patiently with the Pentagon for months, offering wide-ranging support for military AI operations while drawing firm lines around two uses: autonomous weapons and mass surveillance. Pentagon officials refused to accept those lines, and when Anthropic would not remove them, the Trump administration ordered a complete and immediate halt to all federal use of Anthropic technology.
President Trump’s statement on Truth Social was politically charged, casting Anthropic’s principled stand as ideologically motivated and constitutionally suspect. The language was designed to discredit the company’s motives rather than engage its arguments on the merits.
Sam Altman announced OpenAI’s Pentagon deal just hours later, stating clearly that the company had secured commitments against mass surveillance and autonomous weapons use. He went further, calling on the Pentagon to apply these same terms to all AI companies — a position that sounded remarkably like what Anthropic had been arguing all along.
Meanwhile, solidarity within the industry was holding. Hundreds of OpenAI and Google employees had already signed a letter warning about Pentagon efforts to pit AI companies against one another. Anthropic, no longer constrained by ongoing negotiations, issued a statement that was as clear as it was unapologetic: its ethical principles are permanent, not political, and no punishment will change them.