It’s like a poorly performed stand-up routine, trying and failing to land a punchline. Oh, you might ask, what the hell am I talking about? Excellent question.
Observe:
You’re in the middle of what is perhaps the most important arms race of the century. Your top domestic AI company, the first one trusted with classified military networks, signs a $200 million contract with your “defense” department. They attach two conditions: don’t use this to build fully autonomous weapons, and don’t use it for mass surveillance of your own population. You say no to the conditions and ban the company instead.
This is how the U.S. military’s been moving in the year of our Lord 2026.
In February, the Pentagon decided to sever its relationship with Anthropic, the company behind the AI assistant Claude, over the company’s refusal to sign an “all lawful purposes” contract that would remove all restrictions on military use. U.S. Secretary of War Pete Hegseth declared that any contractor or supplier doing business with the U.S. military is barred from commercial activity with Anthropic. That kind of designation has historically been reserved for foreign adversaries, not domestic companies, let alone one that, I’m sure, was until recently considered a crown jewel of American AI.
And to me, the logic falls apart right here. The administration had previously threatened to invoke the Defense Production Act, a tool used to compel companies to serve national security needs, to force Anthropic’s compliance. You cannot simultaneously argue that something is so necessary it requires the Defense Production Act and so dangerous it warrants a foreign-adversary designation. Make it make sense.
Then, after the ban was announced, Claude was used in U.S. military operations against Iran. Way to stick to your (and pardon me here) guns.
Now, Anthropic isn’t innocent either (I’d argue no AI company ever will be, but regardless). The framing of their stance as some brave moral line needs unpacking. Firstly, those surveillance restrictions only cover American citizens; there’s nothing in the policy about anyone else. Which makes sense for an American company… signing with the American military… in the name of American best interests. But doesn’t the whole point of taking a moral stance fall apart if it’s selective? Anthropic presents itself as governed by firm ethical constraints, yet stands ready to do almost anything else the Pentagon needs, regardless of ethics.
The idea that this is a company with deep reservations about military applications can be tested, and it doesn’t hold water. They were the first on classified networks, embedded with Palantir, and already marketing Claude as a national security tool before this dispute started, with CEO Dario Amodei writing publicly about the existential importance of using AI to defend the United States and other democracies. In the end, they had two specific conditions that the Pentagon didn’t like, which to me reads differently than meaningfully opposing military demands, even if the press has largely treated it as such.
So, neither side abides by any real principles, and in the clusterfuck of it all, capitalism swoops in to secure relations with the government. OpenAI, Google, and xAI have all agreed to lift standard guardrails for Pentagon use, accepting broad terms with no hard restrictions.
As the industry walks back its safety commitments, Anthropic isn’t the only one getting burned. Take Palantir’s Maven Smart System, a weapons-targeting platform with more than $1 billion in Pentagon contracts: because it was built in part on Anthropic’s Claude, it could now face a costly and time-consuming rebuild. And whether or not you think Anthropic’s two conditions were reasonable, this response does not work as a serious strategic position. To me it’s clearly a pressure campaign dressed up as policy, and the people who will feel it most aren’t executives; they’re the analysts and contractors who actually used these tools to get work done, not just to turn a buck.
AI has many genuine problems, from moral and ethical dilemmas to environmental impact, but even as a plain data-analysis tool, it’s a Pandora’s box that can’t be closed. It’s part of a technology arms race that can’t simply be ignored. Learning to navigate AI legislation, ethics, and praxis (or even just to establish them in the first place) is an ongoing battle, and not one you win by shooting yourself in the foot to prove a point.

