Anthropic and the Pentagon – Schneier on Security

Anthropic and the Pentagon

OpenAI is in and Anthropic is out as a provider of AI technology for the US defense department. This news caps a week of bluster by the highest officials in the US government toward some of the wealthiest titans of the big tech industry, and the overhanging specter of the existential risks posed by a new technology powerful enough that the Pentagon claims it is essential to national security. At issue is Anthropic's insistence that the US Department of Defense (DoD) could not use its models to facilitate "mass surveillance" or "fully autonomous weapons," provisions the defense secretary Pete Hegseth derided as "woke."

It all came to a head on Friday night when Donald Trump issued an order for federal government agencies to discontinue use of Anthropic models. Within hours, OpenAI had swooped in, potentially seizing hundreds of millions of dollars in government contracts by striking an agreement with the administration to provide classified government systems with AI.

Despite the histrionics, this is probably the best outcome for Anthropic, and for the Pentagon. In our free-market economy, both are, and should be, free to sell and buy what they want with whom they want, subject to longstanding federal rules on contracting, acquisitions, and blacklisting. The only thing out of place here is the Pentagon's vindictive threats.

AI models are increasingly commodified. The top-tier offerings have about the same performance, and there is little to differentiate one from the other. The latest models from Anthropic, OpenAI and Google, in particular, tend to leapfrog one another with minor hops forward in quality every few months. The best models from one provider are preferred by users over the second-, third-, or tenth-best models at a rate of only about six times out of 10: a virtual tie.

In this kind of market, branding matters a lot. Anthropic and its CEO, Dario Amodei, are positioning themselves as the ethical and trustworthy AI provider. That has market value for both consumers and business clients. In taking Anthropic's place in government contracting, OpenAI's CEO, Sam Altman, vowed to somehow uphold the same safety principles Anthropic had just been pilloried for. How that is possible given the rhetoric of Hegseth and Trump is entirely unclear, but it seems certain to further politicize OpenAI and its products in the minds of consumers and corporate buyers.

Posturing publicly against the Pentagon, and as a hero to civil libertarians, is quite probably worth the cost of the lost contracts to Anthropic, and associating themselves with those same contracts could be a trap for OpenAI. The Pentagon, meanwhile, has plenty of options. Even if no big tech company were willing to supply it with AI, the department has already deployed dozens of open-weight models, whose parameters are public and are often licensed permissively for government use.

We can admire Amodei's stance, but, to be sure, it is primarily posturing. Anthropic knew what it was getting into when it agreed to a defense department partnership for $200m last year. And when it signed a partnership with the surveillance company Palantir in 2024.

Read Amodei's statement about the issue. Or his January essay on AIs and risk, where he repeatedly uses the words "democracy" and "autocracy" while evading precisely how collaboration with US federal agencies should be viewed in this moment. Amodei has bought into the idea of using "AI to achieve robust military superiority" on behalf of the democracies of the world in response to the threats from autocracies. It is a heady vision. But it is a vision that likewise supposes that the world's nominal democracies are committed to a common vision of public wellbeing, peace-seeking and democratic control.

Regardless, the defense department can also reasonably demand that the AI products it purchases meet its needs. The Pentagon is not a normal customer; it buys products that kill people all the time. Tanks, artillery pieces, and hand grenades are not products with ethical guardrails. The Pentagon's needs reasonably involve weapons of lethal force, and those weapons are continuing on a steady, if potentially catastrophic, path of increasing automation.

So, on the surface, this dispute is a normal market give-and-take. The Pentagon has unique requirements for the products it uses. Companies can decide whether or not to meet them, and at what price. And then the Pentagon can decide from whom to acquire those products. Sounds like a normal day at the procurement office.

But, of course, this is the Trump administration, so it doesn't stop there. Hegseth has threatened Anthropic not just with the loss of government contracts. The administration has, at least until the inevitable lawsuits force the courts to sort things out, designated the company as "a supply-chain risk to national security," a designation previously only ever applied to foreign companies. This prevents not only government agencies, but also their own contractors and suppliers, from contracting with Anthropic.

The government has, incompatibly, also threatened to invoke the Defense Production Act, which could force Anthropic to remove contractual provisions the department had previously agreed to, or perhaps to fundamentally modify its AI models to remove built-in safety guardrails. The government's demands, Anthropic's response, and the legal context in which they are acting will undoubtedly all change over the coming weeks.

But, alarmingly, autonomous weapons systems are here to stay. Primitive pit traps evolved into mechanical bear traps. The world is still debating the ethical use of, and dealing with the legacy of, land mines. The US Phalanx CIWS is a 1980s-era shipboard anti-missile system with a fully autonomous, radar-guided cannon. Today's military drones can search for, identify and engage targets without direct human intervention. AI will be used for military purposes, just as every other technology our species has invented has been.

The lesson here should not be that one company in our rapacious capitalist system is more moral than another, or that one corporate hero can stand in the way of governments adopting AI as a technology of war, or surveillance, or repression. Sadly, we do not live in a world where such obstacles are permanent, or even particularly durable.

Instead, the lesson is about the importance of democratic structures and the urgent need for their renovation in the US. If the defense department is demanding the use of AI for mass surveillance or autonomous warfare that we, the public, find unacceptable, that should tell us we need to pass new legal restrictions on those military activities. If we are uncomfortable with the force of government being applied to dictate how and when companies yield to unsafe applications of their products, we should strengthen the legal protections around government procurement.

The Pentagon should maximize its warfighting capabilities, subject to the law. And private companies like Anthropic should posture to gain consumer and buyer confidence. But we should not rest on our laurels, thinking that either is doing so in the public's interest.

This essay was written with Nathan E. Sanders, and originally appeared in The Guardian.

Posted on March 6, 2026 at 12:07 PM