Friday, February 27, 2026

Anthropic Takes a Stand – The Atlantic

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Earlier this week, Secretary of Defense Pete Hegseth sat down with Dario Amodei, the CEO of the leading AI firm Anthropic, for a conversation about ethics. The Pentagon had been using the company’s flagship product, Claude, for months as part of a $200 million contract—the AI had even reportedly played a role in the January mission to capture Venezuelan President Nicolás Maduro—but Hegseth wasn’t satisfied. There were certain things Claude simply wouldn’t do.

That’s because Anthropic had instilled certain restrictions in it. The Pentagon’s version of Claude couldn’t be used to facilitate the mass surveillance of Americans, nor could it be used in fully autonomous weaponry—situations where computers, rather than humans, make the final decision about whom to kill. According to a source familiar with this week’s meeting, Hegseth made clear that if Anthropic didn’t eliminate these two guardrails by Friday afternoon, two things could happen: The Department of Defense could use the Defense Production Act, a Cold War–era law, to essentially commandeer a more permissive iteration of the AI, or it could label Anthropic a “supply-chain risk,” meaning that anyone doing business with the U.S. military would be forbidden from associating with the company. (This penalty is typically reserved for foreign firms such as China’s Huawei and ZTE.)

This evening, Anthropic said in a public statement that it “cannot in good conscience accede” to the Pentagon’s request. What happens next could mark a crucial moment for the company, and for the American government’s approach to AI regulation more broadly. In refusing to bow to an administration that has been intent on bullying private companies into submission, Amodei and his team are taking a bold stand on ethical grounds, and risking a censure that could erode Anthropic’s long-term viability.

During the first year of Donald Trump’s second term, the White House had a more relaxed attitude toward AI regulation; an AI Action Plan from July stresses that the administration will “continue to reject radical climate dogma and bureaucratic red tape” to encourage innovation. Hegseth is now, in effect, threatening to partially nationalize one of the biggest AI players in the private sector—and force the company to go against its own principles. “This is the most aggressive AI regulatory move I’ve ever seen, by any government anywhere in the world,” Dean Ball, who helped write some of the Trump administration’s AI policies, told me.

The Pentagon has already reportedly been reaching out to other defense contractors to see if they’re linked to Anthropic, a sign that officials are preparing to designate the company a supply-chain risk. Now that Anthropic has defied Hegseth, the contract is likely in peril. The firm doesn’t really need the $200 million—it reportedly pulls in $14 billion a year, and it said it raised $30 billion in venture capital just weeks ago—but being blacklisted could affect its ability to scale up in the future. (“We’re not walking away from negotiations,” an Anthropic spokesperson told The Atlantic in a statement. “We continue to engage in good faith with the Department on a way forward.” The Pentagon told CBS on Tuesday that “this has nothing to do with mass surveillance and autonomous weapons being used,” and that “the Pentagon has only given out lawful orders.”)

As AI companies around the world jockey for dominance, Anthropic has distinguished itself by emphasizing safety. OpenAI’s ChatGPT has been criticized for playing up some users’ delusions, leading to cases of “AI psychosis,” and just last month, xAI’s Grok was spinning up nearly nude images of virtually anyone without consent. (xAI has said it is restricting Grok from producing these kinds of images, and OpenAI has said it is working to make ChatGPT better support people in distress.) Meanwhile, Anthropic’s consumer-facing chatbot doesn’t generate images at all. By refusing to cave to government pressure, it may have just averted another crisis: a major public backlash from consumers, some of whom see the company as a more principled player in the AI wars. Anthropic recently faced some pushback over changing its policies—Time reported on Tuesday that, in a seemingly unrelated move, the company dropped a core safety pledge concerning its broader approach to AI development.

Weeks before Hegseth issued his ultimatum, Amodei opined on his website about the risks involved with precisely the two guardrails the Pentagon is targeting. “In some cases,” he wrote, “large-scale surveillance with powerful AI, mass propaganda with powerful AI, and certain kinds of offensive uses of fully autonomous weapons should be considered crimes against humanity.”

The Trump administration doesn’t seem to know what it wants from AI. On one hand, it’s deeply suspicious of certain kinds of models. The White House’s designated AI czar, David Sacks, has criticized Anthropic for “running a sophisticated regulatory capture strategy based on fear-mongering,” essentially accusing the firm of pushing for unnecessary, innovation-squashing limitations and jeopardizing the future of American tech. The administration has also criticized AI bots for sometimes spitting out “woke” replies. On the other hand, Claude is apparently valuable enough that it’s on the cusp of being commandeered by the federal government.

Ball told me that the Department of Defense may have a point—that there’s an argument to be made about reining in Silicon Valley’s control over the government’s use of new technologies. Although the concentration of power among the technocratic elite is certainly troubling, Hegseth’s proposed punishments for Anthropic are misguided and plainly contradictory. The Defense Production Act does allow the government to intervene in domestic industries in the interest of national security (the Biden administration invoked it in a 2023 executive order on AI regulation). But is Claude so important for U.S. national security that the government needs to compel Anthropic to create an untethered new version? Or is it so dangerous that it should be shunned—not just by the Pentagon, but by any business linked to the military? A third, even-more-bewildering option is also on the table: Hegseth could decide to simultaneously commission a modified Claude and sanction the company that stewards it.

All of this ignores a much simpler solution: Hegseth could just start a partnership with a different firm. It’s a good time for his department to be in business with tech, as the mood of Silicon Valley has lately become far more Pentagon-friendly. Palantir’s Alex Karp has touted that his software is used “to scare our enemies and, from time to time, kill them”; the technologist and entrepreneur Palmer Luckey is already building autonomous weaponry for the government; and Andreessen Horowitz’s American Dynamism funds are helping funnel the nation’s top young minds into defense tech. But rather than look elsewhere, Hegseth is threatening to crush Anthropic—implying that if he can’t control Claude, no one can.

As the defense secretary looks to make an example of the company, he’s taking a cue from Trump, who has used legal and extralegal pressure to effectively force other private firms, particularly big law firms, banks, and universities, into submission. These acts of coercion have the potential to reshape American capitalism: We’re beginning to see a market where winners and losers are decided less by the quality of their products and more by their seeming fealty to the White House. How that will affect the success of businesses and the economy is uncertain.

The Pentagon created this ultimatum precisely because it understands Anthropic’s world-altering potential. The administration just can’t decide whether it’s an asset, a liability, or both.

Related:


Here are three new stories from The Atlantic:


Today’s News

  1. A Columbia University student detained this morning by federal immigration agents has been released. The arresting officers reportedly misrepresented themselves as searching for a missing child in order to gain entry to the student’s residential building.
  2. Hillary Clinton told the House Oversight Committee that she has no new information about Jeffrey Epstein and maintained that she had no knowledge of his crimes; she criticized congressional Republicans’ handling of the probe as partisan. Bill Clinton is scheduled to give his deposition tomorrow.
  3. Cuban forces killed four people and wounded six after firing on a Florida-registered speedboat that Cuban authorities say entered the country’s waters yesterday and opened fire on a patrol vessel. Cuba claims that the U.S.-based passengers were armed and planning a “terrorist” infiltration.

Dispatches

Find all of our newsletters here.


More From The Atlantic


Evening Read

Illustration of a UFO above a forest scene, pulling three large dice with a tractor beam
Illustration by The Atlantic

This Looks Like an Insider Bet on Aliens

By Ross Andersen

On Monday night, someone placed a peculiar bet on the prediction market Kalshi. At 7:45 p.m. eastern time, a single trader put down nearly $100,000 on the claim that, by the end of December, the Trump administration will confirm that alien life or technology exists elsewhere in our universe. According to The Atlantic’s review of Kalshi’s trading data, about 35 minutes after this bet was executed, it was followed by another that was almost twice as large (possibly from the same person). These were market-moving events: For one brief stretch, the market seemed to think that there was at least a one-in-three chance that the U.S. government will announce the existence of aliens this year. Perhaps this was just an overexcited UFO diehard with a hunch and money to burn. Or maybe, as some observers quickly noted, it was a trader with inside knowledge.

Read the full article.


Culture Break

Illustration of a rumpled bed on the blank page of a book
Illustration by Alisa Gao / The Atlantic

Explore. When did literature get less dirty? A puritan strain is manifesting in realist novels as a marked absence of straight sex, Lily Meyer writes.

Read. Casey Schwartz on two new books that show how Martha Gellhorn, Janet Flanner, and other female reporters took journalism in directions that men couldn’t.

Play our daily crossword.

Rafaela Jinich contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.
