Pentagon’s chief tech officer says he clashed with AI company Anthropic over autonomous warfare

A top Pentagon official said the Pentagon's break with Anthropic over the use of its artificial intelligence technology in fully autonomous weapons came after a debate over how AI could be used in President Donald Trump's planned Golden Dome missile defense project, which aims to put U.S. weapons in space.

U.S. Defense Undersecretary Emil Michael, the Pentagon's chief technology officer, said he came to view the AI company's ethical restrictions on the use of its chatbot Claude as an irrational obstacle as the U.S. military pursues giving greater autonomy to swarms of drones, underwater vehicles and other machines to compete with rivals like China that could do the same.

"I need a reliable, steady partner that gives me something, that'll work with me on autonomous, because someday it'll be real and we're starting to see earlier versions of that," Michael said in a podcast aired Friday. "I need someone who's not going to wig out in the middle."

The comments came after the Pentagon designated San Francisco-based Anthropic a supply chain risk, cutting off its defense work using a rule designed to prevent foreign adversaries from harming national security systems.

Anthropic has vowed to sue over the designation, which affects its business partnerships with other military contractors.

Trump has also ordered federal agencies to immediately stop using Claude, though the Republican president gave the Pentagon six months to phase out a product that's deeply embedded in classified military systems.

Anthropic said it sought only to bar its technology from two high-level uses: mass surveillance of Americans and fully autonomous weapons.

Michael, a former Uber executive, revealed his side of months-long talks with Anthropic CEO Dario Amodei in a lengthy conversation with Silicon Valley venture capitalists Jason Calacanis, David Friedberg and Chamath Palihapitiya, co-hosts of the "All-In" podcast.

A fourth co-host, former PayPal executive David Sacks, is now Trump’s AI czar and was not present for the episode but has been a vocal critic of Anthropic, including for its hiring of former Biden administration officials shortly after Trump returned to the White House last year.

As talks hit an impasse last week, Michael lashed out at Amodei on social media, saying he "has a God-complex" and "wants nothing more than to try to personally control" the military. In the podcast, however, he positioned the dispute as part of a broader military shift toward using AI.

Michael said the military is developing procedures for enabling different levels of autonomy in warfare depending on the risk posed.

"This is part of the debate I had with Anthropic, which is we need AI for things like Golden Dome," Michael said, sharing a hypothetical scenario of the U.S. having only 90 seconds to respond to a Chinese hypersonic missile.

A human anti-missile operator "may not be able to discriminate with their own eyes what they're going after," but an autonomous counterattack would be a low risk "because it's in space and you're just trying to hit something that's trying to get you."

In another scenario, he said, "who could oppose if you have a military base, you have a bunch of soldiers sleeping, that you have a laser that can take down drones autonomously?"

In response to the podcast comments, Anthropic pointed to an earlier Amodei statement saying "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner."

Michael, the defense undersecretary for research and engineering, was sworn in last May and said he took over the military's "AI portfolio" in August. That's when he said he began scrutinizing Anthropic's contracts, some of which dated from President Joe Biden's Democratic administration. Michael said he questioned Anthropic over terms of use that he deemed too restrictive.

"I need to have the terms of service be rational relative to our mission set," he said. "So we started these negotiations. It took three months and I had to sort of give them scenarios, like this Chinese hypersonic missile example. They're like, 'OK, we'll give you an exception for that.' Well, how about this drone swarm? 'We'll give an exception for that.' And I was like, exceptions doesn't work. I can't predict for the next 20 years what (are) all the things we might use AI for."

That's when the Pentagon began insisting Anthropic and other AI companies allow for "all lawful use" of their technology, Michael said.

Anthropic refused, arguing that today's leading AI systems "are simply not reliable enough to power fully autonomous weapons."

Its competitors Google, OpenAI and Elon Musk's xAI agreed to the Pentagon's terms, though some still have to get their infrastructure prepared for classified military work, Michael said. The other sticking point for Anthropic was not allowing any mass surveillance of Americans.

"They didn't want us to bulk-collect public information on people using their AI system," Michael said, describing the negotiations as "interminable."

Anthropic has disputed parts of Michael’s version of the talks and emphasized that the protections it sought were narrow and not based on existing uses of Claude. The next stage of the dispute will likely happen in court.

Copyright © 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
