The legal challenge intensifies an unusually public dispute over how AI can be used in warfare and mass surveillance — one that has also dragged in Anthropic's tech industry rivals, particularly ChatGPT maker OpenAI, which made its own deal to work with the Pentagon just hours after the government punished Anthropic for its stance.
Anthropic filed two separate lawsuits Monday, one in California federal court and another in the federal appeals court in Washington, D.C., each challenging different aspects of the government's actions against the San Francisco-based company.
“These actions are unprecedented and unlawful,” Anthropic's lawsuit says. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.”
The Defense Department declined to comment Monday, citing a policy of not commenting on matters in litigation.
Anthropic said it sought to restrict its technology from two uses: mass surveillance of Americans and fully autonomous weapons. Defense Secretary Pete Hegseth and other officials publicly insisted the company must accept “all lawful” uses of Claude and threatened punishment if Anthropic did not comply.
Designating the company a supply chain risk cuts off Anthropic's defense work using an authority that was designed to prevent foreign adversaries from harming national security systems. It was the first time the federal government is known to have used the designation against a U.S. company. Hegseth said in a March 4 letter to Anthropic that it was “necessary to protect national security,” according to Anthropic's lawsuit.
President Donald Trump also said he would order federal agencies to stop using Claude, though he gave the Pentagon six months to phase out a product that’s deeply embedded in classified military systems, including those used in the Iran war.
Anthropic's lawsuit also names other federal agencies, including the departments of Treasury and State, after officials ordered employees to stop using Anthropic’s services.
Even as it fights the Pentagon’s actions, Anthropic has sought to convince businesses and other government agencies that the Trump administration’s supply chain risk designation is a narrow one that only affects military contractors when they are using Claude in work for the Department of Defense.
Making that distinction clear is crucial for the privately held Anthropic because most of its projected $14 billion in revenue this year comes from businesses and government agencies that are using Claude for computer coding and other tasks. More than 500 customers are paying Anthropic at least $1 million annually for Claude, according to a recent investment announcement that valued the company at $380 billion.
Anthropic said in a statement Monday that “seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners."
The lawsuit positions AI safety and "positive outcomes for humanity” as critical to Anthropic's mission since it was founded in 2021 by CEO Dario Amodei and six other former OpenAI employees.
Its usage policy has “always prohibited the use of Anthropic’s services for lethal autonomous warfare without human oversight and surveillance of Americans en masse,” the company said in its lawsuit. Anthropic said it has never tested Claude on those applications and lacks confidence that its products could “function reliably or safely if used to support lethal autonomous warfare.”
At the same time, it allowed the military to use Claude in ways that civilians could not, including in military operations and in analyzing “lawfully collected foreign intelligence information.”
Until recently, Anthropic was the only one of its tech industry peers approved to supply its AI model to classified military systems. The dispute has led the Pentagon to look to shift Claude's work to Google's Gemini, OpenAI's ChatGPT and Elon Musk's Grok.
Anthropic's lawsuit alleges the Trump administration's actions are impugning its reputation, “jeopardizing hundreds of millions of dollars” in contracts with other businesses and attempting to “destroy the economic value created by one of the world’s fastest-growing private companies.”
Conversely, the fight has also boosted Anthropic's reputation among some customers and tech workers who are siding with the company's refusal to bow to pressure from the Trump administration. Amodei's moral stance stood out all the more when his bitter rival, OpenAI CEO Sam Altman, sought to replace the Pentagon's Claude with ChatGPT in a move Altman later admitted was rushed and seemed opportunistic.
Consumer downloads of Claude surged, lifting it above the better-known ChatGPT and Gemini in popularity for the first time.
The controversy also continues to have repercussions in the competition to retain AI industry talent, leading to the resignation of OpenAI's head of robotics, Caitlin Kalinowski.
“This wasn't an easy call,” Kalinowski wrote on social media over the weekend. “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
