
After a chaotic, politically charged, and deeply controversial week for his company, Anthropic CEO Dario Amodei is reportedly making another attempt to rebuild a working relationship with officials at the U.S. Department of Defense. According to anonymous sources cited by the Financial Times, Amodei has quietly returned to the negotiating table in hopes of resolving the escalating dispute between Anthropic and the Pentagon and potentially salvaging some form of cooperation.
The standoff stems from a series of events that unfolded over the past several days. In the lead-up to the United States’ military confrontation with Iran, Anthropic had reportedly been engaged in sensitive discussions with Secretary of Defense Pete Hegseth and other senior officials about whether the company’s AI technology could be used in certain military and intelligence applications. At the center of those conversations was Claude, Anthropic’s flagship artificial intelligence system.
Government officials were reportedly interested in determining whether Claude could assist with large-scale surveillance operations, intelligence analysis, or even systems related to autonomous weapons and battlefield decision-making. These kinds of uses raise major ethical and political questions within the artificial intelligence industry, and Anthropic was said to be cautious about agreeing to them without clear boundaries.
The negotiations quickly became tense. From the Pentagon’s perspective, Anthropic’s hesitation and reluctance to fully commit to military uses of its AI technology may have been interpreted as resistance or even a challenge to government authority. Some officials allegedly reacted as though Amodei and his company were attempting to dictate terms to the U.S. military or interfere with national security priorities established by the Trump administration.
As a result, what might have started as a routine government–technology partnership discussion reportedly turned adversarial, with both sides becoming increasingly defensive and suspicious of the other’s motives. Observers described the exchange as unusually awkward.
Secretary Hegseth ultimately responded in a particularly dramatic fashion. According to reports, he declared that Anthropic could be classified as a supply-chain security risk, a designation that would have serious consequences for the company’s ability to do business with government contractors and defense partners. Hegseth also reportedly suggested that companies holding U.S. government contracts might soon be prohibited from working with Anthropic entirely.
The proposal was controversial not only because of its severity but also because of its legal ambiguity. Critics argued that the move could be difficult to justify under existing procurement laws and regulations governing government contracting. Despite that, the announcement signaled that relations between Anthropic and the Pentagon had deteriorated dramatically.
Adding another layer of irony to the situation, the restriction was reportedly not meant to take effect immediately. Instead, officials indicated that any such ban would begin roughly six months in the future. According to the Financial Times, this delay may have been partly due to the fact that Pentagon personnel were still actively using Anthropic’s Claude AI system during that same period. In fact, some reports suggested that Claude was being used internally to assist with planning and preparation related to a U.S. bombing campaign targeting Iran.
While Anthropic’s relationship with the Pentagon was deteriorating, one of its largest competitors was moving quickly in the opposite direction. OpenAI, the creator of ChatGPT and one of the most prominent companies in the artificial intelligence sector, reportedly finalized a new agreement with the Department of Defense that allowed its technology to be used within classified government networks and sensitive national security environments.
Only hours after that agreement was reportedly completed, U.S. military strikes were carried out against targets in Iran, intensifying global attention on the role artificial intelligence might play in modern warfare and defense operations.
Now, roughly a week after the initial controversy erupted, it appears that the conflict may be entering a new phase. According to reporting from the Financial Times, Amodei has resumed conversations with Pentagon leadership in an effort to ease tensions and possibly negotiate a compromise. This time, the talks are said to involve Emil Michael, the Pentagon’s undersecretary of defense for research and engineering.
Michael has previously been sharply critical of Amodei. In earlier remarks, he reportedly accused the Anthropic CEO of being dishonest and even suggested that Amodei suffers from what he described as a “God complex.” Because of those harsh comments, the fact that the two sides are now communicating again has attracted considerable attention within the technology and defense policy communities.
Further insight into Amodei’s perspective emerged earlier on Wednesday through an internal memo reportedly sent to Anthropic employees. The message, which was later described in media reports, contained a lengthy and somewhat rambling explanation of how Amodei views Anthropic’s position in the broader political and technological landscape.
In the memo, Amodei contrasted his company’s approach to government relations with that of OpenAI CEO Sam Altman, implying that the two companies have taken very different paths when dealing with political leaders and defense officials.
According to the reported text of the memo, Amodei emphasized that Anthropic has not engaged in what he described as “dictator-style praise” for President Donald Trump, suggesting that other figures in the technology industry may have taken a more accommodating approach. He also pointed out that Anthropic has consistently supported stronger government regulation of artificial intelligence, a policy position that he suggested conflicts with the priorities of some current policymakers.
Amodei also stressed that his company has tried to be transparent about the potential social and economic consequences of rapidly advancing AI systems. In particular, he referenced the issue of job displacement, which many experts believe could become a significant challenge as automation and machine learning technologies continue to evolve.
Another major theme in the memo involved what Amodei described as maintaining ethical boundaries. He claimed that Anthropic has attempted to uphold certain “red lines” regarding how its technology should be used, even when doing so may have complicated negotiations with powerful government partners.
In contrast, Amodei suggested that some organizations may engage in what he called “safety theater.” In this context, the phrase refers to public displays of concern about AI safety that may be designed more to reassure employees or the public than to produce meaningful safeguards.
According to the memo, individuals inside the Department of Defense, as well as consultants and contractors associated with military technology companies such as Palantir, had assumed that Anthropic’s goal was to participate in that kind of performative safety process. Amodei, however, insisted that the company’s position was more serious and principled than that assumption.
The controversy surrounding the dispute has begun to ripple outward across the broader technology sector. Earlier on Wednesday, the Information Technology Industry Council, a major trade organization representing some of the world’s largest technology companies, issued a public statement addressing the situation.
The council—which includes companies such as Nvidia, Amazon, Apple, and OpenAI among its members—said it was “concerned by recent reports” describing a conflict between a major technology firm and the U.S. Department of Defense. Although the statement did not mention Anthropic by name, many observers interpreted it as a response to the ongoing situation.
Industry leaders worry that escalating conflicts between the government and AI developers could create uncertainty about how emerging technologies will be regulated, deployed, and integrated into national security systems in the future.
For now, the exact status of the negotiations remains unclear. Gizmodo has contacted Anthropic directly in an effort to confirm whether the new round of talks with Pentagon officials is actually taking place and to request further details about what those discussions might involve.
As of the latest reports, the company has not publicly commented on the matter. However, journalists say they plan to update their coverage if Anthropic responds or if additional information about the negotiations becomes available.