
Anthropic’s intensifying legal dispute with the United States federal government is attracting growing attention across the global artificial intelligence industry. In a significant new development, dozens of AI professionals from competing technology companies have submitted an amicus curiae brief supporting Anthropic’s legal challenge against the government’s controversial decision to classify the company as a “supply chain risk to national security.”
The filing highlights a rare moment of unity within the fiercely competitive AI ecosystem. Engineers, researchers, and scientists from Google and OpenAI—two companies widely viewed as direct competitors to Anthropic—have stepped forward to publicly support the startup’s legal arguments. Their involvement underscores the broader implications of the case for the future of AI governance, innovation, and industry regulation.
Anthropic Files Lawsuits Challenging “National Security Risk” Label
Earlier this week, Anthropic filed two separate lawsuits contesting the federal government’s authority to designate the company as a potential national security threat. The AI startup argues that the classification is both legally unjustified and economically damaging, effectively preventing major technology firms, defense contractors, and enterprise clients from entering partnerships or business agreements with the company.
Anthropic claims the designation could significantly disrupt its operations and hinder its ability to compete in the rapidly expanding artificial intelligence market. By restricting collaboration opportunities with major institutions, the decision could also reshape the competitive landscape of the AI industry.
Legal experts note that such a designation carries far-reaching consequences. Being labeled a “supply chain risk” can deter corporations, government agencies, and international partners from engaging with the affected company, potentially limiting investment, access to infrastructure, and large-scale commercial deployments of AI technology.
AI Researchers From Google and OpenAI File Amicus Brief
The amicus brief, filed Monday, includes 37 signatories: engineers, researchers, scientists, and technology professionals currently working within the artificial intelligence divisions of Google and OpenAI.
In legal proceedings, an amicus curiae—Latin for “friend of the court”—is a person or organization that is not a direct party to the case but provides expert insight or perspective intended to assist the court in reaching an informed decision.
What makes this filing particularly noteworthy is the identity of the individuals supporting Anthropic. The signatories come from organizations that are actively competing with Anthropic in the race to develop advanced AI models and infrastructure, making their public support highly unusual in such a competitive sector.
The most prominent name on the brief is Jeff Dean, one of the most influential figures in modern computer science. Dean serves as chief scientist at Google, where he oversees many of the company’s most advanced artificial intelligence research initiatives. Widely respected in the technology community, Dean has played a critical role in shaping the development of large-scale machine learning systems and AI infrastructure across the industry.
Beyond his role at Google, Dean is also known for supporting emerging AI ventures and contributing to broader innovation in the field. His presence among the signatories adds significant credibility and visibility to the brief.
The Core Arguments Behind the AI Industry’s Support
The amicus brief presents three primary arguments defending Anthropic’s actions and criticizing the federal government’s response.
First, the AI professionals argue that Anthropic acted responsibly in maintaining strict ethical boundaries—often referred to within the company as “red lines”—regarding the development and deployment of artificial intelligence technologies.
These red lines reportedly include strong opposition to the creation of AI systems designed for mass surveillance programs or for fully autonomous lethal weapons, areas that raise serious ethical and societal concerns within the global AI research community.
According to the amici, Anthropic’s decision to maintain these ethical safeguards should be viewed as an example of responsible corporate governance rather than a reason for punitive government action.
Second, the brief asserts that the government’s designation of Anthropic as a national security risk represents an improper and arbitrary use of regulatory authority. The signatories warn that such actions could discourage companies from adopting strict ethical guidelines or voicing concerns about potentially harmful applications of advanced technologies.
Finally, the researchers argue that the government’s decision could create dangerous ripple effects throughout the AI ecosystem, potentially undermining trust between private technology firms and public institutions. If companies believe they may face retaliation for raising ethical concerns, the broader industry could become less transparent and less willing to engage in responsible oversight discussions.
Additional AI Experts Sign Onto the Filing
Beyond Jeff Dean, the list of signatories spans the technical teams of both organizations. Other notable signatories include:
- Grant Birkinbine, security engineer at OpenAI
- Sanjeev Dhanda, software engineer at Google
- Leo Gao, technical staff member at OpenAI
- Zach Parent, forward-deployed engineer at OpenAI
- Kathy Korevec, director of product at Google Labs
- Ian McKenzie, research engineer at Google
Their participation reflects growing concern among AI practitioners that the outcome of the case could set an important precedent affecting innovation, corporate governance, and the ethical development of artificial intelligence systems.
OpenAI CEO Sam Altman Criticizes Government Decision
The legal dispute has also drawn public commentary from OpenAI CEO Sam Altman, who has openly criticized the federal government’s decision since the controversy first emerged.
In a post published on X on February 28, Altman wrote:
“To say it very clearly: I think this is a very bad decision from the DoW and I hope they reverse it. If we take heat for strongly criticizing it, so be it.”
Altman’s comments reflect broader skepticism within parts of the technology industry regarding the government’s handling of the situation.
However, Altman has also acknowledged that the timing of OpenAI’s own agreement with the Pentagon, announced around the same period that Anthropic’s conflict with government officials escalated, “looked opportunistic and sloppy,” even if it was ultimately coincidental.
Why the Anthropic Case Matters for the Future of AI
The growing support for Anthropic from engineers and researchers across competing organizations illustrates how high the stakes are for the AI industry.
At its core, the case raises fundamental questions about:
- Government oversight of artificial intelligence companies
- National security concerns related to advanced AI systems
- Corporate responsibility and ethical AI development
- The balance between regulation and technological innovation
As artificial intelligence continues to reshape industries ranging from defense and cybersecurity to healthcare and finance, the outcome of this legal battle could influence how governments interact with AI companies for years to come.
For many experts in the field, the unusual alliance of competitors supporting Anthropic sends a clear message: the dispute is not just about one company, but about the broader future of the global AI ecosystem and the rules that will govern it.