
News broke late Monday night that OpenAI and the U.S. Department of Defense have formally amended their controversial partnership agreement, inserting more expansive and explicitly worded privacy protections into the contract. The revised language appears designed to quell escalating public backlash, industry criticism, and political scrutiny over the domestic surveillance potential of advanced artificial intelligence systems deployed in national security contexts.
According to reporting from Axios, the updated agreement now contains more explicit constitutional and statutory guardrails than earlier drafts. The new provisions directly reference foundational legal authorities governing government surveillance powers and civil liberties protections. Among the added language are the following clauses:
“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act (FISA) of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
“For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”
The specificity of these additions is notable. By explicitly naming the Fourth Amendment—which protects against unreasonable searches and seizures—as well as landmark national security legislation such as the National Security Act and FISA, the revised contract appears to move beyond broad assurances and instead anchor its restrictions in clearly defined constitutional and statutory frameworks. The phrasing “for the avoidance of doubt” further signals that both parties sought to eliminate interpretive ambiguity surrounding the permissible scope of AI-enabled data analysis.
The timing of these changes is widely interpreted as a direct response to mounting tensions between the Pentagon and OpenAI’s rival, Anthropic. A detailed New York Times report published earlier described the breakdown in negotiations between the Department of Defense and Anthropic, culminating in Anthropic being designated a “supply-chain risk” and restricted from conducting business with certain major contractors.
According to that report, Anthropic raised concerns about the potential use of AI systems to analyze unclassified but commercially sourced bulk datasets concerning Americans. These datasets, while not classified intelligence, can include granular location metadata harvested from smartphone applications, detailed web browsing histories, consumer purchasing records, and other forms of behavioral telemetry. Although legally obtainable through commercial channels, such information can—when aggregated and processed at scale—enable powerful forms of pattern recognition, geospatial tracking, and predictive behavioral modeling.
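To make that concrete, consider a minimal, self-contained sketch (all data invented; no actual broker feed or government system is implied) of the kind of pattern recognition the paragraph describes: simply grouping a device's commercially brokered location pings by time of day is enough to surface its owner's likely home and workplace.

```python
# Hypothetical illustration: how aggregated ad-tech location pings can reveal
# routine patterns such as a person's likely home and work locations.
from collections import Counter

# Each ping: (device_id, hour_of_day, latitude, longitude) -- the kind of
# record resold by commercial data brokers. All values below are invented.
pings = [
    ("device-42", 2, 38.8899, -77.0091),   # overnight pings cluster here
    ("device-42", 3, 38.8898, -77.0092),
    ("device-42", 23, 38.8901, -77.0090),
    ("device-42", 10, 38.9007, -77.0167),  # workday pings cluster elsewhere
    ("device-42", 11, 38.9008, -77.0166),
    ("device-42", 14, 38.9006, -77.0168),
]

def grid_cell(lat: float, lon: float, precision: int = 3) -> tuple:
    """Snap coordinates to a coarse grid (~100 m at 3 decimal places)."""
    return (round(lat, precision), round(lon, precision))

def infer_anchor_points(pings: list) -> dict:
    """Count which grid cells a device occupies at night vs. during the day."""
    night, day = Counter(), Counter()
    for _, hour, lat, lon in pings:
        bucket = night if hour >= 22 or hour <= 5 else day
        bucket[grid_cell(lat, lon)] += 1
    return {
        "likely_home": night.most_common(1)[0][0] if night else None,
        "likely_work": day.most_common(1)[0][0] if day else None,
    }

print(infer_anchor_points(pings))
# {'likely_home': (38.89, -77.009), 'likely_work': (38.901, -77.017)}
```

At data-broker scale, with billions of pings across millions of devices, the same grouping logic becomes the geospatial tracking and pattern-of-life analysis that the amended contract language is intended to keep away from data on U.S. persons.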
Anthropic reportedly sought what the Times described as a “legally binding promise” from the Pentagon that its models would not be used on such unclassified commercial data relating to U.S. persons. Negotiations ultimately collapsed, and the dispute escalated into a broader standoff over AI governance principles, acceptable use boundaries, and contractual enforceability.
Pentagon officials, for their part, have consistently argued that the requested limitations were unnecessary and duplicative of existing law. Defense Department spokesman Sean Parnell wrote publicly on X that “The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal).” The Pentagon’s position, as articulated by Parnell and others, is that the Department simply seeks contractual flexibility to conduct any operations that are already lawful under established statutes. From that perspective, imposing additional contractual constraints beyond existing legal prohibitions could hinder legitimate intelligence, defense logistics, cybersecurity analysis, or battlefield planning functions.
OpenAI CEO Sam Altman has indicated that he shares at least some of Anthropic’s civil liberties concerns, even while continuing to pursue defense-related partnerships. In posts published Monday night, Altman suggested that negotiations with the Pentagon involved extensive back-and-forth discussions over how explicitly domestic surveillance restrictions should be spelled out. While the Department reportedly emphasized that mass surveillance of Americans is already prohibited under constitutional law, OpenAI appears to have insisted that the contract itself codify these protections in unmistakable terms.
The resulting amendments can be interpreted as a compromise: constitutional and statutory limitations are now embedded directly into the contractual language, transforming what might otherwise be implicit legal boundaries into explicit, enforceable commitments within the agreement itself. This move may serve both reputational and risk-management purposes, providing clearer documentation of OpenAI’s stated ethical constraints while preserving the Pentagon’s ability to operate within the bounds of existing law.
Beyond the contractual technicalities, the broader geopolitical environment adds further complexity. The agreement was signed shortly before the most recent escalation involving U.S. military action connected to Iran. Although it would be speculative to assert that OpenAI has suffered measurable financial losses as a direct result of the controversy, public reaction has been swift and, in some corners, intense.
An activist website titled “QuitGPT” has emerged, calling for a boycott of ChatGPT. The homepage prominently displays a counter—without citation—claiming that more than 1.5 million individuals have pledged to abandon the platform. The site urges users to “make an example of ChatGPT” and to “send a clear signal to ICE enablers that their actions will not go unpunished.” While the rhetoric frames the issue in terms of immigration enforcement and civil liberties, observers note that the actual contractual distinctions between OpenAI’s and Anthropic’s government engagements may be narrower and more technical than activist messaging suggests.
The political dimension of the dispute has also grown more pronounced. President Donald Trump reportedly described executives at Anthropic as “leftwing nut jobs,” injecting partisan commentary into what had previously been framed primarily as a policy and governance disagreement over AI safeguards. The episode illustrates how quickly emerging debates over AI governance can become entangled with broader ideological narratives.
Meanwhile, cultural reactions have added a layer of symbolic drama. Pop star Katy Perry publicly announced that she has switched to Anthropic’s Claude, a gesture that, while largely symbolic, reflects how consumer perceptions of AI providers are increasingly shaped by ethical positioning and public controversy.
Technology publication Gizmodo has contacted OpenAI seeking comment on whether the backlash has had any tangible impact on enterprise customers, subscription growth, partnership pipelines, or internal policy deliberations. As of the latest reporting, OpenAI had not issued additional statements beyond the amended contractual language.
At a deeper level, the controversy highlights a structural tension at the heart of the AI era: advanced machine learning systems are uniquely powerful tools for analyzing massive, heterogeneous datasets at unprecedented scale and speed. That analytical capability can serve legitimate national security, logistics, cybersecurity, and disaster response purposes. Yet those same capabilities raise profound civil liberties concerns when applied to sensitive domestic data streams without robust oversight.
As AI models grow more capable and more deeply integrated into governmental and defense workflows, contract language itself is emerging as a critical battleground. The precise wording of terms such as “intentional use,” “domestic surveillance,” and “commercially acquired identifiable information” may carry significant operational implications. In this context, legal drafting becomes not merely a formality, but a frontline mechanism for shaping the permissible contours of algorithmic power.
Whether the amended language ultimately satisfies critics, reassures customers, or alters the trajectory of AI-defense collaboration remains to be seen. What is clear is that the intersection of artificial intelligence, constitutional protections, national security imperatives, and public trust will continue to generate friction—and that future contracts may become increasingly detailed, technical, and publicly scrutinized as these debates intensify.