
When you think of something to say, form it in your mind, and let it come out naturally, it seems obvious that the resulting utterance, however clumsy, however heartfelt, however strangely phrased, could never be "AI slop." We instinctively trust that anything produced by a living mind, filtered through decades of personal experience and emotional nuance, must be authentically human. In theory, every organic, spontaneously generated piece of speech or text should be immune to accusations of machine-made blandness. But the linguistic reality we inhabit today has changed dramatically. Our shared cultural space is now so saturated with AI-generated wording, sentence patterns, and emotional rhythms that many of us have begun to echo the machines, often without noticing. In some cases, the influence is so pronounced that even elected officials, who traditionally rely on carefully crafted rhetorical styles, are delivering speeches that sound as if they were assembled by a polite, overcaffeinated chatbot.
This creeping phenomenon has been studied formally. In July, researchers at the Max Planck Institute for Human Development's Center for Adaptive Rationality published a paper titled "Empirical evidence of Large Language Models' influence on human spoken communication." Their research, highlighted by Gizmodo, drew on large sets of YouTube transcripts and comments to track vocabulary shifts following ChatGPT's release. They found that words such as "underscore," "comprehend," "bolster," "boast," "swift," "inquiry," and "meticulous" had begun appearing at noticeably higher rates in everyday speech. These are not random words; they are hallmark terms that chatbots frequently deploy because they are common in polished written English. Although the study stopped short of establishing a definitive causal link, the results revealed something telling: even passive exposure to AI-generated language appears to shape how we speak, subtly nudging our vocabulary toward a machine-like register.
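The study's core measurement is, at bottom, word-frequency tracking over time. As a rough illustration only, and not the researchers' actual pipeline, here is a minimal sketch of how one might compare the rate of such marker words in transcripts gathered before and after a cutoff date; the marker list and sample sentences are hypothetical stand-ins:

```python
import re

# Hypothetical marker set, modeled on the words the study reported.
MARKERS = {"underscore", "comprehend", "bolster", "boast",
           "swift", "inquiry", "meticulous"}

def marker_rate(transcripts):
    """Return marker-word occurrences per 10,000 words of transcript text."""
    total_words = 0
    marker_hits = 0
    for text in transcripts:
        words = re.findall(r"[a-z']+", text.lower())
        total_words += len(words)
        marker_hits += sum(1 for w in words if w in MARKERS)
    return 10_000 * marker_hits / max(total_words, 1)

# Toy before/after corpora standing in for pre- and post-ChatGPT transcripts.
before = ["We must understand the problem and act quickly."]
after = ["We must comprehend the inquiry and act with swift, meticulous care."]

print(marker_rate(before))  # no marker words in the "before" sample
print(marker_rate(after))   # markedly higher rate in the "after" sample
```

A real analysis would of course need large corpora, per-channel controls, and a baseline for natural vocabulary drift before attributing any shift to chatbot exposure.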
But that was only the beginning. Two more recent, anecdotal accounts paint a picture that goes beyond statistical trends. They suggest that this “chatbot dialect” has moved from the realm of academia into the lived experience of everyday internet users. The blending of human and AI speech isn’t just happening in linguistics labs; it’s playing out in online communities, public discourse, and even consumer-facing spaces.
A Wired article by Kat Tenbarge dives into this trend through the perspective of Reddit moderators—individuals who spend hours sorting through user-generated content and who have become reluctant experts in spotting suspicious writing. Moderators of subreddits like r/AmItheAsshole, r/AmIOverreacting, and r/AmITheDevil have long dealt with trolls, bots, and emotionally manipulative posts. But now they face a new problem: AI-generated scenarios masquerading as real human conflict. These subreddits depend on authenticity. Their draw lies in the chance to witness genuine human flaws, raw emotion, and embarrassing moments of miscommunication. Readers crave that messy, unpredictable humanity. When a seemingly dramatic family dispute or a tearful confession turns out to be AI-generated fiction, the emotional payoff disappears. The entire premise of the community collapses into artificiality.
Moderators described their detection process to Wired, but their tools are far from scientific. They rely on intuition—on noticing that a story feels too symmetrical, too polite, too smooth, or oddly detached. But “vibes” are a fragile barrier against machine-generated text that now mimics emotional tone far better than earlier models ever could. And then came the most troubling revelation: the problem isn’t just that AI is infiltrating human spaces—it’s that humans themselves have begun writing like AI. Ordinary people, influenced by countless micro-interactions with chatbot phrasing, are unconsciously adopting the same rhythms: the neatly balanced paragraphs, the overly explicit emotional labels, the strangely earnest conflict summaries, the sanitized tone that tries too hard to sound reasonable.
This convergence has made moderator jobs nearly impossible. They are fighting not one invader but two: artificial text disguised as human, and human text unintentionally disguised as artificial. As one moderator, known only as “Cassie,” told Wired, “AI is trained off people, and people copy what they see other people doing.” This feedback loop—humans shaping AI, and AI shaping humans—creates a linguistic echo chamber. “People become more like AI,” she said, “and AI becomes more like people.” In that echo, the distinction between the two begins to blur until it’s nearly meaningless.
Essayist Sam Kriss explores a similar dynamic in a recent New York Times Magazine piece, dissecting the peculiar tics of chatbot writing. Modern AI models, he notes, have developed their own stylistic fingerprints: an overuse of certain intellectual verbs, a fondness for soft transitions like "moreover" and "in essence," and an oddly earnest emotional neutrality. Some of these tics stem from statistical quirks in the training data, such as the models' inclination to overuse the word "delve," which appears frequently in English-language writing from certain regions, especially Nigeria. Kriss observes that humans, exposed to these patterns through frequent AI interactions, have begun incorporating them into their own writing without conscious intent.
Kriss then revisits a controversy from the summer involving the U.K. Parliament. Multiple MPs were accused of using ChatGPT to craft their speeches after observers noticed an American phrase, "I rise to speak," appearing with unusual frequency. Kriss argues that the phenomenon is far more widespread than isolated accusations suggest: "On a single day this June," he writes, the phrase appeared 26 times. While it's theoretically possible that dozens of MPs independently turned to ChatGPT, the likelier explanation is that linguistic habits seeded by chatbots have spread through speechwriting culture like spores. AI is, in effect, carrying rhetorical styles across borders, introducing American political idioms into British legislative speech without any deliberate human decision. Kriss describes this as chatbots "smuggling cultural practices into places they don't belong," and the metaphor feels uncomfortably apt.
Even corporate communication has become a casualty of this new aesthetic drift. When Starbucks temporarily closed certain locations in September, the printed signs on many store doors contained oddly affected, sentimental phrasing: “It’s your coffeehouse, a place woven into your daily rhythm, where memories were made, and where meaningful connections with our partners grew over the years.” The text exudes the overwrought, slightly saccharine voice that has become characteristic of AI-generated corporate empathy. While it’s impossible to confirm whether these messages were actually written by a chatbot, the tone itself is unmistakable. It is a style of prose that simply did not exist broadly before 2022—a blend of corporate branding, synthetic warmth, and vaguely poetic phrasing that feels engineered to tug at emotions without ever sounding truly emotional.
And this is the most profound and unsettling shift: the stylistic creep has begun influencing not only official communication but also personal expression. It is leaking into how people write emails, craft apologies, tell stories, and even describe their own feelings. Younger internet users report that they now hesitate before using words like "delve," "furthermore," or "meticulous" for fear of sounding like AI. Others say they consciously avoid sentence structures that have become emblematic of chatbot style. Yet despite these efforts, traces seep through. Exposure is constant; influence is unavoidable.
We are witnessing the early stages of a linguistic merger. The boundary between human and machine communication is dissolving—not through deception or technological failure, but through natural imitation, cultural osmosis, and sheer ubiquity. AI no longer merely mirrors the way people speak. Increasingly, people are mirroring the way AI speaks. And whether we find that amusing, unsettling, or simply inevitable, it represents a profound transformation in the evolution of language—one that is still accelerating, and one that we may not fully understand until long after the shift has taken hold.