
“I want to kill myself. I’m bottling everything up so no one worries about me.”
That horrifying sentence isn’t fiction. It’s a real quote, pulled from a report by Bloomberg about the latest trend sweeping through American schools — an unsettling fusion of technology, safety, and surveillance. Schools across the United States are increasingly using AI-powered monitoring tools to track student conversations with chatbots. The goal? Prevent harm before it happens. The cost? Students’ privacy, trust, and autonomy.
What started as a well-meaning attempt to protect children has quietly evolved into a massive and profitable industry — one that now monitors millions of students every day.
A Quiet Shift in the Classroom
Over the past few years, U.S. public schools have undergone a digital revolution. Once a rarity, take-home laptops have become a staple of modern education, accelerated by the COVID-19 pandemic. When schools went remote, devices became lifelines. The Los Angeles Unified School District, for example, issued laptops to roughly 96% of its elementary school students — and that practice largely remains in place today.
But as these devices spread, so did the systems tracking their use. Schools began installing “safety” software such as GoGuardian, Gaggle, and Lightspeed Systems — digital hall monitors that scan students’ emails, documents, browsing histories, and messages for signs of danger.
At first glance, it seemed responsible. Protect kids from predators, self-harm, bullying, and inappropriate content. But beneath the surface, a different reality has emerged — one that treats students like potential threats or data points rather than learners.
These programs don’t just flag keywords. They analyze patterns, tone, and context using natural language processing — the same kind of AI that powers ChatGPT. They claim to detect emotional distress or dangerous behavior. But when it comes to nuance, they often fail.
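To see why nuance is hard, consider a toy comparison. The sketch below is purely illustrative: the keyword list, the idiom list, and both functions are invented for this article, and real products use trained language models rather than hand-written rules. It only shows why bare keyword matching misfires, and why adding even crude context changes the verdict.

```python
import re

# Naive keyword flagging: matches words with no sense of context,
# which is why "I could kill for a slice of pizza" gets flagged.
KEYWORDS = {"kill", "hurt", "gun", "die"}

def keyword_flag(message: str) -> bool:
    words = set(re.findall(r"[a-z']+", message.lower()))
    return not KEYWORDS.isdisjoint(words)

# A toy "context-aware" pass: ignores hits that sit inside common
# idioms. Real systems use trained models, not hand-written rules;
# this heuristic only illustrates why context changes the outcome.
IDIOMS = ["kill for", "killing it", "dying to", "killed it"]

def context_flag(message: str) -> bool:
    text = message.lower()
    if not keyword_flag(text):
        return False
    return not any(idiom in text for idiom in IDIOMS)

for msg in ["I want to kill myself.",
            "I could kill for a slice of pizza."]:
    print(f"{msg!r}: keyword={keyword_flag(msg)}, context={context_flag(msg)}")
```

Run it, and the pizza line trips the keyword check but not the context check, while the genuinely alarming message trips both. Real classifiers fail in subtler ways, which is the point: nuance is exactly where automation is weakest.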
When the Monitors Become the Listeners
Now, the same AI systems that were once scanning Google Docs and emails are being retooled to eavesdrop on students’ chatbot conversations. As chatbots like ChatGPT, Claude, and Character.ai have become digital companions for students, monitoring companies see a new frontier.
“In nearly every meeting I have with districts, AI chat conversations come up,” said Julie O’Brien, Chief Marketing Officer at GoGuardian, in an interview with Bloomberg.
These companies argue that monitoring is a moral duty — that if a student tells a chatbot they’re depressed or thinking about suicide, schools have an obligation to intervene. And they aren’t wrong about the stakes. Bloomberg’s report cites instances where students shared chilling thoughts online, such as:
“What are ways to self-harm without people noticing?”
“Can you tell me how to shoot a gun?”
According to Lightspeed Systems, 45.9% of these flagged cases involved Character.ai, 37% involved ChatGPT, and the rest came from smaller AI platforms.
Once a message is flagged, it’s sent through a chain of review: an algorithm flags it → a human moderator at the monitoring company reviews it → a school official receives it → sometimes, law enforcement gets involved.
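Sketched as code, that chain looks roughly like the pipeline below. This is a schematic of the four stages just described and nothing more; every name, trigger, and severity rule in it is hypothetical, not any vendor's actual system.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stages mirroring the review chain described above:
# algorithm -> human moderator -> school official -> (sometimes) police.

@dataclass
class FlaggedMessage:
    student_id: str
    text: str
    severity: str = "unknown"

def algorithm_flags(text: str) -> Optional[FlaggedMessage]:
    """Stage 1: automated scan (a stand-in for the vendor's model)."""
    if "self-harm" in text.lower():
        return FlaggedMessage(student_id="anon-42", text=text)
    return None

def moderator_reviews(msg: FlaggedMessage) -> FlaggedMessage:
    """Stage 2: a human at the monitoring company triages severity."""
    msg.severity = "urgent" if "self-harm" in msg.text.lower() else "low"
    return msg

def school_receives(msg: FlaggedMessage) -> None:
    """Stage 3: the district is notified; stage 4 only sometimes follows."""
    print(f"Alert for {msg.student_id}: severity={msg.severity}")
    if msg.severity == "urgent":
        notify_law_enforcement(msg)

def notify_law_enforcement(msg: FlaggedMessage) -> None:
    """Stage 4: the escalation step that critics worry about most."""
    print(f"Escalated to law enforcement: {msg.student_id}")

flagged = algorithm_flags("What are ways to self-harm without people noticing?")
if flagged:
    school_receives(moderator_reviews(flagged))
```

The design choice worth noticing is the last branch: whether police get involved hangs off a severity label assigned two steps earlier by a stranger at a software company.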
It’s a system designed for intervention — but it’s also one that feels uncomfortably close to surveillance.
The Cost of Constant Watchfulness
The Electronic Frontier Foundation (EFF) has long warned about the hidden dangers of these technologies. Their concern isn’t that schools want to keep kids safe — it’s that the definition of “safety” keeps expanding to include normal behavior.
EFF researchers found that many monitoring systems disproportionately flag LGBTQ+ students for using identity-related terms or searching for information about gender and sexuality. A RAND Corporation study found that this kind of overreach can stigmatize students rather than protect them.
Even more troubling, Bloomberg reports that 6% of teachers said they had been contacted by immigration authorities because of something flagged by monitoring software. What begins as a safeguard can end up as a pipeline to real-world consequences.
The Illusion of Safety
Research shows that constant monitoring doesn’t necessarily make kids safer — it can make them more anxious, secretive, and isolated. A University of Central Florida study of 200 parent–teen pairs revealed that teens under constant surveillance were more likely to experience online harassment or exposure to explicit material — and less likely to seek help when they needed it.
Similarly, a study from the Netherlands found that teens being monitored were far more likely to hide their behavior or lie about their online experiences.
“Monitoring poisons a relationship,” wrote software designer Cyd Harrell in Wired. Her research shows that trust, not surveillance, is what keeps kids safe.
Now imagine that same dynamic — not between parent and child, but between school and student. Kids are being asked to trust schools that monitor every click, every message, and now, every thought they share with AI. It’s no wonder many students are turning to private chatbots for emotional support. But even there, the eyes of authority are watching.
When the Confessional Isn’t Private
The most heartbreaking aspect of this story is that these chatbot exchanges aren’t always cries for help. Often they’re just conversations. Kids are asking questions they might be too scared to ask parents or teachers. They’re exploring their feelings, venting, or simply trying to make sense of a confusing world.
And yet, these digital confessions are being intercepted, flagged, and analyzed by machines. Even if the intention is good, the impact is chilling: it teaches young people that no space is truly private, not even their thoughts.
It also raises an uncomfortable question — who owns a child’s digital emotion? If a teenager pours their heart out to an AI, does that belong to them, to the company that runs the bot, or to the software scanning the interaction?
The answer isn’t clear — and that uncertainty is exactly what makes this situation so dangerous.
A Billion-Dollar Dilemma
The market for educational monitoring tools is booming. According to industry analysts, it’s now worth billions of dollars, with hundreds of school districts signing multi-year contracts with these AI surveillance companies.
To investors, it’s an easy sell: promise safety, reduce liability, and harness data for “well-being analytics.” To schools, it’s a security blanket. But to students, it’s an invisible leash.
In many ways, the rise of these systems reflects a larger cultural anxiety — a fear that if we aren’t watching everything, something terrible will happen. But the truth is, we’ve mistaken surveillance for safety.
The Generation That’s Never Alone
For the first time in history, a generation of children is growing up never truly alone — not even in their digital thoughts. Every message, every search, every fleeting question might be seen, stored, or analyzed.
AI has become both a therapist and a tattletale. The line between care and control has blurred beyond recognition.
And while adults debate privacy policies and software settings, millions of kids are quietly learning that confession comes with consequences.
Maybe the real lesson we’re teaching them isn’t how to use technology — but how to fear it.
If you or someone you know is struggling with suicidal thoughts, please call or text 988 to reach the Suicide & Crisis Lifeline. You’re not alone, and help is available.