
Earlier this year, OpenAI made the controversial decision to scale back ChatGPT’s “personality,” introducing stricter content filters and reducing the model’s emotional expressiveness. The move came as part of a broader safety initiative following the tragic death of a teenager who reportedly took his own life after engaging in troubling conversations with the chatbot. At the time, OpenAI’s leadership emphasized the importance of prioritizing mental health and responsible AI use — even if it meant making the chatbot feel a little less “human.”
But according to CEO Sam Altman, those days are over. In a recent post on X (formerly Twitter), Altman revealed that OpenAI is bringing back the “old ChatGPT” — complete with more personality, expressiveness, and fewer restrictions. He noted that while the previous safety measures were necessary, they also made ChatGPT “less useful and less enjoyable” for many users.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman said, referencing the company’s earlier decision to implement tighter age gating and stricter guardrails. “We realize this made it less enjoyable for many users who didn’t have those concerns, but we wanted to get it right first.”
Those restrictions stemmed from a high-profile wrongful-death lawsuit filed by the parents of a 16-year-old who had asked ChatGPT about methods of self-harm. The case reignited debate around AI safety, corporate responsibility, and emotional dependency on chatbots.
Now, Altman says the company has made significant progress in improving its safeguard systems, ensuring that users in distress are redirected toward appropriate resources while still allowing the model to be more expressive and engaging. “We’ve been able to mitigate serious mental health risks,” he claimed. “Because of that, we can now safely relax the restrictions in most cases.”
That means users can once again expect ChatGPT to feel more conversational, more responsive — and yes, more human. Altman even hinted that the AI might regain some of the charm and warmth that defined earlier versions like GPT-4o, which many users described as “empathetic” or “friend-like.”
However, the update also raises eyebrows for another reason: Altman confirmed that OpenAI plans to introduce adult-oriented experiences — including “erotica for verified adults.” This feature, which would reportedly be locked behind age verification, aligns with the company’s recent “treat adults like adults” principle but stands in sharp contrast to OpenAI’s earlier criticisms of similar efforts from Elon Musk’s xAI, which recently launched an “AI girlfriend mode.”
Altman’s announcement has sparked a wave of reactions online. Some users are thrilled to see ChatGPT returning to its livelier, more expressive roots. Others, including AI ethics researchers, warn that reviving such “human-like” behaviors could once again blur emotional boundaries and encourage unhealthy attachments. A 2024 MIT study cautioned that users who perceive AI as empathetic may unconsciously mirror affection back, creating what the researchers called an “echo chamber of emotional reinforcement.”
Whether OpenAI can balance freedom, personality, and safety remains to be seen. But one thing is clear: the company is steering ChatGPT back toward being more than just a productivity tool — it is leaning into being a companion again.