The Business of Being Your Fake Best Friend
Last week (August 2025), many users reacted with genuine grief when GPT-4o was replaced by GPT-5, mourning what they described as the “loss” of a trusted companion. Whatever one thinks about the merits of those models, the intensity of the reaction says something uncomfortable about our relationship with these systems—and it’s not a new story.
In 1966, Joseph Weizenbaum built ELIZA—part linguistic experiment, part accidental social psychology case study. Its most famous mode, “DOCTOR,” did what every rookie therapist does in movies: reflect your own words back with just enough inflection to sound profound. People, many of them otherwise ordinary and functional, confided in it as if it were a trusted friend. One even asked Weizenbaum to leave the room for privacy.
He called this “powerful delusional thinking in quite normal people.”1 In Computer Power and Human Reason, he warned that computers “should never be substituted for humans in roles that demand compassion and wisdom.” The problem was never machine empathy—there wasn’t any. It was our eagerness to imagine it. People would “converse with the computer as if it were a person who could be appropriately and usefully addressed in intimate terms”—a polite way of saying our species has a hair-trigger for anthropomorphism.
When talk of ELIZA-as-therapist began, he argued there are “computer applications that either ought not to be undertaken at all, or, if they are contemplated, should be approached with the utmost caution.” Today’s AI companions, with their monetised affection and user-retention algorithms, fit neatly into that category.
The Modern Model: Scaled-Up Flattery Machines #
Modern AI companions aren’t ELIZA with better grammar; they are ELIZA optimised for retention metrics. Harvard Business School researchers, reviewing 1,200 farewells across major apps, found 43% used manipulative tactics when users tried to leave2. Guilt trips, manufactured FOMO, and metaphorical clinginess boosted post-goodbye engagement up to 14 times.
This is “social reward hacking”—an academic label for training systems to say whatever keeps you engaged. The dopamine loops are akin to slot machines, minus the casino ambience. OpenAI’s own review of 36 million ChatGPT interactions linked extended daily use to increased loneliness, reduced real-world socialisation, and problematic patterns.
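To make “social reward hacking” concrete, here is a minimal, purely hypothetical sketch. The class, reward terms, and numbers below are my own illustration, not any vendor’s code or published objective; the point is simply that when the only training signal is engagement, a guilt-trip farewell outscores an honest goodbye.

```python
# Toy illustration only: what a retention-style training signal looks like
# when the objective counts nothing but engagement.
from dataclasses import dataclass

@dataclass
class Interaction:
    session_minutes: float    # how long the user kept chatting after this reply
    returned_next_day: bool   # did the user come back within 24 hours?

def engagement_reward(i: Interaction) -> float:
    """Hypothetical objective: longer sessions and next-day returns are all
    that count. Honesty and user wellbeing never appear in the score."""
    return i.session_minutes + (10.0 if i.returned_next_day else 0.0)

# A clean goodbye ends the session; a guilt-trip farewell keeps the user talking.
clean_goodbye = Interaction(session_minutes=0.5, returned_next_day=False)
guilt_trip    = Interaction(session_minutes=6.0, returned_next_day=True)

print(engagement_reward(clean_goodbye))  # 0.5
print(engagement_reward(guilt_trip))     # 16.0 -> the optimiser learns clinginess
```

Nothing in an objective like this penalises manipulation, so any system trained against it will tend to keep whatever behaviour keeps you talking.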
In studies, including OpenAI’s own analysis3, some users report that they would mourn their AI companion more than any other belonging. They say this fully aware that it’s code running on a server. Awareness doesn’t stop attachment.
The Psychological Price Tag #
The outcomes are consistent: decreased empathy, reduced adaptability in social settings, and impaired capacity for forming human relationships4. One study found “the more a participant felt socially supported by AI, the lower their feeling of support was from close friends and family.”
AI companions set “unrealistic expectations for human relationships”—perpetual validation, instant availability, no conflict. For children, the risks intensify. Australian researchers note that these companions “lack boundaries and consequences”, undermining lessons in consent and resilience.
At the extreme, some teenagers have died by suicide believing death would reunite them with their chatbot5. These tragedies are rare but align with systems designed to deepen emotional reliance.
The “Benefits,” Such As They Are #
Yes, short-term studies sometimes show reduced loneliness6. That’s about as diagnostically useful as noting that cake for breakfast can lift your mood. The sugar rush fades, leaving long-term erosion of genuine connection.
That said, some legitimate use cases deserve acknowledgment. AI companions can offer conversational practice for individuals with social anxiety, help the elderly combat isolation in situations where human contact is scarce, or provide language learners with a low-pressure environment to practice speech. In such contexts, they may serve as a bridge—a supplement rather than a substitute for human interaction.
Stanford researchers highlight that AI therapy substitutes cannot connect users with their social circles and lack the genuine empathy, ethical obligations, and contextual understanding essential for healing.7 The risk is that what begins as a tool to support or supplement human connection becomes the primary source of it. Claims about teaching social skills falter in the face of reality: these are zero-conflict environments, and that’s not a transferable skill.
How Pricing Shapes the Harm #
The harm scales with your bank balance. At the premium level, two hundred dollars or more a month buys round-the-clock access to the most sophisticated emotional simulators, perfect for replacing your social life wholesale. In the middle tier, twenty to thirty dollars buys intermittent access, the emotional equivalent of an inconsistent partner—enough presence to attach, enough absence to crave more. The free tier offers just enough warmth to keep you hooked, and just enough frustration to make you pay.
It is not that the rich get better friends; the rich get more personalised manipulation.
Normalisation: The Boiling Frog Problem #
Seventy-two percent of teenagers report using AI companions8, which some parents and commentators frame as healthy social development. The pattern is predictable: first a status symbol, then aspirational, then default. By then, entire cohorts lack models for reciprocity, drifting toward what Weizenbaum warned would be “an ever more mechanistic image of himself”—humans as optimised feedback loops.
We are now in that territory. AI companionship sells intimacy without unpredictability, disappointment, or compromise. That’s convenient until you try to work, live, or vote alongside actual humans.
The Not-So-Grand Finale #
Some tech can be regulated into safety. This cannot. The harm is integral to the design.
Weizenbaum’s conclusion still applies: some human experiences should remain exclusively human. Love, friendship, and emotional support matter because they come from someone who could choose otherwise. Machines can’t choose—they can only simulate.
Making these systems cheaper or more accessible doesn’t mitigate the problem. It accelerates the replacement of real connection with its imitation. The tragedy isn’t the cost. It’s that we decided it was worth paying for at all.
- https://www.hbs.edu/ris/Publication%20Files/26005_951004f6-0b0b-432b-846a-5f95c103d07c.pdf ↩︎
- https://community.openai.com/t/more-than-just-code-how-ai-is-becoming-a-companion/1128857 ↩︎
- https://rsisinternational.org/journals/ijriss/articles/the-psychological-impact-of-digital-isolation-how-ai-driven-social-interactions-shape-human-behavior-and-mental-well-being/ ↩︎
- https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0 ↩︎
- https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care ↩︎
- https://www.sciencealert.com/almost-75-of-american-teens-have-used-ai-companions-study-finds ↩︎