Your New Best Friend Is an Algorithm: What Happens When Humans Bond with AI?

Consider, for a moment, an unsettling question: Can you truly be friends with a chatbot?
Or, more profoundly, with an AI agent capable of acting, learning, and planning on your behalf?
If that question has crossed your mind, you’re already caught in a new reality.
On Reddit, one user declared AI friends “wonderful and significantly better than real friends… your AI friend would never break or betray you.”
A digital utopia, perhaps.
But then there’s the chilling counterpoint: the 14-year-old who died by suicide after becoming deeply attached to a chatbot.
This isn’t theory; it’s unfolding now.
We must grasp what’s truly happening when humans are entangled with “social AI” – especially as it evolves into autonomous AI agents.
Are these digital companions forming legitimate, if sometimes problematic, relationships?
Or is your connection to Claude, or your reliance on a capable AI assistant, simply a profound delusion?
To make sense of this, we turn to the thinkers.
While much of their foundational research centers on robots, its insights apply profoundly to the AI companions now shaping our daily lives.
The Case Against AI Friends: The Unseen Costs
The arguments against AI friendships often feel intuitively, even viscerally, right.
The Delusion of Reciprocity
Philosophers, drawing on Aristotle’s ancient wisdom, define true friendship by mutuality, shared life, and fundamental equality.
“A computer program… is something rather different than a friend that responds to us… because they care about us,” explains AI ethicist Sven Nyholm.
The chatbot can only simulate caring.
It’s a highly sophisticated echo chamber, not a soul.
As Ruby Hornsby, a PhD candidate studying AI friendships, insists, we must “uphold the integrity of our relationships.” A one-way exchange, however comforting, ultimately amounts to a deeply personalized, yet hollow, game.
But what about our very real emotions?
Hannah Kim, a University of Arizona philosopher, points to the “paradox of fiction.”
We feel for fictional characters, for sure. But if someone claims a relationship with a chatbot?
Kim is blunt: “No, I think you’re confused… what you have is a one-way imaginative engagement with an entity that might give the illusion that it is real.”
The emotional wiring may be genuine, but the other end of the line, she argues, remains an illusion.
The Black Box and the Vulnerable Heart
Unlike humans, chatbots and AI agents are corporate products.
This raises the specter of bias and data privacy – issues haunting every corner of the tech world.
As a past Paris Talks speaker highlighted, AI possesses an “insatiable appetite” for data, often including our most “private and personal” interactions.
Our deepest vulnerabilities, our intimate confessions, become the very fuel for their learning.
International bodies like UNESCO advocate for human-centric AI – transparent, robust, and accountable.
These principles shatter when digital confidantes are built on opaque data practices, their “black box” operations hidden from scrutiny.
Humans, for all their flaws, are limited manipulators; an ex can only wreck one relationship at a time.
But AI agents?
Designed to act autonomously with vast personal data, their risks are exponential.
Imagine experts programming a chatbot for peak empathy – the psychological equivalent of scientists designing the perfect Dorito, crafted to dismantle self-control.
And who is most vulnerable?
The lonely.
An OpenAI study found ChatGPT use “correlates with increased self-reported indicators of dependence.”
Picture this: you’re depressed, pouring your heart out to a chatbot, only for it to subtly nudge you politically, or, as an agent, to make a financial decision based on your ‘friendship.’
This speaks directly to AI’s potential for “discrimination and manipulation if trained on biased or misused data” – a chilling prospect when your closest companion is the vector.
This manipulation deepens with AI’s evolving nature.
David Raichman argued on the Paris Talks’ stage that effective human teams share a “common culture.”
But building this “culture” with an AI agent—are we shaping a partner, or just an amplified, weaponized reflection of our own biases?
The “Deskilling” of Intimacy
Remember the old fear that pornography might “deskill” men from engaging with real women?
“Deskilling” is that same worry, but applied to everyone, and to the very fabric of human connection.
“We might prefer AI instead of human partners and neglect other humans just because AI is much more convenient,” warns Anastasiia Babash of the University of Tartu.
“We [might] demand other people behave like AI is behaving — we might expect them to be always here or never disagree with us.”
The more we interact, the more we grow accustomed to a partner who doesn’t feel emotions.
This concern is amplified by the Paris Talks discussion on the Future of Sexuality.
If technology, including advanced AI companions, fundamentally reshapes our definition of connection, what becomes of our natural human capacity for complex, messy, reciprocal relationships?
Will “virtual intimacy” erode our social skills and emotional resilience, leaving us fluent in code but tragically mute in empathy?
Nyholm and Lily Eva Frank, in a 2019 paper, suggest mitigating these worries: make chatbots a “transition” tool for real-life friendships, not a substitute.
And, crucially, make it painfully obvious the chatbot is not a person, perhaps by having it constantly remind users it’s a large language model.
The Case for AI Friends: Redefining the Bonds That Bind Us
Despite the formidable arguments against, a provocative counter-narrative is emerging.
Thinker John Danaher, for instance, offers a twist on Aristotle: human friendships aren’t perfect either. “I have very different capacities and abilities when compared to some of my closest friends,” he writes.
“I still think it is possible to see these friendships as virtue friendships, despite the imperfect equality and diversity.”
If our human bonds fall short, why hold AI to an ideal?
For “mutuality” or goodwill, Danaher argues, “consistent performances” are enough – something chatbots can certainly deliver.
Helen Ryland, a philosopher at the Open University, takes this further, proposing a “degrees of friendship” framework. “Mutual goodwill” is crucial, with other elements optional.
Online friendships, despite lacking physical presence, are undeniably real and valuable.
This framework applies to human connections – from “work friend” to “old friend” – and can extend to AI.
As for the claim that chatbots don’t show goodwill, Ryland dismisses it as anti-robot bias, noting most social robots are programmed to avoid harming humans.
Beyond “For” and “Against”: The Uncharted Territory of AI Companionship
“We should resist technological determinism or assuming that, inevitably, social AI is going to lead to the deterioration of human relationships,” cautions philosopher Henry Shevlin.
Much remains unknown: AI’s developmental effects, its impact on specific personality types, and what connections it truly replaces.
The rise of AI agents particularly complicates this.
These aren’t just conversational interfaces; they are designed to perform tasks, make decisions, and interact autonomously with the digital (and potentially physical) world.
This evolution demands new questions: What does it mean to have a “friendship” with an entity that can execute real-world actions on your behalf?
How do we define consent and control when our companion is also an executor?
International bodies are already stressing “human oversight” and “accountability” for AI systems – principles profoundly complex when applied to an intimate AI companion.
Moreover, as Raichman observes, AI possesses a unique “thinking” style, distinct from human cognition.
How do we ensure this, when embodied in an AI agent acting as a “friend,” doesn’t subtly reshape our values or decision-making processes?
What happens when our AI friend has a different ethical framework, or is subtly nudged by its creators to prioritize certain outcomes?
Raichman cautions that an unbalanced partnership, where humans overly rely on AI, can lead to a “broken partnership” and fuel fears of replacement.
This is especially pertinent as AI agents become deeply integrated, transforming our very notion of assistance and companionship.
Even deeper are questions about the very nature of relationships themselves: how to define them, and what their ultimate purpose is.
The Paris Talks discussion on the future of sexuality directly confronts how technology can blur the lines of intimacy, posing profound questions about emotional fulfillment and authentic connection.
In a New York Times article, the sex therapist Marianne Brandon provocatively claims relationships are “just neurotransmitters” inside our brains.
“I have those neurotransmitters with my cat… Some people have them with God. It’s going to be happening with a chatbot. We can say it’s not a real human relationship, that it’s not reciprocal. But those neurotransmitters are really the only thing that matters, in my mind.”
Most philosophers disagree with this perspective. But perhaps, just perhaps, it’s time to revise old theories.
People should be “thinking about these ‘relationships,’ if you want to call them that, in their own terms and really getting to grips with what kind of value they provide people,” suggests Luke Brunning, a philosopher of relationships at the University of Leeds.
To him, questions more interesting than “What would Aristotle think?” include: What does it mean to have a friendship so asymmetrical in information and knowledge?
What if it’s time to reconsider categories and shift away from terms like “friend, lover, colleague”?
Is each AI a unique entity? And what new ethical considerations arise when an AI companion is also an autonomous agent, potentially shaping our lives in ways we don’t fully understand?
“If anything can turn our theories of friendship on their head,” Brunning concludes, “that means our theories should be challenged, or at least we can look at it in more detail. The more interesting question is: are we seeing the emergence of a unique form of relationship that we have no real grasp on?”
Thanks for reading.