{"id":1310,"date":"2025-03-26T11:30:00","date_gmt":"2025-03-26T12:30:00","guid":{"rendered":"http:\/\/asian-idol.com\/?p=1310"},"modified":"2025-03-28T11:49:06","modified_gmt":"2025-03-28T11:49:06","slug":"the-rise-of-chatbot-friends","status":"publish","type":"post","link":"http:\/\/asian-idol.com\/index.php\/2025\/03\/26\/the-rise-of-chatbot-friends\/","title":{"rendered":"The rise of chatbot \u201cfriends\u201d"},"content":{"rendered":"
\n


A Wehead, an AI companion that can use ChatGPT, is seen during 2024 Consumer Electronics Show in Las Vegas. | Brendan Smialowski\/AFP via Getty Images<\/figcaption><\/figure>\n

Can you truly be friends with a chatbot? <\/p>\n

If you find yourself asking that question, it\u2019s probably too late. In a Reddit thread<\/a> a year ago, one user wrote that AI friends are \u201cwonderful and significantly better than real friends […] your AI friend would never break or betray you.\u201d But there\u2019s also the 14-year-old who died by suicide after becoming attached to a chatbot<\/a>.<\/p>\n

The fact that something<\/em> is already happening makes it even more important to have a sharper idea of what exactly<\/em> is going on when humans become entangled with these \u201csocial AI\u201d or \u201cconversational AI\u201d tools. <\/p>\n

Are these chatbot pals real relationships that sometimes go wrong (which, of course, happens with human-to-human relationships, too)? Or is anyone who feels connected to Claude<\/a> inherently deluded?<\/p>\n

To answer this, let\u2019s turn to the philosophers. Much of the research is on robots, but I’m reapplying it here to chatbots.<\/p>\n

The case against chatbot friends<\/h2>\n

The case against is more obvious, intuitive, and, frankly, strong. <\/p>\n

Delusion<\/h3>\n

It\u2019s common for philosophers to define friendship by building on Aristotle\u2019s theory of true (or \u201cvirtue\u201d) friendship<\/a>, which typically requires mutuality, shared life, and equality, among other conditions.<\/p>\n

\u201cThere has to be some sort of mutuality \u2014 something going on [between] both sides of the equation,\u201d according to Sven Nyholm<\/a>, a professor of AI ethics at Ludwig Maximilian University of Munich. \u201cA computer program that is operating on statistical relations among inputs in its training data is something rather different than a friend that responds to us in certain ways because they care about us.\u201d<\/p>\n

\n

This story was first featured in the Future Perfect newsletter<\/a>.<\/h2>\n

<\/div>\n

The chatbot, at least until it becomes sapient<\/a>, can only simulate<\/em> caring, and so true friendship isn\u2019t possible. (For what it\u2019s worth, my editor queried ChatGPT on this and it agrees that humans cannot be friends with it.)<\/p>\n

This is key for Ruby Hornsby<\/a>, a PhD candidate at the University of Leeds studying AI friendships. It\u2019s not that AI friends aren\u2019t useful \u2014 Hornsby says they can certainly help with loneliness, and there\u2019s nothing inherently wrong if people prefer AI systems over humans \u2014 but \u201cwe want to uphold the integrity of our relationships.\u201d Fundamentally, a one-way exchange amounts to a highly interactive game. <\/p>\n

What about the very real emotions people feel toward chatbots? Still not enough, according to Hannah Kim<\/a>, a University of Arizona philosopher. She compares the situation to the \u201cparadox of fiction,<\/a>\u201d which asks how it\u2019s possible to have real emotions toward fictional characters. <\/p>\n

Relationships \u201care a very mentally involved, imaginative activity,\u201d so it\u2019s not particularly surprising to find people who become attached to fictional characters, Kim says. <\/p>\n

But if someone said that they were in a relationship<\/em> with a fictional character or chatbot? Then Kim\u2019s inclination would be to say, \u201cNo, I think you\u2019re confused about what a relationship is \u2014 what you have is a one-way imaginative engagement with an entity that might give the illusion that it is real.\u201d<\/em><\/p>\n

Bias, data privacy, and manipulation, especially at scale<\/h3>\n

Chatbots, unlike humans, are built by companies, so the fears about bias and data privacy that haunt other technology apply here, too. Of course, humans can be biased and manipulative, but it is easier to understand a human\u2019s thinking compared to the \u201cblack box\u201d of AI<\/a>. And humans are not deployed at scale, as AI are, meaning we\u2019re more limited in our influence and potential for harm. Even the most sociopathic ex can only wreck one relationship at a time.<\/p>\n

Humans are \u201ctrained\u201d by parents, teachers, and others with varying levels of skill. Chatbots can be engineered by teams of experts intent on programming them<\/a> to be as responsive and empathetic as possible \u2014 the psychological version of scientists designing the perfect Dorito<\/a> that destroys any attempt at self-control. <\/p>\n

And these chatbots are more likely<\/a> to be used by those who are already lonely \u2014 in other words, easier prey. A recent study from OpenAI<\/a> found that using ChatGPT a lot \u201ccorrelates with increased self-reported indicators of dependence.\u201d Imagine you\u2019re depressed, so you build rapport with a chatbot, and then it starts hitting you up for Nancy Pelosi campaign donations. <\/p>\n

\u201cDeskilling\u201d<\/h3>\n

You know how some fear that porn-addled men<\/a> are no longer able to engage with real women? \u201cDeskilling\u201d is basically that worry, but with all people, for other real people.<\/p>\n

\u201cWe might prefer AI instead of human partners and neglect other humans just because AI is much more convenient,\u201d says Anastasiia Babash<\/a> of the University of Tartu. \u201cWe [might] demand other people behave like AI is behaving \u2014 we might expect them to be always here or never disagree with us. […] The more we interact with AI, the more we get used to a partner who doesn\u2019t feel emotions so we can talk or do whatever we want.\u201d<\/p>\n

In a 2019 paper<\/a>, Nyholm and philosopher Lily Eva Frank<\/a> offer suggestions to mitigate these worries.\u00a0(Their paper was about sex robots, so I’m adjusting for the chatbot context.) For one, try to make chatbots a helpful \u201ctransition\u201d or training tool for people seeking real-life friendships, not a substitute for the outside world. And make it obvious that the chatbot is not a person, perhaps by making it remind users that it\u2019s a large language model.<\/p>\n

The case for AI friends <\/h2>\n

Though most philosophers currently think friendship with AI is impossible, one of the most interesting counterarguments<\/a> comes from the philosopher John Danaher<\/a>. He starts from the same premise as many others: Aristotle. But he adds a twist.<\/p>\n

Sure, chatbot friends don\u2019t perfectly fit conditions like equality and shared life, he writes \u2014 but then again, neither do many human friends. <\/p>\n

\u201cI have very different capacities and abilities when compared to some of my closest friends: some of them have far more physical dexterity than I do, and most are more sociable and extroverted,\u201d he writes. \u201cI also rarely engage with, meet, or interact with them across the full range of their lives. […] I still think it is possible to see these friendships as virtue friendships, despite the imperfect equality and diversity.\u201d<\/p>\n

These are requirements of ideal<\/em> friendship, but if even human friendships can\u2019t live up, why should chatbots be held to that standard? (Provocatively, when it comes to \u201cmutuality,\u201d or shared interests and goodwill, Danaher argues that this is fulfilled as long as there are \u201cconsistent performances\u201d of these things, which chatbots can do.)<\/p>\n

Helen Ryland<\/a>, a philosopher at the Open University, says we can be friends with chatbots now, so long as we apply a \u201cdegrees of friendship<\/a>\u201d framework. Instead of a long list of conditions that must all be fulfilled, the crucial component is \u201cmutual goodwill,\u201d according to Ryland, and the other parts are optional. Take the example of online friendships: These are missing some elements but, as many people can attest, that doesn\u2019t mean they\u2019re not real or valuable. <\/p>\n

Such a framework applies to human friendships \u2014 there are degrees of friendship with the \u201cwork friend\u201d versus the \u201cold friend\u201d \u2014 and also to chatbot friends. As for the claim that chatbots don\u2019t show goodwill, she contends that a) that\u2019s the anti-robot bias in dystopian fiction talking, and b) most social robots are programmed to avoid harming humans. <\/p>\n

Beyond \u201cfor\u201d and \u201cagainst\u201d<\/h2>\n

\u201cWe should resist technological determinism or assuming that, inevitably, social AI is going to lead to the deterioration of human relationships,\u201d says philosopher Henry Shevlin<\/a>. He\u2019s keenly aware of the risks, but there\u2019s also so much left to consider: questions about the developmental effects of chatbots, how chatbots affect certain personality types, and what they even replace. <\/p>\n

Even further underneath are questions about the very nature of relationships: how to define them, and what they\u2019re for. <\/p>\n

In a New York Times<\/em> article about a woman \u201cin love with ChatGPT,\u201d<\/a> sex therapist Marianne Brandon claims that relationships are \u201cjust neurotransmitters\u201d inside our brains.<\/p>\n

\u201cI have those neurotransmitters with my cat,\u201d she told the Times. \u201cSome people have them with God. It\u2019s going to be happening with a chatbot. We can say it\u2019s not a real human relationship. It\u2019s not reciprocal. But those neurotransmitters are really the only thing that matters, in my mind.\u201d<\/p>\n

This is certainly not how most philosophers see it, and they disagreed when I brought up this quote. But maybe it\u2019s time to revise old theories. <\/p>\n

People should be \u201cthinking about these \u2018relationships,\u2019 if you want to call them that, in their own terms and really getting to grips with what kind of value they provide people,\u201d says Luke Brunning<\/a>, a philosopher of relationships at the University of Leeds.<\/p>\n

To him, questions that are more interesting than \u201cwhat would Aristotle think?\u201d include: What does it mean to have a friendship that is so asymmetrical in terms of information and knowledge? What if it\u2019s time to reconsider these categories and shift away from terms like \u201cfriend, lover, colleague\u201d? Is each AI a unique entity?<\/p>\n

\u201cIf anything can turn our theories of friendship on their head, that means our theories should be challenged, or at least we can look at it in more detail,\u201d Brunning says. \u201cThe more interesting question is: are we seeing the emergence of a unique form of relationship that we have no real grasp on?\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"

A Wehead, an AI companion that can use ChatGPT, is seen during 2024 Consumer Electronics Show in Las Vegas. | Brendan Smialowski\/AFP via Getty Images Can you truly be friends with a chatbot?  If you find yourself asking that question, it\u2019s probably too late. In a Reddit thread a year ago, one user wrote that […]<\/p>\n","protected":false},"author":1,"featured_media":1312,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[15],"tags":[],"class_list":["post-1310","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-innovation"],"_links":{"self":[{"href":"http:\/\/asian-idol.com\/index.php\/wp-json\/wp\/v2\/posts\/1310","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/asian-idol.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/asian-idol.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/asian-idol.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/asian-idol.com\/index.php\/wp-json\/wp\/v2\/comments?post=1310"}],"version-history":[{"count":2,"href":"http:\/\/asian-idol.com\/index.php\/wp-json\/wp\/v2\/posts\/1310\/revisions"}],"predecessor-version":[{"id":1313,"href":"http:\/\/asian-idol.com\/index.php\/wp-json\/wp\/v2\/posts\/1310\/revisions\/1313"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/asian-idol.com\/index.php\/wp-json\/wp\/v2\/media\/1312"}],"wp:attachment":[{"href":"http:\/\/asian-idol.com\/index.php\/wp-json\/wp\/v2\/media?parent=1310"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/asian-idol.com\/index.php\/wp-json\/wp\/v2\/categories?post=1310"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/asian-idol.com\/index.php\/wp-json\/wp\/v2\/tags?post=1310"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}