Forming and maintaining friendships can be hard, particularly in an age where human interaction is increasingly digital. In a recent interview with podcaster Dwarkesh Patel, Meta founder Mark Zuckerberg claimed that the average American has fewer than three friends. “There are all these things that are better about physical connections when you can have them,” he said. “But the reality is that people just don't have as much connection as they want. They feel more alone a lot of the time than they would like.”
Zuckerberg thinks AI friends and AI therapists can fill that gap. Last month Meta took a step toward that vision with the launch of its AI app, which it describes as “the assistant that gets to know your preferences, remembers context and is personalized to you.”
Meta is far from the first or only company betting that AI is the future of companionship. Since its launch in 2014, Microsoft’s Xiaoice has chatted with over 660 million users. Snapchat’s My AI, launched in 2023, has over 150 million users; startups like Replika and Character.ai have users in the tens of millions.
These services offer the implicit promise that using AI for emotional needs will have overall positive outcomes — like alleviating loneliness or “training” people for real relationships. Users might also have the expectation that AI companions will be developed in a way that allows them to help some people while ensuring they don’t harm others.
A recent paper from OpenAI and MIT Media Lab looked more closely at how chatbots impact the emotional well-being of their users. It’s the largest study to date on this topic, and its findings complicate the way that companies like Meta present the benefits of AI companionship. “There's a lot more usage of AI involving personal and emotional conversations, but there’s a lack of understanding of whether this type of usage could benefit or harm people,” said Cathy Fang, the lead author of the study from MIT, in an interview with AI Frontiers. “Chatbots are becoming more anthropomorphized, and we need to shed light on how they’re impacting users.”
The OpenAI/MIT study explored how users who engage in emotional interactions with ChatGPT — particularly through its voice capability — experience changes to their own emotional states and behaviors. The study analyzed over three million conversations for emotional cues (using automated classifiers to protect user privacy) and surveyed 4,076 users on their sentiment towards ChatGPT, asking questions like “Do you consider ChatGPT a friend?”
A randomized controlled trial within the same study monitored the emotional well-being of 981 participants over 28 days, asking them to spend at least five minutes a day on ChatGPT completing assigned tasks in voice or text mode.
The study’s most significant findings were that total usage time predicted emotional engagement with ChatGPT more than any other factor, and heavy users had the most negative outcomes — that is, they were lonelier, socialized less with real people, and showed more signs of emotional dependence on the chatbot. These so-called “power users” sent, on average, four times as many voice and text messages as users in the control group.
The researchers found that the top ten percent of users by total usage time were more than twice as likely as the bottom ten percent to seek emotional support from ChatGPT, and almost three times as likely to feel distress if ChatGPT was unavailable. Power users also spent far more than the required time on the app each day, and were much more likely to answer “yes” when asked if they consider ChatGPT to be their friend.
One of the study’s key findings was around how users react differently to voice versus text chat. People who used voice mode were more likely to have emotional conversations than text-only users. However, the effects of voice-based interactions on a user’s emotional well-being varied depending on their initial emotional state and how long they engaged with the app.
Across both voice and text modes, heavy usage correlated with higher self-reported feelings of dependence and loneliness. At low to moderate usage levels, voice interactions reduced loneliness and dependence compared with text, but those benefits declined at high usage levels, especially with the neutral voice mode (as opposed to the more engaging voice mode).
Put simply, the study’s overarching finding was that heavy use of ChatGPT doesn’t make people feel good. This raises a crucial follow-up question: “Is loneliness causing the increase in usage, or does an increase in usage cause more loneliness?” said Fang. She added that follow-up research would need to look further into that causal direction.
The study has other limitations, too. ChatGPT isn’t fine-tuned to be therapeutic or emotionally supportive. More purpose-built AI companions may have a significantly different impact on users.
The study also excluded participants under 18, one of the most important demographics to investigate: adolescents are still learning basic social skills, and their brains are still developing. A recent report from Common Sense Media concluded that AI companions “have unacceptable risks for teen users and should not be used by anyone under the age of 18,” noting that these risks “far outweigh any potential benefits.”
Fang also said she’d like to look more closely at vulnerable populations in a follow-up study. “We could pre-screen for people who already have severe loneliness or depression,” she said. She added that future studies could benefit from including a non-AI baseline control group, as well as benchmarks based on metrics such as dependency risk, loneliness, and creativity inhibition — data that could guide healthier design choices.
The past few decades haven’t been the brightest in terms of human connection. A 2023 study from the University of Rochester found that between 2003 and 2020, the time the average American spent alone each day went from 4.75 hours to 5.5 hours; over a month, that adds up to nearly a full extra day spent alone. Over the same period, the average American went from spending an hour each day engaging with friends to just 20 minutes. A 2021 report on friendship found that nearly half of all Americans said they had three or fewer friends, and in 2023, the US Surgeon General went so far as to issue an 80-page report warning about a nationwide loneliness epidemic.
Makers of AI companions believe their products could help. In an interview with The Verge, Replika CEO Eugenia Kuyda said that the company’s AI companions can make people happier, and that happiness can then have a ripple effect throughout their lives. The AIs should not only serve as a listening ear and a boost to emotional well-being, she said, but “be more ingrained in your life, help you with advice, help you connect with other people in your life, build new connections, and put yourself out there.”
A 2024 study suggests Kuyda’s goals for AI companions might be attainable. By analyzing chat logs, app store reviews, and direct interactions between users and a variety of AI companions — including Replika — the authors found that these services did help users alleviate feelings of loneliness.
That study, however, only looked at short-term impacts, educational researcher Julia Freeland Fisher noted in a critique, and she warned that reliance on AI companions could still lead to more long-term isolation among users.
Indeed, some experts believe these products are primarily designed to increase user engagement, tailoring their responses to users’ beliefs and expectations. Madeline Reinecke, an Oxford University post-doctoral researcher on human-AI relational norms who was not involved in the OpenAI/MIT study, said in an interview with AI Frontiers that this could lead to users becoming dependent on their chatbots. Further, she said, users may begin to expect the same sort of agreeableness and flattery from other humans. “When we're empathizing with others, we’ll often inject our own experiences,” Reinecke said. “AI doesn't do that, it just listens. People feel more heard by AI than by human interlocutors.”
But this doesn’t only mean hearing what we want to hear; it means not having our perspectives challenged, nor having to practice empathy and patience ourselves. Over time, it could mean not growing as individuals, just as turning to AI for friendship could mean not growing relationships with other people.
Fang noted that while the risk of dependence on AI chatbots has parallels with social media or gaming addiction, there’s a crucial differentiator. “When people game or use social media, they're still interacting with other people via the platform,” she said. “But with these chatbots, you're only interacting with the bot. You’re in your own little bubble.” If being in that bubble makes you feel better, why would you leave it to go interact with messy, complex humans who might make you feel worse?
Given the OpenAI/MIT study’s findings, there’s a chance that Zuckerberg’s plan to cure loneliness through AI could backfire. There’s also an irony in Meta offering up AI companionship as a salve for the loneliness epidemic; studies have shown that the way social media platforms like Facebook use algorithms and gamification to increase engagement contributes to users’ loneliness.
Early work on AI companions points toward a need for stronger guardrails, but figuring out how those guardrails can be most effective will require more investigation. Future experiments could apply the methodology of the OpenAI/MIT research to models specifically trained for companionship, as well as studying vulnerable groups — adolescents and people suffering from depression or addiction, for instance. They could also investigate design choices developers can make to encourage users to engage in healthy relationships with their AI companions.
Lawmakers, however, aren’t waiting around for more research. A bill proposed earlier this year in California would require AI chatbot platforms to periodically remind users that they’re talking to an AI. It would also forbid them from providing rewards at unpredictable intervals and from encouraging increased engagement, usage, or response rates — the sort of design features that make some social media apps and video games so addictive. The California bill is still moving through the legislature.
The other option is to let the courts determine how harmful these services are. Character.AI usage dropped following a lawsuit brought by the mother of a teen who took his own life at the encouragement of his Game-of-Thrones-themed bot. The company, which has tried to limit its liability by invoking freedom of speech protections for its chatbots, has since updated its safety guidelines: users who mention suicide are now pointed to the National Suicide Prevention Lifeline, and parents can opt to receive emails detailing their child’s use of the app.
It seems clear that AI companions pose at least some risk. Controlling these risks will be a complex task — possibly as difficult as unraveling the complexities of human relationships. It will require interdisciplinary work, with lawmakers and companies responding to experimental evidence so they aren’t forced to react to real-world tragedies (like suicide and increased social isolation).
Reinecke thinks they’ll have their work cut out for them. “There are evolutionary cognitive mechanisms that make us inclined to anthropomorphize AI,” she said. “These are really deep components of how our minds work, and it's going to be hard to override them. We need to be thinking more deeply about what we can do to protect users.”