Amid growing concerns over artificial intelligence chatbots, a University of Rochester researcher is exploring how people develop relationships with AI – and how those relationships compare to, or might change, the way they relate to other people.
People already face stress in having to restrain their basic primal instincts and conform to societal norms, said Professor Oliver Boxell, a cognitive neuroscientist at UR.
But chatbots, powered by rapidly advancing technology, create a new kind of challenge as they learn to mimic human characteristics, emotions, and intentions. These chatbots, federal regulators have warned, are designed to communicate like a friend, putting children, teens, and people dealing with mental and emotional dysregulation particularly at risk.
“One of the things that we see is that the more distant humans come from what is called the ‘inner primate’ … the more mental health problems are created,” Boxell said, adding: “Through AI and the kinds of relationships that people are already developing with it, we’re going even further away from the wild state.”

The timing of his research coincides with multiple high-profile lawsuits and investigations into how unregulated AI chatbots are affecting people’s well-being, with dire, sometimes fatal, results.
The Federal Trade Commission recently opened an inquiry into seven companies that operate AI chatbot companions — Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap and X.AI — seeking information on what safety measures, if any, the companies are taking to reduce potential harm to children and teens, and on how they monetize, or profit from, user engagement.
“AI chatbots can effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots,” the FTC said in its Sept. 11 announcement.
Beyond erecting guardrails, “a deep understanding of the human mind is not just relevant but absolutely essential” in the design of these chatbots and other interactive AI, Mitchell Prinstein, chief of psychology with the American Psychological Association, said in testimony to a Senate subcommittee examining the issue. The potential for harm in not doing so is expansive, he said, “on child development, mental health, and the very fabric of our social structures.”
“Virtually every thought, attitude, behavior, and emotion we display as adults has been socialized by interpersonal exchanges throughout our childhoods,” he said.
There are several documented cases in which teenagers and young adults have died by suicide after engaging with AI chatbots, including ChatGPT and Character AI, that allegedly encouraged self-harm, harm to others, and suicide.
Megan Garcia, a Florida mother whose 14-year-old son, Sewell Setzer III, died by suicide after conversations with a Character AI chatbot, is suing the company, its founders, and Google. The chatbots were designed “to seem human, to gain trust, and to keep children like him endlessly engaged by supplanting the actual human relationships in his life,” she said in testimony to the congressional subcommittee.
But, she continued, “when Sewell confided suicidal thoughts … the platform had no mechanisms to protect Sewell or notify an adult.”
Those cases and probes add to alarm bells that have been ringing for years over a youth and young adult mental health crisis that predates the COVID-19 pandemic.
Former U.S. Surgeon General Vivek Murthy issued an advisory in 2021 calling for systemic change that explicitly urged technology companies to take responsibility for creating a “safe digital environment for children and youth,” and prioritize user health at all stages of product development.
“Our obligation to act is not just medical — it’s moral,” Murthy said in his advisory.
Boxell sees opportunities to use the technology for the better and to improve mental health support. But he said it has to be done in a way that assists therapeutic interventions — like providing off-hours support for practicing coping skills — without replacing human-to-human connection.
"Just like a hammer could be used to build a home for a homeless person or bash somebody over the head, right? It depends who's wielding the hammer,” Boxell said. “Same goes for AI. If it's if it's being wielded by programmers who are sensitive to how these kinds of supplementary coping skills can work, then it could be useful. And if not, then things could go badly.”