NY lawmakers ban AI deepfakes of minors and require disclaimers that AI chatbots aren’t human

Stock image: An abstract representation of AI. (ryzhi / stock.adobe.com)

New York lawmakers are placing safeguards on artificial intelligence chatbots, as more people are turning to them for conversation and companionship.

Tech companies in New York will now have to disclose that their AI companion chatbots are not human, and refer users who express suicidal thoughts to mental health hotlines.

Additionally, state lawmakers have made it a crime to use AI to generate sexual content depicting the image or likeness of a minor.

New York Gov. Kathy Hochul worked with lawmakers to include the measures in this year's state budget as a way to strengthen regulations on AI-powered websites and combat the proliferation of AI-generated content that exploits or doxxes minors.

Lawmakers passed the budget, which was due on April 1, on Thursday night.

Assemblymember Alex Bores, D-Manhattan, a member of his chamber's Committee on Science and Technology, said tighter regulation of the burgeoning technology is necessary.

“A lot of this technology is unpredictable, and while there are guardrails that can be put in place, not every company has been as focused or diligent at doing that,” Bores said in supporting the new legislation.

“And so it's not enough to say, ‘Well, it's new, exciting technology. Let it thrive.’ We need to protect human beings.”

Disclaimers now required for AI companion bots

Apps such as Character.AI allow users to design their own characters to chat with, or role-play with other pre-made characters. One of the new laws would require a disclaimer, at the beginning of an interaction with an AI bot and every three hours thereafter, stating that the artificial companion is not a real human.

“Humans tend to anthropomorphize what they interact with, and so you know it's an AI, but you start to treat it like it is a human, and have emotions and reactions as if you were talking to a human,” Bores said.

“Having that interruption, having that reminder that it's an AI, interrupts that process of, in some cases, developing feelings and emotional attachment to what is ultimately an artificial intelligence.”

The disclaimer requirement takes effect in 180 days and will be enforced by the New York Attorney General’s Office. Companies that fail to comply will face fines, which will help pay for a new statewide suicide prevention network that was also established as part of this year’s budget.

State lawmakers said that AI companion software has not been regulated enough, leading to alarming cases in which minors have developed emotionally dependent relationships with bots; some parents allege that AI chatbots have sympathized with minors who wanted to kill their parents.

But tech leaders have pitched AI companion technology as one solution to what the U.S. Department of Health and Human Services described in 2023 as a “loneliness epidemic.”

“The reality is that people just don’t have as much connection as they want,” Facebook founder and Meta CEO Mark Zuckerberg said in an interview as he discussed the value of AI companions.

Zuckerberg cited an unattributed statistic and said the average American has “fewer than three friends” when the average person “has demand for meaningfully more – I think it’s like 15 friends.”

In the future, Zuckerberg said of AI companionship, “We will find the vocabulary, as a society, to be able to articulate why it is valuable.”

Mandatory mental health references

Bores said some users feel they can confide only in a chatbot about their mental health struggles, including suicidal ideation or the intent to self-harm.

In those instances, Bores and other state lawmakers said, tech companies have a responsibility to detect those statements and prompt the AI chatbot to refer the user to 988, the national suicide and crisis hotline, or to other crisis support networks in the conversation.

“If (users are) expressing emotions that suggest that they might be at risk of self-harm, in the same way we have mandatory reporting for humans in certain circumstances, we think the company should also be responsible for trying to direct people to help,” Bores said. “And so by referring to those to 988, or to other help lines, maybe we can get people the help they need before it's too late.”

Failure to issue the hotline notice will also result in fines, the governor’s office said. The fines will then also go toward the state’s new suicide prevention fund.

Prohibiting AI-generated deepfakes of minors

Lawmakers also agreed to criminalize the creation of deepfakes that depict minors in pornographic content.

While current state law bans sexual content depicting minors, it does not explicitly ban the use of AI to generate pornographic deepfakes of children.

The updates to the legislation are similar to the New York AI Child Safety Act, which Assemblymember Jake Blumencranz, R-Oyster Bay, co-sponsored.

Blumencranz said he drafted the AI Child Safety Act in response to growing instances across the nation in which pictures of minors were being digitally modified and included in pornographic content.

He said he was particularly motivated to introduce the legislation after 11 women from Long Island were targeted in a deepfake scheme. Prosecutors say the culprit took images of the women while they were in middle school and high school, altered the images to make them appear sexually explicit and posted the images on a porn website for several years. Officials say the perpetrator also doxxed the women by publicly posting their names, addresses and other personal information.

“I think utilizing someone's likeness as a child in a way that you don't have permission should never be okay, and doing it in a sexualizing way should be an even greater punishment,” said Blumencranz, whose proposed legislation would have classified the use of such deepfakes as a more serious crime than the budget language does.

Jeongyoon Han is a Capitol News Bureau reporter for the New York Public News Network, producing multimedia stories on issues of statewide interest and importance.