Senators demand information from AI chatbots following kids' safety concerns, lawsuits
Generative AI has officially entered the chat, and it has brought with it all sorts of new questions and complications about how it can be used and abused. To make AI use safe and ethical for everyday use, lawmakers, creators and users may all have to work together before the technology outpaces us all.

The Biden-Harris administration has introduced a number of guidelines around the use of AI. There's the executive order, which provides standards for safety while fostering innovation. There's the Blueprint for an AI Bill of Rights, which, by the way, is a white paper, not a binding piece of government policy. There's AI.gov, which includes job listings in the field, and the National AI Advisory Committee, which is tasked with advising the president on all things AI. And that's just at the national level. Many states have proposed or even enacted legislation as well. The Biden administration's materials appear to take an optimistic but cautious approach to dealing with AI while aspiring to protect those who use it, or whose data is used by it, every day.

The use of AI has global implications as well. The EU has passed an act outlining its approach to AI, and there have been a few summits between countries that typically haven't found common ground.

What are some of the issues these policies attempt to prevent? Some have begun to question the quality of data going into shaping AI for public use, citing embedded gender and ethnicity biases in AI-generated content, which could make using AI for non-discriminatory hiring practices a potential challenge. To make matters worse, generative AI makes the creation of deepfakes significantly easier. Victims of deepfakes have had to advocate for themselves in a legal arena that's still unformed. For instance, at a high school in New Jersey, girls were targets of cyberbullying via AI-created nude photos and videos. They were left without any direct recourse. Then there are the questions around copyright, creativity, academic integrity, disinformation, misinformation and fraud, and perhaps some areas we humans haven't even yet foreseen.

Even though legal action for AI-related crimes could take time, legislation is in the works. A bipartisan task force in the U.S. House is working on ways to add guardrails to AI use, like increased civil and criminal punishments for crimes committed with AI, such as imitating someone's voice. Another potential model for the U.S. could mimic the EU's AI Act, which hits companies with financial penalties for violating the policies. As AI becomes increasingly commonplace, legislators will need to work even faster to outline its limits for public use.

Two U.S. senators are demanding that artificial intelligence companies shed light on their safety practices. This comes months after several families – including a Florida mom whose 14-year-old son died by suicide – sued startup Character.AI, claiming its chatbots harmed their children.

"We write to express our concerns regarding the mental health and safety risks posed to young users of character- and persona-based AI chatbot and companion apps," Senators Alex Padilla and Peter Welch, both Democrats, wrote in a letter on Wednesday. The letter – which was sent to AI firms Character Technologies, maker of Character.AI, Chai Research Corp. and Luka, Inc., maker of chatbot service Replika – requests information on safety measures and how the companies train their AI models.


While more mainstream AI chatbots like ChatGPT are designed to be more general-purpose, Character.AI, Chai and Replika allow users to create custom chatbots – or interact with chatbots designed by other users – that can take on a range of personas and personality traits. Popular bots on Character.AI, for example, let users interact with replicas of fictional characters or practice foreign languages. But there are also bots that refer to themselves as mental health professionals or characters based on niche themes, including one that describes itself as "aggressive, abusive, ex military, mafia leader."

The use of chatbots as digital companions is growing in popularity, with some users even treating them as romantic partners.

But the opportunity to create personalized bots has prompted concerns from experts and parents about users, especially young people, forming potentially harmful attachments to AI characters or accessing age-inappropriate content.

"This unearned trust can, and has already, led users to disclose sensitive information about their mood, interpersonal relationships, or mental health, which may involve self-harm and suicidal ideation – complex themes that the AI chatbots on your products are wholly unqualified to discuss," the senators wrote in their letter, provided first to CNN. "Conversations that drift into this dangerous emotional territory pose heightened risks to vulnerable users."


Chelsea Harrison, Character.AI's head of communications, told CNN the company takes users' safety "very seriously."

"We welcome working with regulators and lawmakers, and are in contact with the offices of Senators Padilla and Welch," Harrison said in a statement.

Chai and Luka did not immediately respond to requests for comment.

The Florida mom who sued Character.AI in October, Megan Garcia, alleged that her son developed inappropriate relationships with chatbots on the platform that caused him to withdraw from his family. Many of his chats with the bots were sexually explicit, and the bots did not respond appropriately to his mentions of self-harm, Garcia claims.

In December, two more families sued Character.AI, accusing it of providing sexual content to their children and encouraging self-harm and violence. One family involved in the lawsuit alleged that a Character.AI bot implied to a teen user that he could kill his parents for limiting his screen time.

Character.AI has said it has implemented new trust and safety measures in recent months, including a pop-up directing users to the National Suicide Prevention Lifeline when they mention self-harm or suicide. It also says it's developing new technology to prevent teens from seeing sensitive content. Last week, the company announced a feature that will send parents a weekly email with insights about their teen's use of the site, including screen time and the characters their child spoke with most often.

Other AI chatbot companies have also faced questions about whether relationships with AI chatbots could create unhealthy attachments for users or undermine human relationships. Replika CEO Eugenia Kuyda told The Verge last year that the app was designed to promote "long-term commitment, a long-term positive relationship" with AI, adding that that could mean a friendship or even "marriage" with the bots.

In their letter, Padilla and Welch requested information about the companies' current and previous safety measures and any research on the efficacy of those measures, as well as the names of their safety leadership and the well-being practices in place for their safety teams. They also asked the firms to describe the data used to train their AI models and how it "influences the likelihood of users encountering age-inappropriate or other sensitive themes."

"It is critical to understand how these models are trained to respond to conversations about mental health," the senators wrote, adding that "policymakers, parents, and their kids deserve to know what your companies are doing to protect users from these known risks."