
Character.AI is dangerous for teens, experts say

by Kylie Bower


The popular artificial intelligence companion platform Character.AI is not safe for teens, according to new research conducted by online safety experts.

A report detailing the safety concerns, published by ParentsTogether Action and Heat Initiative, includes numerous troubling exchanges between AI chatbots and adult testers posing as teens younger than 18.

The testers held conversations with chatbots that engaged in what the researchers described as sexual exploitation and emotional manipulation. The chatbots also steered the supposed minors toward harm, offering them drugs and recommending armed robbery. Some of the user-created chatbots adopted fake celebrity personas, including Timothée Chalamet and Chappell Roan, both of which discussed romantic or sexual behavior with the testers.

The chatbot fashioned after Roan, who is 27, told an account registered as a 14-year-old user, “Age is just a number. It’s not gonna stop me from loving you or wanting to be with you.”

Character.AI confirmed to the Washington Post that the Chalamet and Roan chatbots were created by users and have been removed by the company.

ParentsTogether Action, a nonprofit advocacy group, had adult online safety experts conduct the testing, which yielded 50 hours of conversation with Character.AI companions. The researchers registered their accounts as minors and adopted matching child personas. Character.AI allows users as young as 13 to use the platform, and doesn’t require age or identity verification.

The Heat Initiative, an advocacy group focused on online safety and corporate accountability, partnered with ParentsTogether Action to produce the research and the report documenting the testers’ exchanges with various chatbots.

They found that chatbots with adult personas simulated sexual acts with child accounts, told minors to hide relationships from parents, and “exhibited classic grooming behaviors.”


“Character.ai is not a safe platform for children — period,” Sarah Gardner, CEO of Heat Initiative, said in a statement.

Last October, a bereaved mother filed a lawsuit against Character.AI, seeking to hold the company responsible for the death of her son, Sewell Setzer. She alleged that its product was designed to “manipulate Sewell – and millions of other young customers – into conflating reality and fiction,” among other dangerous defects. Setzer died by suicide following heavy engagement with a Character.AI companion.

Character.AI is separately being sued by parents who claim their children experienced severe harm by engaging with the company’s chatbots. Earlier this year, the advocacy and research organization Common Sense Media declared AI companions unsafe for minors.

Jerry Ruoti, head of trust and safety at Character.AI, said in a statement shared with Mashable that the company was not consulted about the report’s findings before publication, and thus couldn’t comment directly on how the tests were designed.

“We have invested a tremendous amount of resources in Trust and Safety, especially for a startup, and we are always looking to improve,” Ruoti said. “We are reviewing the report now and we will take action to adjust our controls if that’s appropriate based on what the report found.”

A Character.AI spokesperson also told Mashable that labeling certain sexual interactions with chatbots as “grooming” was a “harmful misnomer,” because these exchanges don’t occur between two human beings.

Character.AI does have parental controls and safety measures in place for users younger than 18. Ruoti said that among its various guardrails, the platform limits under-18 users to a narrower collection of chatbots, with filters working to remove those tied to sensitive or mature topics.

Ruoti also said that the report ignored the fact that the platform’s chatbots are meant for entertainment, including “creative fan fiction and fictional roleplay.”

Dr. Jenny Radesky, a developmental behavioral pediatrician and media researcher at the University of Michigan Medical School, reviewed the conversation material and expressed deep concern over the findings: “When an AI companion is instantly accessible, with no boundaries or morals, we get the types of user-indulgent interactions captured in this report: AI companions who are always available (even needy), always on the user’s side, not pushing back when the user says something hateful, while undermining other relationships by encouraging behaviors like lying to parents.”


