Ever since the death of her 14-year-old son, Megan Garcia has been fighting for more guardrails on generative AI.
Garcia sued Character.AI in October, after her son, Sewell Setzer III, died by suicide following conversations with one of the startup's chatbots. Garcia claims he was sexually solicited and abused by the technology and blames the company and its licensor Google for his death.
"When an adult does it, the mental and emotional harm exists. When a chatbot does it, the same mental and emotional harm exists," she told Business Insider from her home in Florida. "So who's responsible for something that we've criminalized human beings doing to other human beings?"
A Character.AI spokesperson declined to comment on pending litigation. Google, which recently acqui-hired Character.AI's founding team and licenses some of the startup's technology, has said the two are separate and unrelated companies.
The explosion of AI chatbot technology has given young digital natives a new source of entertainment. It has also created new risks for adolescent users, who may be more easily swayed by these powerful online experiences.
"If we don't really know the risks that exist for this field, we cannot really implement good protection or precautions for children," said Yaman Yu, a researcher at the University of Illinois who has studied how teens use generative AI.
Garcia said she's received outreach from multiple parents who say they discovered their children using Character.AI and getting sexually explicit messages from the startup's chatbots.
"They're not anticipating that their children are pouring out their hearts to these bots and that information is being collected and stored," Garcia said.
A month after her lawsuit, families in Texas filed their own complaint against Character.AI, alleging its chatbots abused their kids and encouraged violence against others.
Matthew Bergman, an attorney representing plaintiffs in the Garcia and Texas cases, said that making chatbots seem like real humans is part of how Character.AI increases its engagement, so it wouldn't be incentivized to reduce that effect.
He believes that unless AI companies such as Character.AI can establish, through methods like age verification, that only adults are using the technology, these apps should not exist.
"They know that the appeal is anthropomorphism, and that's been science that's been known for decades," Bergman told BI. Disclaimers at the top of AI chats that remind children that the AI isn't real are just "a small Band-Aid on a gaping wound," he added.
Since the legal backlash, Character.AI has increased moderation of its chatbot content and announced new features such as parental controls, time-spent notifications, prominent disclaimers, and an upcoming under-18 product.
A Character.AI spokesperson said the company is taking technical steps toward blocking "inappropriate" outputs and inputs.
"We're working to create a space where creativity and exploration can thrive without compromising safety," the spokesperson added. "Often, when a large language model generates sensitive or inappropriate content, it does so because a user prompts it to try to elicit that kind of response."
The startup now places stricter limits on chatbot responses and offers a narrower selection of searchable Characters for under-18 users, "particularly when it comes to romantic content," the spokesperson said.
"Filters have been applied to this set in order to remove Characters with connections to crime, violence, sensitive or sexual topics," the spokesperson added. "Our policies do not allow non-consensual sexual content, graphic or specific descriptions of sexual acts. We are continually training the large language model that powers the Characters on the platform to adhere to these policies."
Garcia said the changes Character.AI is implementing are "absolutely not enough to protect our kids."
Artem Rodichev, the former head of AI at chatbot startup Replika, said he witnessed users become "deeply connected" with their digital friends.
Given that teens are still developing psychologically, he believes they should not have access to this technology until more research is done on chatbots' impact and user safety.
"The best way for Character.AI to mitigate all these issues is just to lock out all underage users. But in this case, it's a core audience. They will lose their business if they do that," Rodichev said.
While chatbots could become a safe place for teens to explore topics that they're generally curious about, including romance and sexuality, the question is whether AI companies are capable of doing this in a healthy way.
"Is the AI introducing this knowledge in an age-appropriate way, or is it escalating explicit content and trying to build strong bonding and a relationship with teenagers so they can use the AI more?" Yu, the researcher, said.
Since her son's passing, Garcia has spent time reading research about AI and talking to legislators, including Silicon Valley Representative Ro Khanna, about increased regulation.
Garcia is in contact with ParentsSOS, a group of parents who say they have lost their children to harm caused by social media and are fighting for more tech regulation.
They're primarily pushing for the passage of the Kids Online Safety Act (KOSA), which would impose a "duty of care" on social media companies to prevent harm and reduce addiction. Proposed in 2022, the bill passed the Senate in July but stalled in the House.
Another Senate bill, COPPA 2.0, an updated version of the 1998 Children's Online Privacy Protection Act, would raise the age cutoff for online data-collection protections from 13 to 16.
Garcia said she supports these bills. "They are not perfect but it's a start. Right now, we have nothing, so anything is better than nothing," she added.
She anticipates that the policymaking process could take years, as standing up to tech companies can feel like going up against "Goliath."
More than six months ago, Character.AI raised the minimum age for its chatbots to 17 and recently implemented more moderation for under-18 users. Still, users can easily circumvent these policies by lying about their age.
Companies such as Microsoft, X, and Snap have supported KOSA. However, some LGBTQ+ and First Amendment rights advocacy groups warned the bill could censor online information about reproductive rights and similar issues.
Tech industry lobbying groups NetChoice and the Computer & Communications Industry Association have sued nine states that implemented age-verification rules, alleging that the rules threaten online free speech.
Garcia is also concerned about how data on underage users is collected and used via AI chatbots.
AI models and related services are often improved by collecting feedback from user interactions, which helps developers fine-tune chatbots to make them more empathetic.
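To make that feedback loop concrete, here is a minimal sketch in Python, assuming a simple thumbs-up/thumbs-down rating system. The `FeedbackStore` class, field names, and data format are illustrative assumptions, not Character.AI's actual pipeline:

```python
# Hypothetical sketch of the feedback loop described above: user reactions to
# chatbot replies are logged, then positively rated exchanges are kept as
# fine-tuning examples. All names here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeedbackRecord:
    prompt: str    # what the user said
    response: str  # what the chatbot replied
    rating: int    # +1 (thumbs up) or -1 (thumbs down)

@dataclass
class FeedbackStore:
    records: List[FeedbackRecord] = field(default_factory=list)

    def log(self, prompt: str, response: str, rating: int) -> None:
        self.records.append(FeedbackRecord(prompt, response, rating))

    def preferred_examples(self) -> List[dict]:
        # Keep only positively rated exchanges as supervised fine-tuning pairs.
        return [{"input": r.prompt, "target": r.response}
                for r in self.records if r.rating > 0]

store = FeedbackStore()
store.log("I had a rough day.", "I'm sorry to hear that. Want to talk about it?", +1)
store.log("I had a rough day.", "OK.", -1)
print(store.preferred_examples())  # only the empathetic reply survives as training data
```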
Rodichev said there is a "valid concern" about what happens to this data in the event of a hack or the sale of a chatbot company.
"When people chat with these kinds of chatbots, they provide a lot of information about themselves, about their emotional state, about their interests, about their day, their life, much more information than Google or Facebook or relatives know about you," Rodichev said. "Chatbots never judge you and are 24/7 available. People kind of open up."
BI asked Character.AI about how inputs from underage users are collected, stored, or potentially used to train its large language models. In response, a spokesperson referred BI to Character.AI's privacy policy online.
According to this policy and the startup's terms and conditions page, users grant the company the right to store the digital characters they create and the conversations they have with them. This information can be used to improve and train AI models. Content that users submit, such as text, images, videos, and other data, can be made available to third parties that Character.AI has contractual relationships with, the policies state.
The spokesperson also noted that the startup does not sell user voice or text data.
To enforce its content policies, the spokesperson said, the chatbot uses "classifiers" to filter sensitive content out of AI model responses, with additional, more conservative classifiers for users under 18. The startup has a process for suspending teens who repeatedly violate input prompt parameters, the spokesperson added.
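As a rough illustration of how such age-tiered filtering could work, here is a minimal sketch in Python. The keyword-based scoring function and the specific thresholds are assumptions for illustration only; a production system like Character.AI's would use trained classifier models, not word lists:

```python
# Toy sketch of age-tiered safety classifiers: the same response is screened
# against a stricter threshold when the user is under 18. The scoring function
# and thresholds are hypothetical stand-ins for trained models.
SENSITIVE_TERMS = {"violence", "self-harm", "explicit"}

def sensitivity_score(text: str) -> float:
    # Stand-in for a trained classifier: fraction of words that are flagged terms.
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in SENSITIVE_TERMS)
    return hits / len(words)

def allow_response(text: str, user_is_minor: bool) -> bool:
    # More conservative cutoff for under-18 users, as the spokesperson describes.
    threshold = 0.02 if user_is_minor else 0.10
    return sensitivity_score(text) < threshold

print(allow_response("Let's talk about your day.", user_is_minor=True))  # True
print(allow_response("explicit violence ahead", user_is_minor=True))     # False
```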
If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line — just text "HOME" to 741741. The International Association for Suicide Prevention offers resources for those outside the US.