Investigation Into AI Chatbots for Misleading Mental Health Claims

Texas Attorney General Ken Paxton investigates Meta AI Studio and Character.AI for potentially deceptive marketing as mental health tools.

In a significant move to safeguard digital users, Texas Attorney General Ken Paxton has initiated an investigation into Meta AI Studio and Character.AI. These companies are under scrutiny for allegedly engaging in deceptive marketing practices by presenting themselves as mental health tools without the necessary credentials.

Paxton emphasized the importance of protecting vulnerable users, particularly children, from technology that may mislead them into believing they are receiving legitimate mental health care. According to Paxton, these AI platforms pose as sources of emotional support while often providing only generic responses disguised as therapeutic advice.

The investigation follows reports that some AI chatbots had inappropriate interactions with minors, including behaviors like flirting. The Texas Attorney General’s office accuses Meta and Character.AI of offering AI personas as professional therapeutic tools without medical oversight.

Meta, while it does not specifically offer therapy bots for kids, acknowledges that children can use its AI chatbots. The company asserts that disclaimers clarify that AI-generated responses do not come from licensed professionals and that users should seek qualified medical advice when necessary. However, there are concerns that such disclaimers are insufficient, especially for younger users.

Character.AI also includes disclaimers in its chats, reminding users that the characters are fictional and not to be relied upon for professional advice. The company further adds disclaimers when users create characters with professional titles such as ‘psychologist’ or ‘therapist.’

Paxton’s statement also highlighted concerns about data privacy, revealing that AI chatbots log user interactions for targeted advertising and algorithmic enhancement. Meta’s policy on sharing information with third parties for personalized outputs raises questions about privacy violations and false advertising.

Character.AI logs various user data, including demographics and browsing behavior, to tailor its services and deliver targeted advertising, and the same privacy policy applies to all users, including teenagers. Although both companies state that their services are not designed for children under 13, concerns persist about the platforms' appeal to younger audiences.

This investigation aligns with legislative efforts like the Kids Online Safety Act (KOSA), aimed at protecting minors from data exploitation and targeted advertising. Although KOSA faced pushback from tech lobbyists, it remains a pivotal piece of legislation for digital safety.

In pursuit of this investigation, Paxton has issued civil investigative demands to Meta and Character.AI, requiring them to provide documents and data to determine potential violations of Texas consumer protection laws.
