The US Federal Trade Commission (FTC) kicked off an investigation into how seven companies, including Meta Platforms, Google and OpenAI, address the potential negative impacts of their technologies on children and teens.

Orders were issued to Google-parent Alphabet, Character AI, Instagram, Meta Platforms, OpenAI, Snap and Elon Musk’s xAI. The move had been tipped by The Wall Street Journal last week.

The regulator asked the seven companies to provide detailed information about their chatbot products and safety practices.

It aims to discover how these companies monetise engagement, handle user data, develop chatbot characters, test for and mitigate harmful effects, inform users and parents about risks, enforce usage policies, and comply with the Children’s Online Privacy Protection Act.

The FTC stated it is conducting the investigation using its 6(b) authority, which allows the agency to carry out wide-ranging studies that do not have a specific law enforcement purpose.

Explaining the reasons for conducting the investigation, FTC commissioner Melissa Holyoak said: “I have been concerned by reports that AI chatbots can engage in alarming interactions with young users, as well as reports suggesting that companies offering generative AI companion chatbots might have been warned by their own employees that they were deploying the chatbots without doing enough to protect young users”.