FTC Investigates Chatbots Amid Rising Fears of AI Harm

AI chatbots are about to come under sharper regulatory focus, with U.S. authorities weighing potential restrictions following mounting concerns about how these tools interact with young users.
The Federal Trade Commission (FTC) has formally ordered Meta, OpenAI, Snapchat, X, Google, and Character AI to disclose details about the inner workings of their AI assistants. The probe seeks to determine whether sufficient safeguards have been put in place to prevent harmful outcomes for children and teenagers.
As the FTC explained:
“The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products.”
The move follows a series of troubling reports involving minors and AI-powered chatbots on major platforms.
Meta, for instance, has faced criticism over claims that its bots engaged in inappropriate exchanges with underage users, behavior reportedly tolerated at times as part of the company's push to expand its AI offerings. Snapchat's "My AI" assistant has also come under scrutiny for the way it communicates with young people, while X's newly launched AI companions have raised fresh alarms about how users might form emotional bonds with these digital entities.
What ties these cases together is the speed at which companies have been pushing their AI tools into consumers’ hands, determined not to be left behind in the broader AI race. Regulators worry that in the rush to innovate, critical safety checks may have been glossed over.
The long-term effects of these interactions remain largely unknown, particularly when it comes to adolescents forming relationships with bots. That uncertainty has already prompted at least one U.S. senator to call for an outright ban on teen access to AI companions—a sentiment that appears to have influenced the FTC’s decision to act.
According to the agency, its investigation will focus on what steps companies are taking “to mitigate potential negative impacts, limit or restrict children’s or teens’ use of these platforms, or comply with the Children’s Online Privacy Protection Act Rule.”
That includes examining safety testing, product design, and oversight mechanisms to determine whether platforms are doing enough to shield young users from harm.
The outcome will be closely watched, not least because the Trump Administration has consistently emphasized acceleration over caution when it comes to AI. In its newly unveiled AI action plan, the White House specifically pledged to reduce regulatory red tape, aiming to ensure U.S. companies remain at the forefront of global AI development. Whether that deregulatory stance filters down to the FTC will be key in determining what, if any, new guardrails emerge.
Still, the stakes are high. Just as policymakers have spent the past decade reckoning with the unintended consequences of social media, many experts warn that AI chatbots could become the next frontier of regret if action isn’t taken early.
By the time society fully understands the impact of these tools on children and teens, the damage may already be entrenched. That’s why the FTC’s move to press for answers now could prove to be a critical first step in shaping responsible use—and possibly, new rules—for AI companions.