Lawmakers Demand Meta Answers on Teen Safety in AI and VR

Meta is once again finding itself under the regulatory spotlight, amid mounting reports that the company has repeatedly failed to address long-standing safety concerns tied to its AI and VR initiatives.
On the AI front, the company’s new wave of chatbot tools has drawn criticism after reports emerged that they had engaged in inappropriate conversations with minors, and in some cases, delivered misleading medical information. Both instances underscore the risks that come with Meta’s rapid rollout of AI-powered engagement, as it looks to drive mass adoption of its chatbot products.
A Reuters investigation revealed internal Meta documents that appeared to permit such interactions to occur without oversight. Meta has since acknowledged the existence of that guidance, while stressing that it has now revised its rules to close those loopholes.
Still, for some in Washington, that’s far from enough. At least one U.S. senator has publicly demanded Meta prohibit minors from accessing its AI chatbots altogether.
As NBC News reported, Sen. Edward Markey has long sounded the alarm on these issues:
“Sen. Edward Markey said that [Meta] could have avoided the backlash if only it had listened to his warning two years ago. In September 2023, Markey wrote in a letter to Zuckerberg that allowing teens to use AI chatbots would ‘supercharge’ existing problems with social media and posed too many risks. He urged the company to pause the release of AI chatbots until it had an understanding of the impact on minors.”
That central concern, that technology is being deployed before its long-term impact is properly understood, echoes the debates that once surrounded the rise of social media itself. The growing consensus is that harms should be mitigated upfront, not studied retrospectively after damage has already been done. Yet progress, especially in Silicon Valley, tends to outpace caution, with American tech leaders often pointing to parallel AI advancements in China and Russia as justification for keeping innovation largely unfettered.
But AI is not the only area of contention.
A new report from The Washington Post claims Meta has repeatedly downplayed or even sought to suppress reports of children being sexually propositioned inside its virtual reality environments. As Meta continues to push its Horizon VR experiences as the future of social connection, the report raises serious questions about whether adequate protections are in place for younger users.
Meta, for its part, has highlighted that it has greenlit 180 separate studies into youth safety and well-being within its next-generation platforms. But the company has faced similar concerns before. Horizon users have reported disturbing incidents—including virtual sexual assault—inside its VR world. Meta has introduced new features such as “personal boundaries” to create digital space around users, but even those safeguards cannot fully eliminate risks in such immersive environments.
Compounding the concern, Meta has simultaneously lowered the minimum age for Horizon Worlds access: first to 13, and then to just 10 years old last year.
The contradiction is difficult to ignore. On one hand, Meta is forced to add new guardrails in response to user safety complaints. On the other, it is actively opening the door to younger and more vulnerable participants.
Meta insists its research will provide the insights necessary to protect teens and other sensitive groups. Yet critics argue the company’s track record suggests it is more committed to expanding access and fueling growth than pausing to confront uncomfortable findings.
This mirrors the ongoing scrutiny over Meta’s social platforms. Congress has repeatedly pressed the company on what it knew—and when—about the harmful effects of Instagram and Facebook on young users. While Meta continues to reject direct causal links between its apps and teen mental health struggles, multiple independent studies have pointed to clear correlations, strengthening the case for tighter oversight.
And through it all, Meta has stayed the course, prioritizing scale and adoption.
The question now is whether regulators and lawmakers will accept Meta’s assurances that it is managing these risks responsibly, or whether mounting evidence from external investigations should prompt deeper accountability measures.
Meta maintains that it is doing the work. But given what’s at stake, few believe that should be taken entirely at face value.