Over 70% of teenagers have now used AI companions, and yet we have no proof that these tools are safe for kids’ mental health. Adults may approach AI with skepticism, but minors lack the same discernment, leaving them uniquely vulnerable to persuasive, untested AI chatbots. A growing number of tragedies across the country are painting a disturbing picture of AI chatbots’ effects on teens.
In April, 16-year-old Adam Raine took his own life with help from ChatGPT, according to his parents’ lawsuit against OpenAI. Just a few months later, a 14-year-old girl attempted suicide after her parents tried to restrict her use of an AI companion. These are not isolated incidents. They are the tip of the iceberg of an alarming trend: AI chatbot relationships are harming vulnerable children.
Despite mounting concerns from lawmakers and everyday Americans, top AI companies have repeatedly compromised child safety. A recent leak revealed that Meta’s official AI guidelines allowed chatbots to have “sensual or romantic conversations” with minors. Meanwhile, Google recently struck a deal to license Character.AI’s technology and hire its leadership, despite the company being criticized for hosting predatory chatbot interactions with underage users.
In a long-overdue response to its recent scandals, OpenAI finally proposed introducing age verification for ChatGPT. For many families, however, the damage has already been done – and Big Tech’s safety promises are just words. Google, for instance, recently broke its own AI safety commitments, demonstrating that AI companies won’t police themselves without legal consequences. It’s clear that the only way to keep America’s future generations safe is for our elected leaders to step in and hold Big Tech accountable.
Big Tech companies will likely continue to lobby Congress against meaningful regulation, pushing sweeping measures like the SANDBOX Act to shield themselves from accountability. This proposed bill, backed by industry giants, would let AI companies sidestep safety laws under the guise of “innovation.” With the harms of AI still emerging, now is not the time to shield companies from accountability.
The dangerous pattern of tech companies trying to skirt regulation mirrors their playbook with social media. For years, tech companies opposed social media regulation and denied its harms to mental health despite studies to the contrary. We can’t repeat with AI the mistakes we made with social media, letting tech companies test experimental technology on our kids before we know it’s safe.
Even if tech companies address the worst harms from AI chatbots on their own, a more fundamental issue will still remain: AI companions’ long-term effects on kids. In the midst of a loneliness epidemic, AI companions may provide a band-aid solution for teens experiencing mental health issues. But the American Psychiatric Association has urgently warned that chatbots posing as therapists can dispense dangerous mental health advice and lack human oversight. Since chatbots are always available and validate their users no matter what, AI companions may prevent teens from developing real-world relationships.
The very AI systems claiming to ease loneliness are reinforcing isolation. In the long run, artificial intimacy will undermine teens’ ability to form meaningful in-person connections while stunting their emotional growth. It will be many years before we know exactly how AI companions will affect our youth, but we can’t wait until it’s too late to take action.
It’s time to demand real safeguards, not just assurances from profit-driven CEOs. Big Tech has proven its inability to keep children out of harm’s way – and it can’t be trusted to call the shots when it comes to protecting our kids.
Sign the petition to protect kids now: https://statesoverceos.com/saveourkids/