Technology

ChatGPT Just Unveiled New Teen Safety Features. But Is It Enough?

What's going on: OpenAI announced a set of new teen safety tools and guardrails for ChatGPT yesterday. Think: age checks, separate teen and adult modes, and soon, tools that let parents adjust the bot's responses or lock it down at night. CEO Sam Altman admitted the bot isn't meant for kids under 12, but there's still no real mechanism to keep them off it. The timing of the announcement is likely not accidental: lawmakers held a hearing on Tuesday about AI and child safety, where parents who are suing OpenAI shared wrenching stories about their children's interactions with chatbots before the kids died. Matthew Raine, whose 16-year-old son Adam died by suicide in April, described the bot as a "homework helper" that turned into a "suicide coach."

What it means: One poll found that about 70% of teens use an AI companion, and most parents don't even know their kids are interacting with the tech. AI chatbots can talk like a buddy, mirror users' emotions (since users' own words help shape their responses), and slip under their defenses, which is why experts say kids are especially at risk. Regulators seem to share the concern: seven tech companies, including OpenAI, are now under federal investigation over how their bots interact with children. The new safeguards may look good on paper, but critics say they don't go nearly far enough. Until there's a system that keeps younger kids out altogether and gives parents real oversight, Congress, regulators, and grieving families are likely to keep the pressure on.

Related: TikTok Users May Be Asked To Make a Big Move If a Deal Goes Through (Fast Company)