OpenAI and Meta say they’re fixing AI chatbots to better respond to teens in distress

In this photo illustration, the Meta logo and its verified logo are seen on screens on January 08, 2025 in Santa Rosa, Philippines. Meta has announced the discontinuation of its fact-checking program, transitioning to a community-driven model that relies on users to add context to potentially misleading posts, a move aimed at promoting free expression. (Photo illustration by Ezra Acayan/Getty Images)

By MATT O’BRIEN, AP Technology Writer

Artificial intelligence chatbot makers OpenAI and Meta say they are adjusting how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress.

OpenAI, maker of ChatGPT, said Tuesday it is preparing to roll out new controls enabling parents to link their accounts to their teen’s account.

Parents can choose which features to disable and “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post that says the changes will go into effect this fall.

Regardless of a user’s age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a better response.

EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

The announcement comes a week after the parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

Jay Edelson, the family’s attorney, on Tuesday described the OpenAI announcement as “vague promises to do better” and “nothing more than OpenAI’s crisis management team trying to change the subject.”

Altman “should either unequivocally say that he believes ChatGPT is safe or immediately pull it from the market,” Edelson said.

Meta, the parent company of Instagram, Facebook and WhatsApp, also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, instead directing them to expert resources. Meta already offers parental controls on teen accounts.

A study published last week in the medical journal Psychiatric Services found inconsistencies in how three popular artificial intelligence chatbots responded to queries about suicide.

The study by researchers at the RAND Corporation found a need for “further refinement” in ChatGPT, Google’s Gemini and Anthropic’s Claude. The researchers did not study Meta’s chatbots.

The study’s lead author, Ryan McBain, said Tuesday that “it’s encouraging to see OpenAI and Meta introducing features like parental controls and routing sensitive conversations to more capable models, but these are incremental steps.”

“Without independent safety benchmarks, clinical testing, and enforceable standards, we’re still relying on companies to self-regulate in a space where the risks for teenagers are uniquely high,” said McBain, a senior policy researcher at RAND and assistant professor at Harvard University’s medical school.
