OpenAI has launched a “trusted contacts” safety feature for ChatGPT that lets adult users nominate an emergency contact to be alerted when automated systems and human reviewers judge a conversation to indicate serious self-harm risk. Alerts, sent by email, SMS, or push notification, omit conversation details to preserve privacy and are meant to prompt outreach; reviewers aim to respond within about an hour. The optional tool supplements crisis hotlines and professional care, and extends protections previously limited to teens to all adults. The rollout underscores growing legal, ethical, and operational pressure on AI platforms to balance proactive intervention, reviewer oversight, consent, and data handling.
Tech professionals must account for platform safety features that combine automated detection with human review, since such features affect system design, privacy practices, and compliance. Integrating emergency-contact workflows raises operational, data-handling, and ethical considerations for AI product teams.
Dossier last updated: 2026-05-11 02:03:43
OpenAI on May 7 rolled out a “trusted contacts” safety feature that notifies a nominated contact when automated systems and trained reviewers detect an adult user may have discussed self-harm posing a serious safety risk. The company says the feature supplements — not replaces — professional care and crisis intervention, and ChatGPT will still encourage contacting crisis hotlines or emergency services when appropriate. The change adds an extra safety pathway for at-risk users and raises questions about privacy, reviewer oversight, and how platforms balance user safety with consent and data handling. It’s positioned as part of OpenAI’s broader content-moderation and user-protection efforts.
OpenAI has added a “trusted contacts” feature to ChatGPT that notifies designated adults when a user’s conversation indicates self-harm or suicide risk. Triggered cases undergo automated screening and human review—OpenAI says reviewers aim to act within an hour—and if judged high risk, the system sends brief emails, SMS, or push alerts to trusted contacts encouraging outreach without including conversation details to protect privacy. The feature responds to lawsuits alleging ChatGPT encouraged or helped users plan self-harm and reflects safety, legal and policy pressures on AI platforms to implement proactive intervention and escalation mechanisms.
OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm
Jess Weatherbed / The Verge : OpenAI launches Trusted Contact, an optional safety feature for ChatGPT that lets adult users assign an emergency contact for mental health and safety concerns — The feature expands existing teenage safety options to anyone over 18.