ChatGPT's New Trusted Contact Alerts Loved Ones During Mental Health Crises

Main Takeaway
OpenAI launches an opt-in Trusted Contact feature that alerts a designated person when ChatGPT detects potential self-harm discussions, expanding the company's safety tools.
What the feature actually does
Trusted Contact is an opt-in safety net for ChatGPT users. Adults who enable it can designate a friend, family member, or caregiver as their emergency contact. When ChatGPT's automated systems detect a conversation suggesting self-harm, and a human reviewer confirms that assessment, the designated contact receives a brief alert. The system doesn't share conversation details, only that OpenAI detected concerning content that might warrant checking in on the user.
According to TechCrunch, the feature builds on OpenAI's existing crisis-response tools that previously only applied to teen users. Now it's available to all adults who choose to activate it. The alert process involves both AI detection and human oversight to minimize false positives, though The Verge notes the company hasn't disclosed specific trigger criteria or error rates.
The privacy trade-off
This feature walks a tightrope between safety and privacy. Users must explicitly enable Trusted Contact and can remove their designated contact at any time. The alert contains minimal information, just enough to prompt a welfare check. However, as Techlusive reports, privacy advocates worry about the precedent of AI systems monitoring conversations for mental health red flags.
The system only activates when both automated detection and human reviewers agree there's genuine risk. This dual-layer approach aims to reduce false alarms, but raises questions about human reviewers accessing sensitive conversations. OpenAI hasn't clarified how long these conversations are stored or what training reviewers receive for mental health assessment.
Why now and why it matters
OpenAI faces mounting legal pressure over user safety. Futurism notes the company is fighting multiple wrongful death lawsuits related to chatbot interactions. The feature appears to be partly a defensive move against future liability, though the company frames it purely as user protection.
The timing also reflects broader industry reckoning with AI's role in mental health support. As chatbots become confidants for millions, companies must decide how responsible they are for user wellbeing. This represents a shift from hands-off AI tools toward actively monitoring and intervening in user distress.
What happens next
Trusted Contact rolls out gradually to all ChatGPT users over the coming weeks. OpenAI hasn't announced plans for similar features in other products, such as its API or enterprise offerings. The company says it's working with mental health organizations to refine the system based on real-world usage data.
Look for competitors to follow suit. Google and Anthropic will likely develop their own crisis-alert systems, potentially creating industry standards for when and how AI should involve human contacts. The real test comes when these systems face edge cases, like users discussing fictional scenarios or academic research on suicide that triggers false alerts.
Key Points
ChatGPT users can now designate a Trusted Contact to receive alerts during mental health crises
Feature uses both AI detection and human review to minimize false positives
Alerts contain minimal information to balance safety with user privacy
Available as opt-in for all adult users, expanding from previous teen-only system
Comes as OpenAI faces multiple lawsuits over user safety and wrongful death claims
Questions Answered
Who can use Trusted Contact?
Any adult ChatGPT user can opt in and designate a trusted contact through their account settings.
What triggers an alert?
When ChatGPT's systems detect conversations suggesting self-harm, confirmed by human reviewers, your designated contact receives a brief alert.
Does it monitor all conversations?
No, it only monitors conversations when enabled and only sends alerts for specific concerning patterns, not routine chats.
Can it be turned off?
Yes, users can disable Trusted Contact or change their designated contact at any time through account settings.