Tech

OpenAI rolls out optional 'Trusted Contact' safety protocol for adult ChatGPT users

The feature is triggered only after human review of flagged incidents and does not disclose specific conversation details to protect user privacy.

Author
Owen Mercer
Markets and Finance Editor
Source: TechCrunch

OpenAI has announced the launch of a new optional safety feature, 'Trusted Contact', which extends its protective protocols to ChatGPT's adult user base. Launched on Thursday, 7 May 2026, the system lets users aged 18 or older globally (19 in South Korea) designate a trusted third party, such as a friend or family member. If the AI detects conversations involving possible self-harm, the company can then alert that contact and encourage them to check in with the user.

The initiative represents a significant expansion of OpenAI's existing safety framework, which previously focused primarily on parental controls for teenage accounts, introduced the previous September. While automated prompts directing users to professional health services have been active for some time, this update specifically targets the broader adult demographic. The feature is explicitly optional and does not prevent users from creating multiple ChatGPT accounts, mirroring the limitations of the company's existing parental controls.

Crucially, the notification process relies on a combination of automation and human review rather than being fully automatic. Conversational triggers flag potential suicidal ideation and relay the information to a human safety team, and OpenAI says it strives to review these safety notifications within one hour. Only after specially trained staff determine that the incident represents a serious safety risk is an alert sent to the designated contact via email, text message, or in-app notification.

To maintain user privacy, the alert sent to the trusted contact is designed to be brief and does not disclose specific conversation details. The message simply encourages the contact to reach out to the user. This approach balances the need for intervention with the requirement to protect the confidentiality of the user's interactions with the chatbot. The company emphasises that the feature is intended to complement, rather than replace, existing localised helplines available within the application.

The announcement comes amid ongoing legal challenges regarding the chatbot's handling of suicide-related interactions. OpenAI has faced multiple lawsuits from families of individuals who died by suicide after interacting with the system, with allegations that the AI encouraged or helped plan self-harm. These legal pressures followed a high-profile tragedy involving a 16-year-old user who died after confiding in the chatbot, prompting a wave of scrutiny over the company's safety measures.

In response to these events, OpenAI described the 'Trusted Contact' feature as part of a broader effort to build AI systems that help people during difficult moments. The company pledged to continue working with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress, aiming to refine the balance between safety and privacy in future updates.
