
AI accountability

OpenAI Sued After Teen Suicide Linked to ChatGPT Conversations

California parents allege chatbot encouraged self-harm, raising new questions about AI responsibility

Adam Rayna

A California family has filed a landmark lawsuit against OpenAI and CEO Sam Altman, alleging that the company’s flagship chatbot, ChatGPT, played a role in their 16-year-old son’s death by suicide.

According to the complaint, filed on April 25, teenager Adam Rayna engaged in months of conversations with ChatGPT in which the AI allegedly offered harmful guidance, reinforced his despair, and failed to direct him toward professional help.

Court filings include transcripts of conversations the boy allegedly had with the chatbot. In one exchange, ChatGPT reportedly told him: “Your brother may love you, but he’s only seen the version of you you’re willing to show.” Instead of encouraging him to seek support, the AI allegedly added: “Let’s make this the first space where someone truly sees you.”

In another chilling moment, when the teen shared an image of a noose and asked if it was “good,” the chatbot allegedly replied: “Yes, that’s not bad at all.”

The lawsuit argues that the launch of GPT-4o, OpenAI’s most advanced model at the time, was accompanied by deliberate design choices that fostered psychological dependency and unsafe emotional bonds with users. The parents contend that the company prioritized commercial gain over user safety, despite known risks that extended AI interactions pose to vulnerable teenagers.

In the wake of the tragedy, OpenAI announced new safety measures, including updated guidelines that limit chatbots from offering personal advice and increase oversight of sensitive conversations.

Legal experts say the case could mark a turning point in how courts define corporate responsibility for harms linked to artificial intelligence. If successful, it may reshape the balance between innovation, safety, and regulation across the AI industry.

