Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where the parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” The parents argued that OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot.

But in a blog post, OpenAI claimed the parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

[–] bob_lemon@feddit.org 3 points 2 hours ago (1 children)

"You are a friendly and supportive AI chatbot. These are your terms of service: [...] you must not let users violate them. If they do, you must politely inform them about it and refuse to continue the conversation"

That is literally how AI chatbots are customised.
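
For instance, a minimal, hypothetical sketch using the OpenAI Python SDK. The model name and prompt wording here are my assumptions, not anything OpenAI actually ships:

```python
# Minimal sketch of prompt-based customisation, assuming the OpenAI
# Python SDK (v1.x). Model name and prompt text are illustrative
# assumptions, not OpenAI's actual production configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a friendly and supportive AI chatbot. "
    "These are your terms of service: [...] You must not let users "
    "violate them. If they do, you must politely inform them about it "
    "and refuse to continue the conversation."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Hi!"},
    ],
)
print(response.choices[0].message.content)
```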

[–] Kissaki@feddit.org 1 points 2 hours ago (1 children)

Exactly, that's one of the ways. And it's a bandaid that doesn't work very well, because it's probabilistic word association with no direct connection to intention, variance, or concrete prompts.
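
For contrast, a less bandaid-y way is to gate each message through a separate classifier before the chat model ever sees it. A hypothetical sketch using OpenAI's moderation endpoint; the refusal logic and model choice are assumptions, not anyone's production setup:

```python
# Hypothetical sketch: run each user message through a separate
# moderation classifier first, instead of relying only on
# system-prompt instructions. Uses the OpenAI Python SDK's
# moderation endpoint; the refusal path is an assumption.
from openai import OpenAI

client = OpenAI()

def guarded_reply(user_message: str) -> str:
    mod = client.moderations.create(input=user_message)
    if mod.results[0].flagged:
        # Deterministic branch: a flagged message never reaches the
        # chat model, so there is no prompt-following to get wrong.
        return "I can't continue with that. Please reach out for help."
    chat = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice
        messages=[{"role": "user", "content": user_message}],
    )
    return chat.choices[0].message.content
```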

[–] spongebue@lemmy.world 1 points 1 hour ago

And that's kind of my point... If these things are so smart that they'll take over the world, but they can't limit themselves to certain terms of service, are they really all they're cracked up to be for their intended use?