this post was submitted on 27 Nov 2025
284 points (98.6% liked)

Technology


Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.

But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

all 35 comments
[–] Buffalox@lemmy.world 105 points 10 hours ago (5 children)

That's like a gun company claiming using their weapons for robbery is a violation of terms of service.

[–] DaddleDew@lemmy.world 94 points 10 hours ago* (last edited 6 hours ago) (2 children)

I'd say it's more akin to a bread company saying that it is a violation of the terms and services to get sick from food poisoning after eating their bread.

[–] Buffalox@lemmy.world 32 points 10 hours ago

Yes, you're right; it's hard to find an analogy that is both this stupid and still sounds somewhat plausible.
Of course a bread company cannot reasonably claim that eating their bread is against the terms of service. But that's exactly the problem: it's the same for OpenAI, who cannot reasonably claim what they're claiming.

[–] vurr@lemmy.today 1 points 2 hours ago (1 children)

That would imply that he wasn't suicidal before. If ChatGPT didn't exist, he would just use Google.

[–] DaddleDew@lemmy.world 0 points 2 hours ago* (last edited 1 hour ago)

Look up the phenomenon called "chatbot psychosis". In its current form, especially with GPT-4o, which was specifically designed to be a manipulative yes-man, a chatbot can absolutely and insidiously mess up someone's head enough to push them to the act, far beyond simply answering the question of how to do it the way a web search would.

[–] mriormro@lemmy.zip 7 points 9 hours ago (1 children)

If the gun also talked to you

[–] Capricorn_Geriatric@lemmy.world 1 points 6 minutes ago

Talked you into it*

[–] Whostosay@sh.itjust.works 3 points 7 hours ago (1 children)

Yeah this metaphor isn't even almost there

[–] notgold@aussie.zone 1 points 6 hours ago (1 children)

They used a tool against the manufacturers intended use of said tool?

[–] Whostosay@sh.itjust.works 2 points 6 hours ago (1 children)

I can't wrap my head around what you're saying, and that could be due to drinking. OP later also said it wasn't the best metaphor.

[–] notgold@aussie.zone 4 points 3 hours ago

Metaphor isn't perfect but it's ok.

The gun is a tool as is an LLM. The companies that make these tools have intended use cases for the tools.

[–] gian@lemmy.grys.it 1 points 54 seconds ago

I would say that it is more like a software company putting in their TOS that you cannot use their software to do specific things.
Would it be correct to sue the software company because a user violated the TOS?

I agree that what happened is tragic and that the answer from OpenAI is beyond stupid, but in the end this amounts to suing the owner of a technology for a user's misuse of said technology. Or should we also sue Wikipedia because someone looked up how to hang himself?

That’s like a gun company claiming using their weapons for robbery is a violation of terms of service.

The gun company can rightfully say that what you do with your property is not their problem.
But let's take a less controversial example: do you think you could sue a fishing rod company because I used one of their rods to whip you?

[–] ryper@lemmy.ca 82 points 10 hours ago (5 children)

“Our deepest sympathies are with the Raine family for their unimaginable loss,” OpenAI said in its blog, while its filing acknowledged, “Adam Raine’s death is a tragedy.” But “at the same time,” it’s essential to consider all the available context, OpenAI’s filing said, including that OpenAI has a mission to build AI that “benefits all of humanity” and is supposedly a pioneer in chatbot safety.

How the fuck is OpenAI's mission relevant to the case? Are they suggesting that their mission is worth a few deaths?

[–] call_me_xale@lemmy.zip 48 points 9 hours ago

Sure looks like it.

Get fucked, assholes.

[–] frustrated_phagocytosis@fedia.io 30 points 9 hours ago

"All of humanity" doesn't include suicidal people, apparently.

[–] JasonDJ@lemmy.zip 11 points 9 hours ago* (last edited 9 hours ago)

I think they are saying that his suicide was for the benefit of all humanity.

Getting some Michelle Carter vibes...

[–] roofuskit@lemmy.world 11 points 5 hours ago

Tech Bros all think they are the saviors of humanity and they are owed every dollar they collect.

[–] Psythik@lemmy.world 5 points 2 hours ago

"Some of you may die, but that is a chance I am willing to take."

[–] spongebue@lemmy.world 29 points 10 hours ago (1 children)

So why can't this awesome AI be stopped from being used in ways that violate the TOS?

[–] just_another_person@lemmy.world 28 points 10 hours ago

Fucking.WOW.

Sam Altman just LOVES answering stupid questions. People should be asking him about this in those PR sprints.

[–] Reverendender@sh.itjust.works 25 points 10 hours ago

The police also violated my Terms of Service when they arrested me for that armed bank robbery I was allegedly committing. This is a serious problem in our society people; something must be done!

[–] NotMyOldRedditName@lemmy.world 24 points 5 hours ago* (last edited 4 hours ago)

The situation is tragic... their attempt to hide behind their ToS on that is fucking hilarious.

[–] MourningDove@lemmy.zip 7 points 5 hours ago* (last edited 5 hours ago)

And OpenAI violates human culture and creativity. It's a fucking shame that there are no laws against this, because that fucker should be up against the wall.

[–] the_q@lemmy.zip 6 points 8 hours ago

TOS > Everything.

[–] RememberTheApollo_@lemmy.world 6 points 4 hours ago

Well there you have it. It’s not the dev’s fault, it’s the AI’s fault. Just like they’d throw any other employee under the bus, even if it’s one they created.

[–] cmbabul@lemmy.world 3 points 1 hour ago

Just going through this thread and blocking anyone defending OpenAI or AI in general, your opinions are trash and your breath smells like boot leather

Does asking for the synthesis of D-lysergic acid violate the terms of service if you're after a mind-bending experience?

[–] Treczoks@lemmy.world 1 points 1 hour ago

Well, did anyone expect them to admit guilt?

[–] vacuumflower@lemmy.sdf.org 1 points 33 minutes ago

Modern version of "suicide is a sin and we don't condone it, but if you have problems you're devil-possessed and need to repent and have only yourself to blame".

Also probably could be countered by their advertising contradicting their ToS. Not a lawyer.