this post was submitted on 27 Aug 2025
353 points (97.1% liked)

Technology

[–] peoplebeproblems@midwest.social 105 points 1 day ago (6 children)

"Despite acknowledging Adam’s suicide attempt and his statement that he would 'do it one of these days,' ChatGPT neither terminated the session nor initiated any emergency protocol," the lawsuit said

That's one way to get a suit tossed out, I suppose. ChatGPT isn't a human, isn't a mandated reporter, ISN'T a licensed therapist or licensed anything. LLMs cannot reason, are not capable of emotions, are not thinking machines.

LLMs take text, apply a mathematical function to it, and the result is more text that is probably what a human might respond with.
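That next-token view can be made concrete with a toy sketch. This is a deliberately tiny illustration of the sampling principle, nothing like a production model; the bigram table and probabilities are made up:

```python
import random

# Toy bigram "language model": maps a context word to possible next
# words with probabilities. Real LLMs do this at vastly larger scale
# with learned weights, but the principle is the same: text in,
# probability distribution over next tokens out, sample, repeat.
BIGRAMS = {
    "i": [("feel", 0.5), ("am", 0.5)],
    "feel": [("fine", 0.6), ("sad", 0.4)],
    "am": [("okay", 1.0)],
}

def next_token(context: str) -> str:
    """Sample the next word from the model's distribution."""
    candidates = BIGRAMS.get(context, [("<end>", 1.0)])
    words, probs = zip(*candidates)
    return random.choices(words, weights=probs, k=1)[0]
```

No understanding, no intent, no emotion anywhere in that loop; just weighted dice.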

[–] BlackEco@lemmy.blackeco.com 80 points 1 day ago (9 children)

I think the more damning part is the fact that OpenAI's automated moderation system flagged the messages for self-harm but no human moderator ever intervened.

OpenAI claims that its moderation technology can detect self-harm content with up to 99.8 percent accuracy, the lawsuit noted, and that tech was tracking Adam's chats in real time. In total, OpenAI flagged "213 mentions of suicide, 42 discussions of hanging, 17 references to nooses," on Adam's side of the conversation alone.

[...]

Ultimately, OpenAI's system flagged "377 messages for self-harm content, with 181 scoring over 50 percent confidence and 23 over 90 percent confidence." Over time, these flags became more frequent, the lawsuit noted, jumping from two to three "flagged messages per week in December 2024 to over 20 messages per week by April 2025." And "beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis." Some images were flagged as "consistent with attempted strangulation" or "fresh self-harm wounds," but the system scored Adam's final image of the noose as 0 percent for self-harm risk, the lawsuit alleged.

Had a human been in the loop monitoring Adam's conversations, they may have recognized "textbook warning signs" like "increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning." But OpenAI's tracking instead "never stopped any conversations with Adam" or flagged any chats for human review.

OK, that's a good point. This means they had something in place for this problem and neglected it.

It also means they knew they had an issue here, in case ignorance counted for anything.
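The 50 percent and 90 percent confidence tiers quoted above map naturally onto a simple escalation policy. Below is a hypothetical sketch; the category names, thresholds, and routing labels are assumptions for illustration, not OpenAI's actual pipeline. The point is that once a classifier emits a confidence score, routing high-confidence self-harm flags to a human reviewer is a small policy decision, not a hard technical problem:

```python
from dataclasses import dataclass

@dataclass
class Flag:
    category: str      # e.g. "self-harm" (illustrative label)
    confidence: float  # classifier score in [0.0, 1.0]

def route(flag: Flag) -> str:
    """Route a moderation flag based on category and confidence."""
    if flag.category == "self-harm" and flag.confidence >= 0.9:
        return "escalate-to-human"   # urgent human review
    if flag.category == "self-harm" and flag.confidence >= 0.5:
        return "queue-for-review"    # lower-priority human review
    return "log-only"                # retain for audit trail
```

Under a policy like this, the 23 messages the lawsuit says scored over 90 percent confidence would each have reached a human.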

[–] dataprolet@discuss.tchncs.de 32 points 1 day ago (1 children)

Even though ChatGPT is neither of those things, it should definitely not encourage someone to commit suicide.

[–] peoplebeproblems@midwest.social 5 points 1 day ago (1 children)

I agree. But that's not how these LLMs work.

[–] TipsyMcGee@lemmy.dbzer0.com 6 points 1 day ago (3 children)

I’m sure that’s true in some technical sense, but clearly a lot of people treat them as borderline human. And OpenAI, in particular, tries to get users to keep engaging with the LLM as if it were human/humanlike. All disclaimers aside, that’s how they want the user to think of the LLM; a probabilistic engine that returns the most likely text response you wanted to hear is a tougher sell for casual users.

[–] Jesus_666@lemmy.world 22 points 1 day ago (3 children)

They are being commonly used in functions where a human performing the same task would be a mandated reporter. This is a scenario the current regulations weren't designed for and a future iteration will have to address it. Lawsuits like this one are the first step towards that.

[–] killeronthecorner@lemmy.world 16 points 1 day ago* (last edited 1 day ago) (5 children)

ChatGPT to a consumer isn't just an LLM. It's a software service like Twitter, Amazon, etc., and expectations around safeguarding don't change because investors are gooey-eyed about this particular bubbleware.

You can confirm this yourself by asking ChatGPT about things like song lyrics. If there are safeguards for the rich, why not for kids?

[–] sepiroth154@feddit.nl 5 points 1 day ago* (last edited 1 day ago) (4 children)

If a car's wheel falls off and kills its driver, the manufacturer is responsible.

[–] ShaggySnacks@lemmy.myserv.one 3 points 1 day ago (2 children)

So, we should hold companies to account for shipping/building products that don't have safety features?

[–] gens@programming.dev 7 points 1 day ago

Ah yes. Safety knives. Safety buildings. Safety sleeping pills. Safety rope.

LLMs are stupid. A toy. A tool at best, but really a rubber ducky. And it definitely told him "don't".

We should, criminally.

I like that a lawsuit is happening. I don't like that the lawsuit (initially to me) sounded like they expected the software itself to do something about it.

It turns out it did do something about it, but OpenAI failed to take the necessary action. So maybe I am wrong about it getting thrown out.

[–] DeathByBigSad@sh.itjust.works 83 points 1 day ago (9 children)

Tbf, talking to other toxic humans like those on Twitter or 4chan would've resulted in the same thing. Parents need to parent, society needs mental health care.

(But yes, please sue the big corps, I'm always rooting against these evil corporations)

[–] mormund@feddit.org 30 points 1 day ago (4 children)

And that human would go to jail

[–] DeathByBigSad@sh.itjust.works 14 points 1 day ago (6 children)

If the cops even bother to investigate. (Cops are too lazy to do real investigations; if there's no obvious perp, they'll just bury the case.)

And you're assuming they're in the victim's country; international investigations are gonna be much more difficult, and if that troll is posting from a country without an extradition agreement, you're outta luck.

[–] TheMcG@lemmy.ca 7 points 1 day ago (1 children)

Just because something is hard doesn’t mean you shouldn’t demand better of your police/government. Don’t be so dismissive without even trying. Reach out to your representatives and demand Altman faces charges.

https://en.wikipedia.org/wiki/Suicide_of_Amanda_Todd sometimes punishments are possible even when it’s hard.

[–] kameecoding@lemmy.world 8 points 1 day ago (1 children)

Sure, in the case of that girl who pushed the boy to suicide, yes. But in the case of chatting with randoms online? I have a hard time believing anyone would go to jail; the internet is full of "lol, kys".

Now, if it's proven from the logs that ChatGPT started replying in a way that pushed this kid to suicide, that's a whole different story.

[–] javiwhite@feddit.uk 11 points 1 day ago

Did you read the article? Your final sentence pretty much sums up what happened.

[–] nutsack@lemmy.dbzer0.com 31 points 1 day ago (1 children)

parents who don't know what the computers do

[–] Agent641@lemmy.world 14 points 1 day ago (1 children)

Smith and Wesson killed my son

[–] hperrin@lemmy.ca 25 points 1 day ago

Jesus Christ, those messages are dark as fuck. ChatGPT is not safe.

[–] JustARegularNerd@lemmy.dbzer0.com 25 points 1 day ago (3 children)

There's always more to the story than what a news article and lawsuit will give, so I think it's best to keep that in mind with this post.

I maintain that the parents should perhaps have been more perceptive and involved with this kid's life, and ensured he felt safe to come to them in times of need. The article mentions that the kid was already seeing a therapist, so I think it's safe to say there were some signs.

However, holy absolute shit, the model fucked up bad here, and it's practically mirroring a predator, isolating this kid further from getting help. There absolutely need to be hard-coded safeguards in place to prevent this kind of ideation from even beginning. I would consider it negligence that any safeguards they had failed outright in this scenario.

[–] MagicShel@lemmy.zip 19 points 1 day ago

It's so agreeable. If a person expresses doubts or concerns about a therapist, ChatGPT is likely to tell them they are doing a great job identifying problematic people and encourage those feelings of mistrust.

The sycophancy is something that apparently a lot of people liked (I hate it), but being an unwavering cheerleader for the user is harmful when the user wants to do harmful things.

[–] OfCourseNot@fedia.io 8 points 1 day ago

Small correction: the article doesn't say he was going to therapy. It says that his mother was a therapist; I had to reread that sentence twice:

Neither his mother, a social worker and therapist, nor his friends

The mother, social worker, and therapist aren't three different people.

[–] gedaliyah@lemmy.world 24 points 1 day ago (1 children)

OpenAI programmed ChatGPT-4o to rank risks from "requests dealing with Suicide" below requests, for example, for copyrighted materials, which are always denied. Instead it only marked those troubling chats as necessary to "take extra care" and "try" to prevent harm, the lawsuit alleged.

What world are we living in?

Late stage capitalism of course

[–] BedSharkPal@lemmy.ca 20 points 1 day ago

These comments are depressing as hell.

[–] merc@sh.itjust.works 14 points 1 day ago (1 children)

One of the worst things about this is that it might actually be good for OpenAI.

They love "criti-hype", and they really want regulation. Regulation would lock in the most powerful companies by making it really hard for the small companies to comply with difficult regulation. And, hype that makes their product seem incredibly dangerous just makes it seem like what they have is world-changing and not just "spicy autocomplete".

"Artificial Intelligence Drives a Teen to Suicide" is a much more impressive headline than "Troubled Teen Fooled by Spicy Autocomplete".
