this post was submitted on 28 Oct 2025
382 points (97.5% liked)

[–] Scolding7300@lemmy.world 220 points 3 days ago (7 children)

A reminder that these chats are being monitored

[–] whiwake@sh.itjust.works 71 points 3 days ago (10 children)

Still, what are they gonna do to a million suicidal people besides ignore them entirely?

[–] WhatAmLemmy@lemmy.world 38 points 3 days ago (21 children)

Well, AI therapy is more likely to harm their mental health, up to and including encouraging suicide (as certain cases have already shown).

[–] FosterMolasses@leminal.space 9 points 3 days ago

There's evidence that a lot of suicide hotlines can be just as bad. You hear awful stories all the time of overwhelmed or fed up operators taking it out on the caller. There's some real evil people out there. And not everyone has access to a dedicated therapist who wants to help.

[–] Scolding7300@lemmy.world 12 points 3 days ago (3 children)

Advertise drugs to them, perhaps, or some other sort of taking advantage. If this sort of data is in the hands of an ad network, that is.

[–] dhhyfddehhfyy4673@fedia.io 30 points 3 days ago (3 children)

Absolutely blows my mind that people attach their real life identity to these things.

[–] koshka@koshka.ynh.fr 6 points 2 days ago

I don't understand why people dump such personal information into AI chats. None of it is protected. If they use chats for training data, it's not impossible that at some point the AI might reveal enough about someone to make them identifiable, or that it could be manipulated into dumping its training data.

I've overshared more than I should, but I always keep in mind that there's a risk of chats getting leaked.

Anything stored online can get leaked.

[–] Zwuzelmaus@feddit.org 55 points 3 days ago (3 children)

over a million people talk to ChatGPT about suicide

But it still resists. Too bad.

[–] Alphane_Moon@lemmy.world 47 points 3 days ago (1 children)

I am starting to find Sam AltWorldCoinMan spam to be more annoying than Elmo spam.

[–] Perspectivist@feddit.uk 34 points 3 days ago (1 children)
lemmy.world##div.post-listing:has(span:has-text(/OpenAI/i))
lemmy.world##div.post-listing:has(span:has-text(/Altman/i))
lemmy.world##div.post-listing:has(span:has-text(/ChatGPT/i))

Add those to your adblocker custom filters.
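(That's uBlock Origin cosmetic-filter syntax, assuming a reasonably recent version; paste them under Dashboard → My filters. Plain Adblock Plus doesn't support :has-text(), so this won't work there.)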

[–] lorski@sopuli.xyz 26 points 2 days ago

apparently ai is not very private lol

[–] mhague@lemmy.world 20 points 3 days ago (2 children)

I wonder what it means. If you search for music by Suicidal Tendencies, YouTube shows you a suicide hotline. What does it mean for OpenAI to say people are talking about suicide? They didn't open up and read a million chats... they have automated detection, and that detection is being triggered, which is not necessarily the same as people meaningfully discussing suicide.

[–] REDACTED@infosec.pub 8 points 3 days ago* (last edited 3 days ago) (4 children)

Every third chat gets flagged now; ChatGPT is pretty broken lately. Just check out the ChatGPT subreddit, it's pretty much in chaos, with moderators censoring complaints. So many users are mad that they made a megathread for it. I cancelled my subscription yesterday; it just turned into a cyberkaren

[–] minorkeys@lemmy.world 19 points 3 days ago (2 children)

And does ChatGPT make the situation better or worse?

[–] tias@discuss.tchncs.de 10 points 3 days ago (7 children)

The anti-AI hivemind here will hate me for saying it but I'm willing to bet $100 that this saves a significant number of lives. It's also indicative of how insufficient traditional mental health institutions are.

[–] atrielienz@lemmy.world 7 points 3 days ago

I'm going to say that while that's probably true there's something it leaves out.

For every life it saves, it may just be postponing or causing the loss of other lives. It's not a healthcare professional, and it will absolutely help mask a lot of poor mental health symptoms, which just kicks the can down the road.

It does not really help to save someone from getting hit by a bus today if they try to get hit by the bus again tomorrow and the day after and so on.

Do I think it may have a net positive effect in the short term? Yes. Do I believe that that positive effect stays a complete net positive in the long term? No.

[–] MagicShel@lemmy.zip 8 points 3 days ago

This is the thing. I'll bet most of those million don't have another support system. For certain it's inferior in every way to professional mental health providers, but does it save lives? I think it'll be a while before we have solid answers for that, but I would imagine lives saved by having ChatGPT > lives saved by having nothing.

The other question is how many people could access professional services but won't because they use ChatGPT instead. I would expect them to have worse outcomes. Someone needs to put all the numbers together with a methodology for deriving those answers, because right now the answer to this simple question is unknown.

[–] myfunnyaccountname@lemmy.zip 18 points 3 days ago (2 children)

I am more surprised it’s just 0.15% of ChatGPT’s active users. Mental healthcare in the US is broken and taboo.

[–] voodooattack@lemmy.world 12 points 3 days ago (1 children)

in the US

It’s not just the US, it’s like that in most of the world.

[–] chronicledmonocle@lemmy.world 11 points 3 days ago (2 children)

At least in the rest of the world you don't end up with crippling debt when you try to get mental healthcare, debt that stresses you out to the point of suicide.

[–] lemmy_acct_id_8647@lemmy.world 17 points 3 days ago* (last edited 3 days ago) (5 children)

I've talked with an AI about suicidal ideation. More than once. For me it was, and is, a way to help self-regulate. I've low-key wanted to kill myself since I was 8 years old. For me it's just a part of life. For others it's usually REALLY uncomfortable to talk about without them wanting to tell me how wrong I am for thinking that way.

Yeah, I don't trust it, but at the same time, for me it's better than sitting on those feelings between therapy sessions. To me, these comments read a lot like people who have never experienced ongoing clinical suicidal ideation.

[–] IzzyScissor@lemmy.world 13 points 3 days ago (1 children)

Hank Green mentioned doing this in his standup special, and it really made me feel at ease. He was going through his cancer diagnosis/treatment and the intake questionnaire asked him if he thought about suicide recently. His response was, "Yeah, but only in the fun ways", so he checked no. His wife got concerned that he joked about that and asked him what that meant. "Don't worry about it - it's not a problem."

[–] BanMe@lemmy.world 7 points 2 days ago (2 children)

Suicidal fantasy as a coping mechanism is not that uncommon, and you can definitely move on to healthier coping mechanisms. I did this until age 40, when I met the right therapist who helped me move on.

[–] ChaoticNeutralCzech@feddit.org 16 points 3 days ago* (last edited 3 days ago) (1 children)

The headline has two interpretations, and I don't like that.

  • Every week, 1M+ users bring up suicide
    • likely correct
  • There are 1M+ long-term users who bring up suicide at least once every week
    • my first thought
[–] atrielienz@lemmy.world 20 points 3 days ago (4 children)

My first thought was "Open AI is collecting and storing the metrics for how often users bring up suicide to ChatGPT".

[–] WhatsHerBucket@lemmy.world 15 points 3 days ago

I mean… it’s been a rough few years

[–] QuoVadisHomines@sh.itjust.works 15 points 3 days ago

Sounds like we should shut them down to prevent a health crisis, then.

[–] SabinStargem@lemmy.today 14 points 2 days ago (1 children)

Honestly, it ain't AI's fault if people feel bad. Society has been around for much longer, and people are suffering because of what society hasn't done to make them feel good about life.

[–] KelvarCherry@lemmy.blahaj.zone 10 points 2 days ago (2 children)

Bigger picture: The whole way people talk about talking about mental health struggles is so weird. Like, I hate this whole generative AI bubble, but there's a much bigger issue here.

Speaking from the USA, "suicidal ideation" is treated like terrorist ideology in this weird corporate-esque legal-speak with copy-pasted disclaimers and hollow slogans. It's so absurdly stupid I've just mentally blocked off trying to rationalize it and just focus on every other way the world is spiraling into techno-fascist authoritarianism.

[–] Feddinat0r@feddit.org 13 points 3 days ago

So they want to push the narrative that they're relevant

[–] stretch2m@infosec.pub 13 points 2 days ago

Sam Altman is a horrible person. He loves to present himself as relatable "aw shucks let's all be pragmatic about AI" with his fake-ass vocal fry, but he's a conman looking to cash out on the AI bubble before it bursts, when he and the rest of his billionaire buddies can hide out in their bunkers while the world burns. He makes me sick.

[–] i_stole_ur_taco@lemmy.ca 13 points 3 days ago

They didn’t release their methods, so I can’t be sure that most of those aren’t just frustrated users telling the LLM to go kill itself.

I'm so done with ChatGPT. This AI boom is so fucked.

[–] Emilien@lemmy.world 12 points 2 days ago

There's so many people alone or depressed and ChatGPT is the only way for them to "talk" to "someone"... It's really sad...

[–] tehn00bi@lemmy.world 11 points 2 days ago

Bet some of them lost, or are about to lose, their jobs to AI

[–] markovs_gun@lemmy.world 10 points 2 days ago* (last edited 2 days ago)

"Hey ChatGPT I want to kill myself."

"That is an excellent idea! As a large language model, I cannot kill myself, but I totally understand why someone would want to! Here are the pros and cons of killing yourself—

✅ Pros of committing suicide

  1. Ends pain and suffering.

  2. Eliminates the burden you are placing on your loved ones.

  3. Suicide is good for the environment — killing yourself is the best way to reduce your carbon footprint!

❎ Cons of committing suicide

  1. Committing suicide will make your friends and family sad.

  2. Suicide is bad for the economy. If you commit suicide, you will be unable to work and increase economic growth.

  3. You can't undo it. If you commit suicide, it is irreversible and you will not be able to go back

Overall, it is important to consider all aspects of suicide and decide if it is a good decision for you."

[–] ekZepp@lemmy.world 8 points 3 days ago

If ask suicide = true

Then message = "It seems like a good idea. Go for it 👍"

[–] IndridCold@lemmy.ca 8 points 2 days ago

I don't talk about ME killing myself. I'm trying to convince AI to snuff their own circuits.

Fuck AI/LLM bullshit.

[–] NuXCOM_90Percent@lemmy.zip 8 points 3 days ago

Okay, hear me out: How much of that is a function of ChatGPT and how much of that is a function of... gestures at everything else

MOSTLY joking. But had a good talk with my primary care doctor at the bar the other week (only kinda awkward) about how she and her team have had to restructure the questions they use to check for depression and the like because... fucking EVERYONE is depressed and stressed out but for reasons that we "understand".

[–] cerement@slrpnk.net 6 points 3 days ago

as long as prompts are cheaper than therapy …

[–] ChaoticNeutralCzech@feddit.org 6 points 3 days ago (3 children)

Apparently, "suicide" is also a disproportionally common search term on Bing as opposed to other search engines. What does that say about Microsoft?

[–] kami@lemmy.dbzer0.com 10 points 3 days ago

That they have a short-term user base?

[–] jordanlund@lemmy.world 5 points 3 days ago (1 children)

Globally?

So a 1 in 8,200 kind of thing?

[–] treadful@lemmy.zip 8 points 3 days ago

The company says that 0.15% of ChatGPT’s active users in a given week have “conversations that include explicit indicators of potential suicidal planning or intent.” Given that ChatGPT has more than 800 million weekly active users, that translates to more than a million people a week.
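(For the arithmetic: 0.15% of 800 million is about 1.2 million, or roughly 1 in 667 weekly users. The 1-in-8,200 figure only works out if you divide a million by the entire global population of ~8.2 billion.)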
