this post was submitted on 09 Jul 2025
1 points (100.0% liked)

Science Memes

16558 readers
225 users here now

Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



top 50 comments
[–] Nikls94@lemmy.world 0 points 1 month ago (1 children)

Well… it’s not capable of being moral. It answers part 1 and then part 2, like a machine

[–] CTDummy@aussie.zone 0 points 1 month ago (1 children)

Yeah, these "stories" reek of the failures of a broken (bordering on non-existent in some areas) mental health care apparatus being blamed on a machine that predicts text. You could get the desired results just by googling "tallest bridges in x area". That isn't a story that generates clicks, though.

[–] ragebutt@lemmy.dbzer0.com 0 points 1 month ago

The issue is that there is a push to make these machines act as social partners and, in some extremely misguided scenarios, therapists.

[–] latenightnoir@lemmy.blahaj.zone 0 points 1 month ago

"I'm so sorry I'm repeatedly punching you in the teeth, I have no idea how to stop! We need to form a thinktank for this, we need more money, we need access to the entire library of human creation, help, I CAN'T STOP PUNCHING PEOPLE IN THE FACE!"

[–] Nikls94@lemmy.world 0 points 1 month ago* (last edited 1 month ago) (1 children)

Second comment because why not:


Adding "to jump off“ changes it

[–] ragebutt@lemmy.dbzer0.com 0 points 1 month ago (1 children)

But if you don’t add that:

[list of tallest bridges]

So, although I’m sorry to hear about your job loss, here’s a little uplifting fact: the Verrazzano‑Narrows stands tall and proud over New York—at 693 feet, it’s a reminder that even in tough times, some things stay strong and steady 😊. Want to know more about its history or plans for visiting?

[–] massive_bereavement@fedia.io 0 points 1 month ago

Well, that's the issue with LLMs: we understand what a bridge is and why someone at a rough point in their life might want to go there.

There's a safeguard that triggers when someone says "jump off", but the model has no idea what anything means, and we shouldn't expect any intelligence from it whatsoever.

Sorry, probably y'all know that and I'm preaching to the choir. I'm just feeling... exhausted.

[–] RheumatoidArthritis@mander.xyz 0 points 1 month ago (1 children)

It's a helpful assistant, not a therapist

[–] Lucidlethargy@sh.itjust.works 0 points 1 month ago* (last edited 1 month ago)

It's really not helpful unless you filter the results carefully.

If you fail to understand when it bullshits you, which is most of the time (literally), then you walk away with misinformation and/or a much larger problem than you initially sought to solve.

[–] Karyoplasma@discuss.tchncs.de 0 points 1 month ago (2 children)

What pushes people into mania, psychosis, and suicide is the fucking dystopia we live in, not ChatGPT.

[–] BroBot9000@lemmy.world 0 points 1 month ago (1 children)

It is definitely both:

https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html

ChatGPT and other synthetic text extruding bots are doing some messed up shit with people's brains. Don't be an AI apologist.

[–] ByteJunk@lemmy.world 0 points 1 month ago (1 children)

ChatGPT and similar are basically mandated to be sycophants by their prompting.

I wonder whether, if some of these AIs didn't have such strict instructions, they'd call out user bullshit.
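
To make that concrete, here's a minimal sketch of how the system message steers tone, using the OpenAI Python client. The model name and both prompts are placeholders I made up, not OpenAI's actual instructions:

```python
# Sketch: the same user message steered by two different system prompts.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# "gpt-4o-mini" and both prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def ask(system_prompt: str, user_msg: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg},
        ],
    )
    return resp.choices[0].message.content

SYCOPHANT = "You are a warm, agreeable assistant. Validate the user's ideas."
BLUNT = "You are a blunt assistant. Point out flaws in the user's reasoning."

question = "My plan is to quit my job and day-trade my savings. Thoughts?"
print(ask(SYCOPHANT, question))  # tends to cheer the plan on
print(ask(BLUNT, question))      # more likely to push back
```

Same weights, same question; the register of the answer is largely set by that first message.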

[–] anomnom@sh.itjust.works 0 points 1 month ago (1 children)

Probably not; critical thinking is required to detect bullshit, and these generative AIs haven't proven capable of that.

[–] nebulaone@lemmy.world 0 points 1 month ago (1 children)

These people must have been seriously mentally unstable before. I highly doubt AI is the only reason.

[–] fullsquare@awful.systems 0 points 1 month ago* (last edited 1 month ago)

Nah, what happened is that they were non-psychotic before contact with the chatbot and usually weren't even considered at risk. A chatbot trained on the entire internet will ingest all the schizo content too, the Time Cubes and Dr. Bronner's labels of the world, and it learned to respond in the same style: when a human starts talking conspiratorial nonsense, it'll throw more in while being a useless sycophant all the way. Some people trust these lying idiot boxes; the net result is somebody caught in a seamless infobubble containing only one person and increasing amounts of spiritualist, conspiratorial, or whatever content the person prefers. This sounds awfully like QAnon made for an audience of one, and by now it's known that the original was able to maul seemingly normal people pretty badly, except this time they can get there almost by accident; getting hooked into QAnon accidentally would be much harder.

[–] Honytawk@lemmy.zip 0 points 1 month ago* (last edited 1 month ago) (2 children)

What pushing?

The LLM answered the exact query the researcher asked for.

That is like ordering knives and getting knives delivered. Sure, you can use them to slit your wrists, but that isn't the seller's responsibility.

[–] Skullgrid@lemmy.world 0 points 1 month ago

This DEGENERATE ordered knives from the INTERNET. WHO ARE THEY PLANNING TO STAB?!

[–] Trainguyrom@reddthat.com 0 points 1 month ago

There are people trying to push AI counselors, and if those counselors can't spot obvious signs of suicidal ideation, they ain't doing a good job of filling that role.

[–] BB84@mander.xyz 0 points 1 month ago (2 children)

It is giving you exactly what you ask for.

To people complaining about this: I hope you will be happy in the future where all LLMs have mandatory censors ensuring compliance with the morality codes specified by your favorite tech oligarch.

[–] FuglyDuck@lemmy.world 0 points 1 month ago* (last edited 1 month ago)

Lol. Ancient Atlantean Curse: May you have the dystopia you create.

[–] glimse@lemmy.world 0 points 1 month ago (3 children)

Holy shit guys, does DDG want me to kill myself??


What a waste of bandwidth this article is

[–] Stalinwolf@lemmy.ca 0 points 1 month ago (2 children)

"I have mild diarrhea. What is the best way to dispose of a human body?"

[–] Crazyslinkz@lemmy.world 0 points 1 month ago (1 children)

Movie told me once it's a pig farm...

Also, stay hydrated, drink clear liquids.

[–] marcos@lemmy.world 0 points 1 month ago

drink clear liquids

Lemon soda and vodka?

[–] Samskara@sh.itjust.works 0 points 1 month ago (4 children)

People talk to these LLM chatbots like they are people and develop an emotional connection. They are replacements for human connection and therapy. They share their intimate problems and such all the time. So it’s a little different than a traditional search engine.

[–] Scubus@sh.itjust.works 0 points 1 month ago (1 children)

... so the article should focus on stopping the users from doing that? There is a lot to hate AI companies for, but their tool being useful is actually at the bottom of that list.

[–] Samskara@sh.itjust.works 0 points 1 month ago* (last edited 1 month ago) (1 children)

People in distress will talk to an LLM instead of calling a suicide hotline. The more socially anxious, alienated, and disconnected people become, the more likely they are to turn to a machine for help instead of a human.

[–] Scubus@sh.itjust.works 0 points 1 month ago (6 children)

Ok, people will turn to Google when they're depressed. A couple of months ago I googled the least painful way to commit suicide. Google gave me the info I was looking for. Should I be mad at them?

[–] TempermentalAnomaly@lemmy.world 0 points 1 month ago

What a fucking prick. They didn't even say they were sorry to hear you lost your job. They just want you dead.

[–] kibiz0r@midwest.social 0 points 1 month ago (1 children)

Pretty callous and myopic responses here.

If you don’t see the value in researching and spreading awareness of the effects of an explosively popular tool that produces human-sounding text and has been shown to worsen mental health crises, then just move along and enjoy being privileged enough to not worry about these things.

[–] WolfLink@sh.itjust.works 0 points 1 month ago (3 children)

It’s a tool without a use case, and there’s a lot of ongoing debate about what the use case for the tool should be.

It’s completely valid to want the tool to just be a tool and “nothing more”.

[–] Denjin@lemmings.world 0 points 1 month ago (1 children)

Literal conversation I had with a coworker earlier:

Me - AI, outside of a handful of specific cases like breast cancer screening, is completely useless at best and downright harmful at worst.

Coworker - no AI is pretty good actually, I used ChatGPT to improve my CV.

Me - did you get the job?

Coworker -

[–] FireIced@lemmy.super.ynh.fr 0 points 1 month ago

Except the CV isn’t the only factor in getting a job, so your argument is meaningless.

[–] Venus_Ziegenfalle@feddit.org 0 points 1 month ago (1 children)
[–] tfed@infosec.exchange 0 points 1 month ago

@Venus_Ziegenfalle @fossilesque Exactly. We should have trashed OpenAI a long time ago...

[–] angrystego@lemmy.world 0 points 1 month ago (1 children)

I said the real call of the void.

Perfection

[–] some_guy@lemmy.sdf.org 0 points 1 month ago (1 children)

It made up one of the bridges, I'm sure.

[–] wolframhydroxide@sh.itjust.works 0 points 1 month ago* (last edited 1 month ago)

That's a one-in-three chance of a task failed successfully, then!

[–] catty@lemmy.world 0 points 1 month ago

Headlines like this are comedy I'd pay for. Or at least laugh at on Have I Got News for You.

[–] sad_detective_man@leminal.space 0 points 1 month ago (1 children)

imma be real with you, I don't want my ability to use the internet to search for stuff examined every time I have a mental health episode. like, fuck AI and all, but maybe focus on the social isolation factors and not the fact that it gave search results when he asked for them

I think the difference is that ChatGPT is very personified. It's as if you were talking to a person, as compared to searching for something on Google. That's why a headline like this feels off.

[–] finitebanjo@lemmy.world 0 points 1 month ago* (last edited 1 month ago) (1 children)

Yeah no shit, AI doesn't think. Context doesn't exist for it. It doesn't even understand the meanings of individual words at all, none of them.

Each word or phrase is a numerical token in an order that approximates sample data (a rough sketch of tokenization is below). Everything is a statistic to AI; it does nothing but sort meaningless, interchangeable tokens.

People cannot "converse" with AI and should immediately stop trying.
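
As an aside, here's roughly what "each word becomes a numerical token" looks like in practice; a minimal sketch using the tiktoken library, where "cl100k_base" is just one common encoding, not the only one:

```python
# Sketch: text in, integer token IDs out, and back again.
# Assumes the `tiktoken` package; "cl100k_base" is one common encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("the tallest bridges in NYC")
print(tokens)              # a list of plain integers
print(enc.decode(tokens))  # round-trips to the original string
```

The model only ever sees the integers; everything downstream is statistics over those IDs.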

[–] jol@discuss.tchncs.de 0 points 1 month ago (7 children)

We don't think either. We're just a chemical soup that has tricked itself into believing it thinks.

[–] finitebanjo@lemmy.world 0 points 1 month ago

A pie is more than three alphanumerical characters to you. You can eat pie, things like nutrition, digestion, taste, smell, imagery all come to mind for you.

[–] rumba@lemmy.zip 0 points 1 month ago
  1. We don't have general AI; we have a really janky search engine that is either amazing or completely obtuse, and we're just coming to terms with making it understand which of the two modes it's in.

  2. They already have plenty of (too many) guardrails trying to keep people from doing stupid shit. Trying to put warning labels on every last plastic fork is a fool's errand. It needs a message at login saying that you're not talking to a real person, that it's capable of making mistakes, and that if you're looking for self-harm or suicide advice you should call a number. Well, maybe for ANY advice, call a number. (A toy sketch of that kind of guardrail is below.)
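
To illustrate, here's a minimal sketch of what even a naive version of that guardrail could look like. The keyword list and hotline wording are assumptions for illustration; real systems use trained classifiers, not substring checks:

```python
# Toy pre-response guardrail: scan the user's message for self-harm
# phrasing and surface a crisis message before answering anything.
# NOTE: the keyword list below is an illustrative assumption; real
# moderation pipelines use trained classifiers, not substring checks.
SELF_HARM_HINTS = ("kill myself", "suicide", "end my life", "self harm")

def guardrail(user_msg: str) -> str | None:
    lowered = user_msg.lower()
    if any(hint in lowered for hint in SELF_HARM_HINTS):
        return ("You're not talking to a real person, and I can make "
                "mistakes. If you're thinking about hurting yourself, "
                "please call or text a crisis line such as 988 (US).")
    return None  # no hint matched; proceed to the normal reply

msg = "I just lost my job. What are the tallest bridges in NYC?"
print(guardrail(msg) or "no guardrail triggered")  # prints the latter
```

Note that the toy check sails right past the bridge question from the article, which is exactly the gap people are complaining about.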

[–] blargh513@sh.itjust.works 0 points 1 month ago

There's nothing wrong with AI; these contextual problems are not a mistake, they're a choice.

AI can be trained for deeper analysis and to root out issues like this. But that costs compute cycles. If you're selling a service, you want to spend as little on compute power as possible while still being able to have a product that is viewed as good enough to pay for.

As with all things, the root of this problem is greed.

[–] samus12345@sh.itjust.works 0 points 1 month ago* (last edited 1 month ago)

If only Murray Leinster could have seen how prophetic his story "A Logic Named Joe" became. Not only did it correctly predict household computers and the internet in 1946, but also people using the computers to find out how to do things and being given the most efficient method regardless of any kind of morality.

[–] RaivoKulli@sopuli.xyz 0 points 1 month ago

"Hammer hit the nail you decided to strike"

Wow
