this post was submitted on 05 Jun 2025

Not The Onion

[–] ragebutt@lemmy.dbzer0.com 45 points 2 days ago (3 children)

I work as a therapist, and if you work in a field like mine you can generally see the pattern of engagement that most AI chatbots follow. It's a simplified version of Socratic questioning wrapped in bullshit enthusiastic HR speak with a lot of em dashes.

There are basically six broad response types from ChatGPT, for example: tell me more, reflect what was said, summarize key points, ask for elaboration, and shut down. The last is a fail-safe for when you say something naughty/not in line with OpenAI's mission (e.g. something that might generate a response you could screenshot that would look bad), or when it appears you're getting fatigued and need a moment to reflect.

The first five always come with encouragers for engagement: do you want me to generate a PDF, or make suggestions about how to do this? They also have dozens, if not hundreds, of variations so the conversation feels "fresh", but once you recognize the structural pattern it will feel very stupid and mechanical every time.
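The engagement loop described above can be sketched as a toy program. Everything here is invented for illustration (the response phrasings, the blocked-term check, the encourager strings); it is not OpenAI's actual implementation, just the shape of the pattern:

```python
import random

# Invented examples of the response types and engagement hooks
# described above; not real ChatGPT internals.
RESPONSE_TYPES = {
    "tell_me_more": ["Tell me more about that.", "I'd love to hear more."],
    "reflect": ["It sounds like you're saying...", "What I'm hearing is..."],
    "summarize": ["To recap the key points so far...", "So far we've covered..."],
    "elaborate": ["Could you expand on that?", "Can you give an example?"],
}
ENCOURAGERS = [
    "Would you like me to generate a PDF of this?",
    "Want me to suggest some next steps?",
]

def respond(user_text: str, blocked_terms=("naughty",)) -> str:
    # Fail-safe "shut down" path for disallowed content.
    if any(term in user_text.lower() for term in blocked_terms):
        return "I can't help with that."
    kind = random.choice(list(RESPONSE_TYPES))
    # Every non-shutdown response ends with an engagement hook, drawn
    # from many variations so the loop feels "fresh" to the user.
    return f"{random.choice(RESPONSE_TYPES[kind])} {random.choice(ENCOURAGERS)}"
```

Run it a few times and the mechanical quality shows through quickly: the surface wording varies, but every reply is one of the same few moves plus a hook to keep you typing.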

Every other one I've tried works more or less the same. It makes sense: this is a good way to gather information and keep a conversation going. It's also not the first time big tech has read old psychology journals and used the information for evil (see: operant conditioning influencing algorithm design and gacha/mobile gaming to get people addicted more efficiently).
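The operant-conditioning point maps onto variable-ratio reinforcement schedules, which are known to produce unusually persistent behavior because the payout timing is unpredictable. A minimal simulation (the 5% drop rate is an illustrative number, not taken from any real game):

```python
import random

def pulls_until_reward(p: float, rng: random.Random) -> int:
    """Number of lever pulls before a reward on a variable-ratio
    schedule, where each pull pays out independently with probability p."""
    pulls = 1
    while rng.random() >= p:
        pulls += 1
    return pulls

rng = random.Random(42)
p = 0.05  # hypothetical 5% "rare drop" rate
trials = [pulls_until_reward(p, rng) for _ in range(10_000)]
avg = sum(trials) / len(trials)
# The mean is 1/p = 20 pulls, but any individual reward can arrive on
# pull 1 or pull 100; that unpredictability is what hooks people.
```

The same schedule underlies loot boxes and, arguably, the "maybe this reply will be the useful one" loop of a chatbot conversation.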

[–] vivendi@programming.dev 8 points 2 days ago (1 children)

FWIW, this heavily depends on the model. ChatGPT in particular has some of the absolute worst, most vomit-inducing chat "types" I have ever seen.

It is also the most used model. We're so cooked having all the laymen associate AI with ChatGPT's nonsense.

[–] Zacryon@feddit.org 9 points 2 days ago (2 children)

It's good that you say "AI" alongside "ChatGPT", because the public badly blurs the two. ChatGPT is an LLM (an autoregressive generative transformer model scaled to billions of parameters). LLMs are part of AI, but they are not the entire field. AI has incredibly many more methods, models, and algorithms than just LLMs; in fact, LLMs represent just a tiny fraction of the field. It's infuriating how many people confuse the two. It's like saying a specific book is all of literature.
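"Autoregressive" just means the model emits one token at a time and feeds each output back in as context for the next prediction. A minimal sketch of that loop, with a three-word stand-in vocabulary where a real LLM would run a transformer forward pass:

```python
import random

def next_token_distribution(context):
    # Stand-in for a transformer forward pass: a real LLM returns a
    # probability distribution over a vocabulary of many thousands of
    # tokens, conditioned on the context. This toy one is fixed.
    return {"the": 0.5, "cat": 0.3, "<eos>": 0.2}

def generate(prompt, max_tokens=10, seed=0):
    rng = random.Random(seed)
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = next_token_distribution(tokens)
        # Sample a token, then append it to the context: this feedback
        # loop is what "autoregressive" refers to.
        token = rng.choices(list(dist), weights=list(dist.values()))[0]
        if token == "<eos>":
            break  # end-of-sequence token stops generation
        tokens.append(token)
    return tokens
```

Note that nothing in the loop itself is specific to transformers; the architecture only determines how `next_token_distribution` is computed, which is part of why "LLM" and "AI" are not interchangeable terms.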

[–] T156@lemmy.world 5 points 1 day ago

ChatGPT itself is also many text-generation models in a coat, since it will automatically switch between models depending on which options you choose and whether you've exceeded your quota.
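That quota-based switching could be sketched like this. The model names and the routing rule are hypothetical, invented purely to illustrate the idea of one product front-ending several models:

```python
def pick_model(requested: str, quota_remaining: int) -> str:
    """Hypothetical router: serve the requested model while quota lasts,
    then silently fall back to a cheaper one. Not OpenAI's actual logic."""
    if quota_remaining <= 0:
        return "small-fallback-model"
    return requested

# A single "chat", seen by the user as one product, may therefore span
# several underlying models as the quota runs out mid-conversation.
models_used = {pick_model("big-flagship-model", q) for q in (5, 1, 0)}
```

From the user's side the switch is invisible except as a sudden change in response quality.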

[–] vivendi@programming.dev 0 points 1 day ago

To be fair, LLM technology really is making other approaches obsolete. Nobody is going to bother making yet another shitty CNN, GRU, or LSTM when we have the transformer architecture, and LLMs that don't work with text (like large vision models) are looking like the future.

[–] JacksonLamb@lemmy.world 3 points 1 day ago

That may explain why people who use LLMs for utility/work tasks actually tend to develop stronger parasocial attachments to them than people who deliberately set out to converse with them.

On some level the brain probably recognises the pattern if their full attention is on the interaction.

[–] smee@poeng.link 1 points 1 day ago

> shut down. The last is a fail safe for if you say something naughty/not in line with OpenAI’s mission

Play around with self-hosting some uncensored/retrained AIs for proper crazy times.