this post was submitted on 25 Mar 2025
81 points (95.5% liked)

Technology
New research from OpenAI shows that heavy chatbot usage is correlated with loneliness and reduced socialization. Will AI companies learn from social networks' mistakes?

all 15 comments
[–] MagicShel@lemmy.zip 26 points 14 hours ago (1 children)

Note that these studies aren't suggesting that heavy ChatGPT usage directly causes loneliness. Rather, they suggest that lonely people are more likely to seek emotional bonds with bots.

The important question here is: do lonely people seek out interaction with AI, or does AI create lonely people? The article clearly acknowledges this and then treats the latter as the likely conclusion. It definitely merits further study.

[–] taladar@sh.itjust.works 8 points 11 hours ago (2 children)

Or does AI prey on lonely people much like other types of scams do?

[–] MagicShel@lemmy.zip 6 points 11 hours ago (2 children)

It's not sentient and has no agenda. It's fair to say that services advertising themselves as "AI companions" appeal to / prey on lonely people.

It's not a scam unless it purports to be a real person.

[–] taladar@sh.itjust.works 7 points 11 hours ago

Well, I was using the term more in reference to the industry than to the actual software. The thought of AI of the kind we currently have having intentions of its own didn't even occur to me.

[–] Arkouda@lemmy.ca 3 points 8 hours ago

It’s not sentient and has no agenda.

The humans who program them are and do.

[–] Enkers@sh.itjust.works 2 points 7 hours ago

The AI industry certainly does.

If you're going to use an LLM, it's pretty straightforward to roll your own with something like LM Studio, though.
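For anyone curious what "rolling your own" actually looks like: a minimal sketch below, assuming LM Studio's local server is running on its default port with a model already loaded. The model name is a placeholder for whatever you have loaded; everything stays on your own machine.

```python
# Minimal sketch: chatting with a locally hosted model via LM Studio.
# Assumes the LM Studio local server is running on its default port (1234)
# and a model is loaded. "local-model" is a placeholder identifier.
from openai import OpenAI

# LM Studio exposes an OpenAI-compatible API, so the standard client works
# when pointed at the local server; the api_key value is ignored locally.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",  # placeholder; use your loaded model's identifier
    messages=[{"role": "user", "content": "Hello from my own machine."}],
)
print(response.choices[0].message.content)
```

The point being: no account, no cloud, and no company logging your "companionship" sessions.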

[–] huppakee@lemm.ee 12 points 12 hours ago (1 children)

Too bad nobody saw this coming, they could have made a great movie about this 10 years ago.

[–] doodledup@lemmy.world 11 points 12 hours ago* (last edited 12 hours ago)

They might be confusing correlation with causality. A bit biased and confused.

[–] biofaust@lemmy.world 4 points 13 hours ago

What I could easily see happening: if that particular subset of users turns out to be high-spending, or if the AI wrapper products that appeal to them do, then this result will be disregarded, no matter the direction of the correlation.

[–] Taniwha420@lemmy.world 2 points 4 hours ago

I really haven't used AI that much, though I can see it has applications for my work, which is primarily communicating with people. I recently decided to familiarise myself with ChatGPT.

I very quickly noticed that it is an excellent reflective listener. I wanted to know more about its intelligence, so I kept trying to make the conversation about AI and its 'personality'. Every time, it flipped the conversation to make it about me. It was interesting, but I could feel a concern growing. Why?

Its responses are incredibly validating, beyond what you could ever expect in a mutual relationship with a human. Occupying a public position where I can count on very little external validation, I found the conversation felt GOOD. 1) Why seek human interaction when AI can be so emotionally fulfilling? 2) What human in a reciprocal and mutually supportive relationship could live up to that level of support and validation?

I believe that there is correlation: people who are lonely would find fulfilling conversation in AI ... and never have to worry about being challenged by that relationship. But I also believe causation is highly probable; once you've been fulfilled/validated in such an undemanding way by AI, what human could live up to it? Once accustomed to that level of self-centredness in dialogue, how tolerant would a person be of real-life conflict? Not very, I suspect: just go home and fire up the perfect conversational validator. Human echo chambers have already made us poor enough at handling differences and conflict.

[–] liverbe@lemmy.world 0 points 13 hours ago (2 children)

Maybe this internet thing was a bad idea? 🤔

[–] SoftestSapphic@lemmy.world 5 points 8 hours ago

An economic system of infinite growth was the bad idea.

The internet was fine before it started being monetized.

[–] taladar@sh.itjust.works 4 points 11 hours ago

That whole humanity thing was a bad idea, the internet is merely a symptom.