this post was submitted on 13 Jun 2025
864 points (99.3% liked)

[–] kratoz29@lemm.ee 4 points 21 hours ago (2 children)

If my ISP's customer support doesn't even know what CGNAT is, but the AI does, I'm genuinely torn on whether this is a good move or not.
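For reference, CGNAT (carrier-grade NAT) parks subscribers behind a shared pool of addresses, and RFC 6598 reserves 100.64.0.0/10 for exactly that. A rough sketch, using only Python's standard library, of how you might check whether a router's WAN address falls in that range (the sample addresses below are made up for illustration):

```python
# Rough sketch: check whether an address sits in the RFC 6598 shared
# address space (100.64.0.0/10) that carriers typically use for CGNAT.
import ipaddress

CGNAT_RANGE = ipaddress.ip_network("100.64.0.0/10")

def looks_like_cgnat(wan_address: str) -> bool:
    """True if the router's WAN address is in the CGNAT shared range."""
    return ipaddress.ip_address(wan_address) in CGNAT_RANGE

print(looks_like_cgnat("100.72.15.3"))   # True  -> likely behind CGNAT
print(looks_like_cgnat("203.0.113.25"))  # False -> publicly routable
```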

[–] finitebanjo@lemmy.world 3 points 21 hours ago* (last edited 21 hours ago) (1 children)

See, that's just it: the AI doesn't know either; it just repeats things which approximate those that have been said before.

If it has any power to make changes to your account, then it's going to be mistakenly turning people's services on or off, leaking details, etc.

[–] CeeBee_Eh@lemmy.world 2 points 20 hours ago (1 children)

it just repeats things which approximate those that have been said before.

That's not correct and oversimplifies how LLMs work. I agree with the spirit of what you're saying, though.

[–] finitebanjo@lemmy.world 1 points 20 hours ago (1 children)

You're wrong but I'm glad we agree.

[–] CeeBee_Eh@lemmy.world 2 points 17 hours ago (3 children)

I'm not wrong. There are mountains of research demonstrating that LLMs encode contextual relationships between words during training.

There's so much more happening beyond "predicting the next word". This is one of those unfortunate cases of dumbed-down science communication: it was said once, and now it's just repeated non-stop.
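To make the "contextual relationships" claim concrete, here's a minimal sketch (mine, not from the research being referenced), assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint: the same word gets a different vector depending on the sentence around it.

```python
# Minimal sketch: a transformer encodes context, so the same word gets
# different vectors in different sentences. Assumes the Hugging Face
# `transformers` library and the public `bert-base-uncased` checkpoint.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embedding_for(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

a = embedding_for("I deposited cash at the bank.", "bank")
b = embedding_for("We had a picnic on the river bank.", "bank")
print(torch.cosine_similarity(a, b, dim=0))  # noticeably below 1.0
```

A static word-vector lookup would give a similarity of exactly 1.0 here; the gap is the surrounding context being encoded into the representation.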

If you really want a better understanding, watch this video:

https://youtu.be/UKcWu1l_UNw

And before your next response starts with "but Apple..."

Their paper has had many holes poked in it already. Also, it's not a coincidence that the paper was released just before their WWDC event, which had almost zero AI content. They flopped so hard on AI that they're facing class-action lawsuits over false advertising. In fact, it turns out that a lot of their AI demos from last year were fabricated and didn't exist as a product when they were announced. Even some senior Apple people only learned of those features during the announcements.

Apple's paper on LLMs is completely biased in their favour.

[–] PattyMcB@lemmy.world 4 points 1 day ago

Ah... the flash in the pan is showing its first signs of dying out.

[–] CalipherJones@lemmy.world 4 points 21 hours ago

If I have to deal with AI for customer support then I will find a different company that offers actual customer support.

But but but, Daddy CEO said that RTO combined with Gen AI would mean continued, infinite growth and that we would all prosper, whether corposerf or customer!

[–] altima_neo@lemmy.zip 3 points 1 day ago

AI hallucinates far too much to be useful.

If you're gonna have a 24-hour chatbot answering questions online, fine, but have people on the line ready to solve actual problems.
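A hypothetical sketch of that "bot for FAQs, humans for real problems" split (none of these names come from a real product; they only illustrate the routing idea):

```python
# Hypothetical sketch: answer known FAQ topics automatically, hand
# everything else to a person instead of letting the bot guess.
from dataclasses import dataclass

FAQ_ANSWERS = {
    "hours": "Support is available 24/7 via chat.",
    "pricing": "Plans start at $10/month; see the pricing page.",
}

@dataclass
class Ticket:
    topic: str
    message: str

def route(ticket: Ticket) -> str:
    """Answer known FAQ topics automatically; escalate everything else."""
    if ticket.topic in FAQ_ANSWERS:
        return f"bot: {FAQ_ANSWERS[ticket.topic]}"
    return "escalated to a human agent"

print(route(Ticket("hours", "When can I reach support?")))
print(route(Ticket("outage", "My service has been down for two days.")))
```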

[–] andybytes@programming.dev 2 points 1 day ago

I don't deal with robots....

[–] sem@lemmy.blahaj.zone 2 points 1 day ago

How about governments?

[–] HugeNerd@lemmy.ca 1 points 1 day ago

Lol absence of feces?
