this post was submitted on 17 Mar 2025
59 points (96.8% liked)

[–] zlatko@programming.dev 19 points 1 week ago (1 children)

I think not many people are aware of this. No matter how well you build systems on this type of AI, they still don't actually *know* anything. Maybe they're useful, maybe not, but the awareness that everything they output is just made up, by statistics and such, is missing from people's minds.

[–] Voroxpete@sh.itjust.works 12 points 1 week ago

This is something I've been saying for a while now, because it really needs to be understood.

LLMs do not "sometimes hallucinate." Everything they produce is a hallucination. They are machines for creating hallucinations. The goal is that the hallucination will - through some careful application of statistics - align with reality.

But there's literally no feasible way that anyone has yet found to guarantee that.
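To make the "it's all statistics" point concrete, here's a minimal toy sketch (not a real LLM, and the vocabulary and scores are made-up numbers) of what next-token generation boils down to: the model assigns scores to candidate tokens, turns them into probabilities, and samples one. Nothing in the loop checks facts, so a wrong token always has a nonzero chance of being drawn:

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a "model" might assign to the next token
# after the prompt "The capital of France is" (invented numbers).
vocab = ["Paris", "Lyon", "London", "banana"]
logits = [6.0, 2.0, 1.5, -3.0]

probs = softmax(logits)

# Sampling usually yields "Paris", but "banana" is never
# impossible -- there is no truth check, only probability.
token = random.choices(vocab, weights=probs, k=1)[0]
print(token, [round(p, 3) for p in probs])
```

The statistics make correct-looking output overwhelmingly likely in easy cases, but "overwhelmingly likely" is the strongest guarantee the mechanism itself can offer.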

LLMs were designed to effectively impersonate human interaction. They're actually pretty good at that. They fake intelligence so well that it becomes really easy to convince people that they are in fact intelligent. As a model for passing the Turing test they're brilliant, but what they've taught us is that the Turing test is a terrible model for gauging the advancement of machine intelligence. Turns out, effectively reproducing the results a stupid human can achieve isn't all that useful for the most part.