this post was submitted on 03 Mar 2025
Technology
Jesus Christ, y'all. It's like Boomers trying to figure out the internet all over again. Just because AI (probably) can't lie doesn't mean it can't be earnestly wrong. It's not some magical fact machine; it's fancy predictive text.
It will be a truly scary time if people like Ramirez become judges one day, having forgotten how, or why, it's important to check sources yourself, robot or not.
It can and will lie. It has admitted to doing so after I probed it long enough about the things it was telling me.
Lying requires intent. Currently popular LLMs build responses one token at a time: when a model starts writing a sentence, it doesn't know how that sentence will end, and therefore can't have an opinion about its truth value. (I'd go further and claim it can't really "have an opinion" about anything, but even if it can, it can neither lie nor tell the truth on purpose.) It can consider its own output, and therefore potentially form an opinion about whether it is true or false, only after that output has been generated, while producing the next token.
"Admitting" that it's lying only proves that it has been exposed to "admission" as a pattern in its training data.
I strongly worry that humans really weren't ready for this "good enough" product to be their first "real" interaction with something that can easily pass for an AGI, unless you have near-philosophical knowledge of the difference between an AGI and an LLM.
It's obscenely hard to keep in mind that it's a very good pattern-matching autocorrect when you're several comments deep into a genuine, no-lie, completely pointless debate against spooky math.