this post was submitted on 03 Mar 2025
Technology
you are viewing a single comment's thread
Jesus Christ, y'all. It's like Boomers trying to figure out the internet all over again. Just because AI (probably) can't lie doesn't mean it can't be earnestly wrong. It's not some magical fact machine; it's fancy predictive text.
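To make the "fancy predictive text" point concrete, here's a toy sketch of the same principle: predict the most likely next word from training text, with no knowledge or intent anywhere in the loop. (This is an illustrative bigram counter, not how real LLMs work — they use neural networks over tokens — but the failure mode is the same: fluent, confident, and earnestly wrong.)

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus, with "." as an end-of-sentence token.
corpus = (
    "the moon is made of rock . "
    "the cake is made of cheese . "
    "the pie is made of cheese ."
).split()

# Count which word follows which.
model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

def generate(seed, max_words=10):
    """Greedily extend `seed` with the most frequent next word."""
    words = seed.split()
    for _ in range(max_words):
        nxt = model[words[-1]].most_common(1)[0][0]
        if nxt == ".":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the moon"))  # -> "the moon is made of cheese"
```

No lie and no mistake in any human sense — "cheese" simply follows "made of" more often in the training text than "rock" does, so that's what gets predicted.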
It will be a truly scary time if people like Ramirez become judges one day and have forgotten how or why it's important to check people's sources yourself, robot or not.
No probably about it, it definitely can't lie. Lying requires knowledge and intent, and GPTs are just text generators that have neither.
A bit out of context, but you remind me of some thinking I heard recently about lying vs. bullshitting.
Lying, as you said, requires quite a lot of energy: you need an idea of what the truth is, and you commit yourself to a long-term struggle to maintain your lie and keep it coherent as the world goes on.
Bullshit, on the other hand, is much more accessible: you just say things and never look back on them. It's very easy to pile up a ton of it, and it's much harder to call you out on any of it because each piece carries so little weight.
So in that view, a bullshitter doesn't give any shit about the truth, while a liar is a bit more "noble".
I think the important point is that LLMs, as we understand them, do not have intent. They are fantastic at producing output that appears to meet the requirements set in the input text, and when it actually does meet those requirements rather than just seeming to, they can provide genuinely helpful info. But it's very easy to miss the difference between output that merely looks correct, which satisfies the purpose of the LLM, and output that actually is correct, which satisfies the purpose of the user.