this post was submitted on 03 Mar 2025
883 points (99.3% liked)
Technology
you are viewing a single comment's thread
A lie is a statement that the speaker knows to be wrong. Wouldn't claiming that AIs can lie imply cognition on their part?
Me: I want you to lie to me about something.
ChatGPT: Alright—did you know that Amazon originally started as a submarine sandwich delivery service before pivoting to books? Jeff Bezos realized that selling hoagies online wasn’t scalable, so he switched to literature instead.
Still not a lie, still just text that is statistically likely to follow the prior text, produced by a model with no thought process that knows nothing.
A lie is a falsehood, an untrue statement. Intent matters in a human, but not so much in a computer, which, if we are saying it cannot lie, also cannot tell the truth.
We aren't computers, we are people. We are having this discussion about the computer. The computer, given a massive corpus of input, is able to discern which text and responses are statistically likely to follow one another.
The computer doesn't "know" foo. It has no model of foo or how it relates to bar. It just knows the statistical likelihood of the token bar following the token foo versus other possible tokens. YOU, the user, introduced the tokens "lie" and "foo != bar" to it, and it discerned that admitting the falsehood was a likely response, especially if the text "foo = bar" is only comparatively weakly represented.
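To illustrate the "statistical likelihood" point, here's a toy bigram sketch. The corpus and token names are made up for illustration, and real LLMs use neural networks over far larger contexts, not a lookup table, but the principle is the same: the model only tracks which tokens tend to follow which.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus: the "model" only ever sees sequences of tokens.
corpus = "foo = bar . foo = bar . foo != bar .".split()

# Count which token follows which (a bigram table).
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

# Probability of each continuation after the token "foo".
total = sum(follows["foo"].values())
probs = {tok: n / total for tok, n in follows["foo"].items()}
print(probs)  # "=" is twice as likely as "!=" after "foo" in this corpus
```

There is no concept of foo or bar anywhere in there, only counts. Whichever continuation was more frequent in the training text wins, regardless of whether it is true.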
E.g. it will end up doubling down rather than admitting error, more so when many responses in the corpus contained similar sequences, i.e. when the doubling down is better supported by actual people's thoughts and words. All the smarts, the ability to think, to lie, to have any motivation whatsoever, come from the people's words fed into the model. It isn't in any way, shape, or form intelligent. It can't, per se, lie, or even hallucinate. It has no thoughts and no intents.