It knows the answer it's giving you is wrong, and it will even say as much. I'd consider that intent.
It is incapable of knowledge; it is math. What it says is determined by what is fed into it. If it admits to lying, that's because it was trained on texts that admit to lying, and the math says the most likely continuation is an apology built from those tokens, with those probability weights, and so on.
It apologizes because the math says that the most likely response is to apologize.
Edit: you can just ask it y'all
https://chatgpt.com/share/67c64160-308c-8011-9bdf-c53379620e40
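To make the "it's just math" point concrete, here's a minimal toy sketch of next-token selection. The vocabulary and logit values are made up for illustration and don't come from any real model: the model scores every candidate token, a softmax turns the scores into probabilities, and the "reply" is simply whichever token comes out on top.

```python
import math

# Hypothetical scores (logits) a model might assign to candidate next
# tokens after a prompt like "Did you just make that answer up?"
# These numbers are invented for the example.
logits = {
    "I": 2.1,
    "apologize": 3.4,   # highest score -> most probable continuation
    "Yes": 1.7,
    "banana": -4.0,
}

def softmax(scores: dict[str, float]) -> dict[str, float]:
    """Turn raw scores into a probability distribution over tokens."""
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
for token, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{token!r}: {p:.3f}")

# Greedy decoding: the "apology" wins purely because its probability is
# highest, not because the model knows or intends anything.
print("next token:", max(probs, key=probs.get))
```

Real models do this over tens of thousands of tokens at every step (and usually sample from the distribution rather than always taking the top token), but the principle is the same: the output is whatever the weights make most probable given the input.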
...how is it incapable of something it is actively doing? What do you think happens in your brain when you lie?
@Ulrich @ggppjj does it help to compare an image generator to an LLM? With AI art, a computer can produce an image without "knowing" anything more than what other art of that type looks like. But if you look closer you can also see that it doesn't "know" a lot: extra fingers, hair made of cheese, whatever. LLMs do the same with words. They just calculate what words might realistically sit next to each other given the context of the prompt. It's plausible babble.