this post was submitted on 03 Mar 2025
883 points (99.3% liked)
Technology
It knows the answer it's giving you is wrong, and it will even say as much. I'd consider that intent.
It is incapable of knowledge; it is math. What it says is determined by what was fed into it. If it "admits" to lying, that's because it was trained on texts that admit to lying, and the math says the most likely continuation is an apology, built token by token from the learned probability weights.
It apologizes because math says that the most likely response is to apologize.
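A minimal sketch of what "the math" means here. This is a toy next-token step with invented logits (the real model derives its scores from billions of learned weights, but the final step is the same softmax-and-sample):

```python
import math
import random

# Toy vocabulary and made-up scores for the next token after
# "I'm sorry, I" -- these numbers are invented for illustration.
vocab = ["apologize", "lied", "refuse", "banana"]
logits = [4.0, 2.5, 1.0, -3.0]

# Softmax turns raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Sample the next token in proportion to those probabilities.
# No beliefs, no intent -- just a weighted draw.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)})
print("next token:", next_token)
```

Run it a few times: "apologize" comes out most often simply because its score is highest, which is the whole story behind the apology.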
Edit: you can just ask it y'all
https://chatgpt.com/share/67c64160-308c-8011-9bdf-c53379620e40
...how is it incapable of something it is actively doing? What do you think happens in your brain when you lie?
What do you believe that it is actively doing?
Again, it is very cool and impressively good math that produces the next word most likely to follow what came before. These models do not think. Even models that "deliberate" are essentially self-reinforcing that same internal math, using what is basically a second LLM to keep the first on task, because that appears to distribute the probabilities better.
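That "deliberation" loop can be sketched abstractly as draft-score-repeat. The functions below are stand-ins I made up, not any real model API; the point is only that the second component is another scoring function, not a source of understanding:

```python
import random

def draft(prompt: str) -> str:
    # Stand-in for the first LLM: emits one of several canned drafts.
    return random.choice(["draft A", "draft B", "draft C"])

def critique(prompt: str, answer: str) -> float:
    # Stand-in for the second model: scores how on-task a draft looks.
    # Fixed scores here; a real critic is itself just more learned math.
    return {"draft A": 0.2, "draft B": 0.9, "draft C": 0.5}[answer]

def deliberate(prompt: str, rounds: int = 5) -> str:
    # Keep the best-scoring draft. One probability machine steering
    # another -- no thinking required anywhere in the loop.
    best, best_score = "", -1.0
    for _ in range(rounds):
        candidate = draft(prompt)
        score = critique(prompt, candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

print(deliberate("Why did you lie?"))
```

The output "improves" with more rounds only because the scorer filters the sampler harder, which is exactly the self-reinforcement described above.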
I will not answer the brain question until LLMs have brains, too.