"It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." -Pamela McCorduck´.
It's called the AI Effect.
As Larry Tesler put it, "AI is whatever hasn't been done yet."
Yesterday I asked an LLM, "How much energy is stored in a grand piano?" It responded that there is no energy stored in a grand piano because it doesn't have a battery.
Any reasoning human would have understood that question to be referring to the tension in the strings.
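For comparison, a reasoning answer could be a back-of-envelope estimate of the elastic energy in the strings, something like the sketch below. Every number in it (string tension, stress, lengths) is an assumed ballpark figure, not a measurement of any actual piano.

```python
# Rough back-of-envelope estimate of the elastic energy stored in a grand
# piano's strings. Every figure is an assumed ballpark value, not a measurement.
YOUNG_MODULUS_STEEL = 200e9  # Pa, typical stiffness of steel wire
STRING_STRESS = 1.0e9        # Pa, assumed working stress of piano wire
TOTAL_TENSION = 180e3        # N, assumed combined tension of all the strings
AVG_STRING_LENGTH = 1.0      # m, assumed average speaking length of a string

# Elastic energy density of a stretched wire: u = sigma^2 / (2 * E)
energy_density = STRING_STRESS**2 / (2 * YOUNG_MODULUS_STEEL)  # J/m^3

# Total cross-sectional area follows from tension and stress (A = F / sigma),
# so the total string volume is roughly that area times the average length.
total_area = TOTAL_TENSION / STRING_STRESS      # m^2
string_volume = total_area * AVG_STRING_LENGTH  # m^3

print(f"~{energy_density * string_volume:.0f} J stored in the strings")
# prints roughly "~450 J" with these assumptions, i.e. a few hundred joules
```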
Another example: I asked, "Does lime cause kidney stones?" It didn't assume I meant lime the mineral and went with lime the citrus fruit instead.
Once again a reasoning human would assume the question is about the mineral.
Ask these questions again in a slightly different way and you might get a correct answer, but it won't be because the LLM was thinking.
I'm not sure how you arrived at lime the mineral being the more likely interpretation than lime the fruit. I'd expect someone asking about kidney stones to be asking about foods that are commonly consumed.
This kind of just goes to show there are multiple ways something can be interpreted. Maybe a smart human would ask for clarification, but AIs today will just happily spit out the first answer that comes up. LLMs are extremely "good" at making up answers to leading questions, even when those answers are completely false.
Making up answers is kinda their entire purpose. LLMs are fundamentally just text generation algorithms: they are designed to produce text that looks like it could have been written by a human. Which they are amazing at, especially once you take into account how many paragraphs of instructions you can give them, which they tend to follow rather successfully.
The one thing they can't do is verify whether what they are saying is true, since it's all just slapping words together using probabilities. If they could, they would stop being LLMs and start being AGIs.
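To make the "slapping words together using probabilities" point concrete, here's a minimal sketch of the sampling loop at the heart of text generation. The tiny bigram table is a made-up toy stand-in for a real model's learned distribution; an actual LLM conditions on the whole context with a neural network, but the loop itself has no notion of truth either way.

```python
import random

# Toy stand-in for a learned next-token distribution (purely made up).
NEXT_TOKEN_PROBS = {
    "<start>": {"the": 1.0},
    "the":     {"piano": 0.5, "strings": 0.3, "battery": 0.2},
    "piano":   {"has": 0.6, "stores": 0.4},
    "strings": {"store": 0.7, "hold": 0.3},
    "battery": {"stores": 1.0},
    "has":     {"no": 1.0},
    "stores":  {"energy": 1.0},
    "store":   {"energy": 1.0},
    "hold":    {"tension": 1.0},
    "no":      {"battery": 1.0},
}

def generate(max_tokens=8):
    """Pick each next token purely by probability; nothing here checks
    whether the resulting sentence is actually true."""
    token, output = "<start>", []
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(token)
        if not dist:
            break
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        output.append(token)
    return " ".join(output)

print(generate())  # e.g. "the piano has no battery stores energy"
```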