
I found the article in a post on the fediverse, and I can't find it anymore.

The researchers asked an LLM a simple mathematical question (like 7+4) and could then see how it worked internally by tracing its internal paths; what they found looked nothing like performing mathematical reasoning, even though the final answer was correct.

Then they asked the LLM to explain how it found the result, i.e. what its internal reasoning was. The answer was detailed, step-by-step mathematical logic, like a human explaining how to perform an addition.

This showed 2 things:

  • LLMs don't "know" how they work

  • the second answer was a rephrasing of training text that explains how math works, so the LLM just used that as its explanation
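
If you want to poke at the behavioral half of this yourself, here's a minimal sketch (my own toy example, not the researchers' setup; the model and prompts are placeholders): ask a model the question, then separately ask it to explain how it got the answer. Both replies are ordinary generated text, so the second one can't be a readout of whatever actually happened internally during the first.

```python
# Toy illustration only (not the article's methodology): two independent
# queries to the same small open model. The "explanation" is just more
# generated text, not a trace of the first forward pass.
from transformers import pipeline

gen = pipeline("text-generation", model="gpt2")  # placeholder model

answer = gen("Q: What is 7+4?\nA:", max_new_tokens=20)[0]["generated_text"]
explanation = gen(
    "Q: Explain step by step how you computed 7+4.\nA:",
    max_new_tokens=60,
)[0]["generated_text"]

print(answer)       # whatever the model happens to complete
print(explanation)  # tends to read like a textbook walkthrough of addition,
                    # regardless of how the answer was actually produced
```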

I think it was a very interesting and meaningful analysis.

Can anyone help me find this?

EDIT: thanks to @theunknownmuncher@lemmy.world, it's this one: https://www.anthropic.com/research/tracing-thoughts-language-model

EDIT2: I'm aware LLMs don't "know" anything and don't reason, and that's exactly why I wanted to find the article. Some more details here: https://feddit.it/post/18191686/13815095

theunknownmuncher@lemmy.world · 8 points · 6 days ago
ohwhatfollyisman@lemmy.world · 7 points · 6 days ago

but this article espouses that llms do the opposite of logic, planning, and reasoning?

quoting:

Claude, on occasion, will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps. We show this by asking it for help on a hard math problem while giving it an incorrect hint. We are able to “catch it in the act” as it makes up its fake reasoning,

are there any sources which show that llms use logic, conduct planning, and reason (as was asserted in the 2nd level comment)?

theunknownmuncher@lemmy.world · 6 points · 6 days ago

No, you're misunderstanding the findings. The article does show that LLMs do not explain their reasoning when asked, which makes sense and is expected: they have no access to their own inner workings, so they generate a response that "sounds" right, while tracing their internal logic shows they operate differently from what they claim when asked. You can't ask an LLM to explain its own reasoning. But the article also shows the progress that's been made in tracing what happens under the hood, and the surprising results, such as the model planning ahead, which defeats the misconception that it is just "autocomplete".
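
For what it's worth, here's a crude way to see the "under the hood vs. what it says" distinction without Anthropic's tooling (a rough sketch with a placeholder model, nowhere near the attribution tracing the article describes): capture the per-layer hidden states while the model completes the arithmetic prompt. Tensors like these are what interpretability work probes; the verbal explanation comes from a separate forward pass and never reads them.

```python
# Rough sketch, not Anthropic's method: grab intermediate hidden states
# while a small open model completes "7+4=". Interpretability research
# probes tensors like these; asking the model to "explain itself" never
# touches them.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any small causal LM illustrates the point
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

inputs = tok("7+4=", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# One tensor per layer (plus the embedding layer), each [batch, seq_len, hidden_dim].
print("hidden-state tensors captured:", len(out.hidden_states))
next_token = tok.decode(out.logits[0, -1].argmax().item())
print("model's next token:", repr(next_token))
```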