this post was submitted on 08 Jun 2025
829 points (95.5% liked)

Technology

LOOK MAA I AM ON FRONT PAGE

(page 2) 50 comments
[–] communist@lemmy.frozeninferno.xyz 11 points 3 days ago* (last edited 3 days ago) (16 children)

I think it's important to note (I'm not an LLM, I know that phrase triggers you to assume I am) that they haven't proven this is an inherent architectural issue, which I think would be the next step for that assertion.

Do we know that they don't reason and are incapable of it, or do we just know that for certain problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don't? That's the big question that needs to be answered. It's still possible that we just haven't properly incentivized reasoning over memorization during training.

If someone can objectively answer "no" to that, the bubble collapses.

[–] ZILtoid1991@lemmy.world 11 points 4 days ago (1 children)

Thank you, Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.

[–] TheFriar@lemm.ee 6 points 3 days ago (2 children)

Yeah, well, there are a ton of people literally falling into psychosis, led by LLMs. So unfortunately, not that many people already knew it.

[–] Blaster_M@lemmy.world 10 points 4 days ago* (last edited 3 days ago) (1 children)

Would like a link to the original research paper, instead of a link to a screenshot of a screenshot.

[–] melsaskca@lemmy.ca 9 points 3 days ago (1 children)

It's all "one instruction at a time," regardless of high processor speeds and words like "intelligent" being bandied about. "Reason" discussions should fall into the same query bucket as "sentience."

[–] BlaueHeiligenBlume@feddit.org 8 points 4 days ago (1 children)

Of course; that is obvious to anyone with basic knowledge of neural networks, no?

[–] Harbinger01173430@lemmy.world 8 points 3 days ago

XD So, like a regular school/university student who just wants to get passing grades?
