OK, and? A car doesn't run like a horse either, yet they are still very useful.
I'm fine with the distinction between human reasoning and LLM "reasoning".
Fair, but the same is true of me. I don't actually "reason"; I just have a set of algorithms memorized by which I propose a pattern that seems like it might match the situation, then a different pattern by which I break the situation down into smaller components and then apply patterns to those components. I keep the process up for a while. If I find a "nasty logic error" pattern match at some point in the process, I "know" I've found a "flaw in the argument" or "bug in the design".
But there's no from-first-principles method by which I developed all these patterns; they're just the ones that have survived the test of time while other patterns failed me.
I don't think people are underestimating the power of LLMs to think; I just think people are overestimating the power of humans to do anything other than language prediction and sensory pattern prediction.
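(If I had to sketch that loop in code, it would look something like this toy Python. Every name here, reason, decompose, the depth cutoff, is my own hypothetical illustration of the analogy, not a claim about how brains or LLMs actually work.)

    from typing import Callable, Optional

    # A "pattern" looks at a situation and either returns a conclusion or None.
    Pattern = Callable[[str], Optional[str]]

    def reason(situation: str,
               patterns: list[Pattern],
               decompose: Callable[[str], list[str]],
               depth: int = 3) -> Optional[str]:
        """Try every memorized pattern; if none fits, break the situation
        into smaller components and recurse on each of those."""
        for pattern in patterns:
            conclusion = pattern(situation)
            if conclusion is not None:  # e.g. a "nasty logic error" match
                return conclusion
        if depth == 0:
            return None  # no pattern survived; give up
        for component in decompose(situation):
            conclusion = reason(component, patterns, decompose, depth - 1)
            if conclusion is not None:
                return conclusion
        return None

    # Usage: one toy pattern that flags a contradiction as a flaw.
    patterns = [lambda s: "flaw in the argument" if "contradiction" in s else None]
    print(reason("a proof that hides a contradiction", patterns, lambda s: []))

Nothing in that loop requires understanding, just matching and decomposition, which is the point.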
What's the news here? I don't trust this guy if he thought it wasn't already common knowledge that AI is overdriven pattern matching.
It has so much data that it might as well be reasoning; it certainly helped me with my problem.
What a dumb title. I proved it for myself by asking a series of questions. It's not AI; stop calling it AI. It's a dumb-af language model. Can you get a ton of help from it as a tool? Yes! Can it reason? NO! It never could, and for the foreseeable future it will not.
It's phenomenal at patterns, much, much better than us meat peeps. That's why these models are accurate as hell when it comes to analyzing medical scans.