Because it tests what you actually retained, not what you can convince an AI to tell you.
But what good is that if AI can do it anyway?
That is the crux of the issue.
Years ago the same thing was said about calculators, then graphing calculators. I had to drop a stat class and take it again later because the dinosaur didn't want me to use a graphing calculator. I have ADD (undiagnosed at the time) and the calculator was a big win for me.
Naturally they were all full of shit.
But this? This is different. AI is currently as good as a graphing calculator for some engineering tasks, horrible for some others, excellent at still others. It will get better over time. And what happens when it's awesome at everything?
What is the use of being the smartest human when you're easily outclassed by a machine?
If we get fully automated yadda yadda, do many of us turn into mush-brained idiots who sit around posting all day? Everyone retires and builds Adirondack chairs and sips mint juleps and whatever? (That would be pretty sweet. But how to get there without mass starvation and unrest?)
Alternately, do we have to do a Butlerian Jihad to get rid of it, and threaten execution to anyone who tries to bring it back... only to ensure we have capitalism and poverty forever?
These are the questions. You have to zoom out to see them.
Because if you don't know how to tell when the AI succeeded, you can't use it.
To know when it succeeded, you must know the topic.
The calculator is predictable and verifiable. An LLM is not.
I'm not sure what you're implying. I've used it to solve problems that would've taken days to figure out on my own, and my solutions might not have been as good.
I can tell whether it succeeded because its solutions either work, or they don't. The problems I'm using it on have that property.
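To illustrate what I mean by that property, here's a minimal sketch with a made-up toy problem (the function names and the task are mine, purely for illustration): the check doesn't care where the solution came from, only whether it passes.

```python
# Hypothetical sketch: the only thing trusted here is the check itself.
# llm_suggested_sort stands in for code pasted from a model.
import random

def llm_suggested_sort(xs):
    return sorted(xs)  # placeholder body standing in for model output

def passes(candidate, reference, trials=1000):
    """Compare the candidate against a known-good reference on random inputs."""
    for _ in range(trials):
        data = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        if candidate(list(data)) != reference(list(data)):
            return False  # one counterexample is enough to reject
    return True

print(passes(llm_suggested_sort, sorted))  # True -> it worked; False -> it didn't
```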
That says more about you.
There are a lot of cases where you cannot know if it worked unless you have expertise.
This still seems too simplistic. You say you can't know whether it's right unless you know the topic, but that's not a binary condition. I don't think anyone "knows" a complex topic to its absolute limits. That would mean they had learned everything about it that could be learned, and there would be no possibility of there being anything else in the universe for them to learn about it.
An LLM can help fill in gaps, and you can use what you already know as well as credible resources (e.g., textbooks) to vet its answer, just as you would use the same knowledge to vet your own theories. You can verify its work the same way you'd verify your own. The value is that it may add information or some part of a solution that you wouldn't have. The risk is that it misunderstands something, but that risk exists for your own theories as well.
This approach requires skepticism. The risk would be that the person using it isn't sufficiently skeptical, which is the same problem as relying too much on their own opinions or those of another person.
For example, someone studying statistics for the first time would want to vet any non-trivial answer against the textbook or the professor rather than assuming the answer is correct. Whether the answer comes from themselves, the student in the next row, or an LLM doesn't matter.
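To make that concrete, here's a minimal sketch of the kind of vetting I mean (the dataset and the "claimed" value are invented for illustration): recompute the answer from the textbook definition and compare.

```python
# Made-up example: vetting an "LLM answer" for a sample standard deviation
# against the textbook formula. The dataset and the claimed value are
# invented; the point is the recomputation, not the numbers.
import math

data = [4.0, 8.0, 6.0, 5.0, 3.0]
claimed = 1.7205  # suppose the LLM gave this

n = len(data)
mean = sum(data) / n
sample_sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # divide by n-1

print(f"recomputed: {sample_sd:.4f}  claimed: {claimed}")
# Here they disagree: 1.9235 vs 1.7205. The "answer" used the population
# formula (divide by n) -- exactly the kind of slip checking the textbook catches.
```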
The problem is offloading critical thinking to a black box of questionably motivated design. Did you use it to solve problems, or did you use it to find a sufficient approximation of a solution? If you can't deduce why the given solution works, then it is literally unknowable whether your problem is solved; you're just putting faith in an algorithm.
There are also political reasons we'll never get luxury gay space communism from it. General AI is the wet dream of every authoritarian: an unverifiable, omnipresent, first-line source of truth that will shift the narrative to whatever you need.
The brain is a muscle and critical thinking is trained through practice; not thinking will never be a shortcut for thinking.
It can't. It just fucking can't. We're all pretending it does, but it fundamentally can't.
https://appleinsider.com/articles/24/10/12/apples-study-proves-that-llm-based-ai-models-are-flawed-because-they-cannot-reason
Creative thinking is still a long way beyond reasoning as well. We're not close yet.
It can and it has done creative mathematical proof work. Nothing spectacular, but at least on par with a mathematics grad student.
Specialized AI like that is not what most people know as AI; when most people say AI, they mean LLMs.
Specialized AI, like that showcased, is still decades away from generalized creative thinking. You can't ask it to run a science experiment in a classroom because it just can't; it's only built for math proofs.
Again, my argument isn't that it will never exist.
Just that it's so far off, it'd be like trying to write smartphone laws in the '90s. We would have had only pipe dreams as to what the tech could be, never mind its broader social context.
So talk to me when it can, in the case of this thread, deliver clinically validated ways of teaching. We're still decades from that.
Show me a human that can do it.
https://scholar.google.com/scholar?hl=en&as_sdt=0%2C10&q=children+learning+from+humans#d=gs_qabs&t=1747921831528&u=%23p%3DDqyOK2jEfjQJ
EDIT: you can literally get a PhD in many forms of education and have an entire career studying it.
It's already capable of doing a lot, and there is reason to expect it will get better over time. If we stick our fingers in our ears and pretend that's not possible, we will not be prepared.
If you read the article, it's capable of very little under the surface of what it appears to be.
Show me one that is well studied, like clinical trial levels, then we'll talk.
We're decades away at this point.
My overall point is that it's just as meaningless to talk about now as it was in the '90s, because we can't conceive of what a functioning product will be, never mind its context in a greater society. When we have it, we can discuss it then, since we'll have something tangible to discuss. But where we'll be in decades is hard to regulate now.
AlphaFold. We're not decades away. We're years away at worst.
Only if you assume the unlimited power needed right now to run AlphaFold at the scale of all human education.
We have, at best, proofs of concept that computers can talk. But LLMs don't have any way of actually knowing anything behind the words. That's kinda the problem.
And it's not a "we'll figure out the one trick" situation; more fundamentally, how it works doesn't allow for that to happen.
If you want to compare a calculator to an LLM, you could at least reasonably expect the calculator result to be accurate.
Why? Because you put trust in the producers of said calculators not to fuck it up? Or because you trust others to vet those machines? Or are you personally validating them? Unless you're disassembling those calculators and inspecting their chipsets, you're just putting your trust in someone else and claiming "this magic box is more trustworthy."
A combination of personal vetting via analyzing output and the vetting of others. For instance, the Pentium FDIV bug was in the news. Otherwise, calculation by computer processor is well understood, and the technology is accepted for use in cases involving human lives.
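(Side note, since it's a neat example of public vetting: the flawed division behind that Pentium bug is easy to check yourself. A minimal sketch in Python; the numbers are the widely circulated test case, not mine.)

```python
# The widely circulated check for the Pentium FDIV bug: on a correctly
# rounding FPU this prints 0.0; flawed Pentiums famously returned 256.
x, y = 4195835.0, 3145727.0
print(x - (x / y) * y)  # 0.0 on a correct processor
```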
In contrast, there are several documented cases where LLMs have been incorrect in the news, to the point where I don't need personal vetting. No one is anywhere close to stating that LLMs can be used in cases involving human lives.
How exactly do you think those instances got into the news in the first place? I'll give you a hint: people ARE vetting them and reporting when they're fucking up. It is a bias, plain and simple. People are absolutely using AI in cases involving humans.
https://www.nytimes.com/2025/03/20/well/ai-drug-repurposing.html
https://www.advamed.org/2024/09/20/health-care-ai-is-already-saving-lives/
https://humanprogress.org/doctors-told-him-he-was-going-to-die-then-ai-saved-his-life/
Your opinions are simply biased and ill-informed. This is only going to grow and become a larger and larger dataset. Just like the self-driving taxis: everyone likes to shit on them while completely ignoring the truth and the statistics, all while acting like THIS MOMENT RIGHT NOW is the best they're ever going to get.
I didn't say AI, I said LLM.
It often is. I've got a lot of use out of it.