this post was submitted on 25 Nov 2025
668 points (98.8% liked)
AI is a technology, and like any technology, it improves. The AI we had two years ago was something akin to the Wright Flyer; what we have now is the equivalent of a biplane. Those early examples of the technology weren't very useful, but the planes that followed were far more capable and economical.
Your assertion that AI is useless is merely burying your head in the sand and hoping things will turn out alright. The outright refusal of AI by people like you only ensures that the most evil people can use it. This is like allowing only Nazis to own guns, forbidding peasants to own land, or letting newspapers be owned only by the wealthiest.
It is power that you are giving up, and power doesn't care about who has it.
Hallucinations are an intrinsic part of how LLMs work. OpenAI, literally the people with the most to lose if LLMs aren't useful, has admitted that hallucinations are a mathematical inevitability, not something that can be engineered around. On top of that, it's been shown that for tasks like mathematical proof finding, switching to more sophisticated models doesn't make them more accurate; it just makes their arguments more convincing.
Now, you might say, "oh, but you can have a human in the loop to check the AI's work", but for programming tasks it's already been found that using LLMs makes programmers less productive. If a human needs to go over everything an AI generates, and reason about it anyway, that's not really saving time or effort. Now consider that as you make the LLM more complex, having it generate longer and more complicated blocks of text, its errors also become harder to detect. Is that not just shuffling around the necessary human brainpower for a task instead of reducing it?
So, in what field is this sort of thing useful? At one point I was hopeful that LLMs could be used for text summarization, but if I have to read the original text anyway to make sure I haven't been fed some highly convincing falsehood, then what's the point?
Currently I'm of the opinion that we might be able to use specialized LLMs as a heuristic to narrow the search tree for things like SAT solvers and answer set generators, but I don't have much optimism for other use cases.
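To make that last idea concrete: in a classic DPLL-style SAT search, the branching heuristic is a pluggable function, so any external scorer can slot in without touching the soundness of the solver. The sketch below is a minimal, assumed setup — `frequency_heuristic` is a hypothetical stand-in for whatever scores a specialized model would supply, not any real LLM integration; a wrong score only slows the search, it can't produce a wrong answer.

```python
from collections import Counter

def simplify(clauses, lit):
    """Assign literal `lit` True: drop satisfied clauses, shrink the rest.

    Clauses are lists of nonzero ints (DIMACS style: -n means NOT n).
    Returns None if some clause becomes empty (a conflict).
    """
    out = []
    for clause in clauses:
        if lit in clause:
            continue                      # clause satisfied, drop it
        reduced = [l for l in clause if l != -lit]
        if not reduced:
            return None                   # empty clause: contradiction
        out.append(reduced)
    return out

def dpll(clauses, pick_var):
    """Classic DPLL search; `pick_var` chooses the branching variable."""
    if not clauses:
        return True                       # every clause satisfied
    var = pick_var(clauses)
    for lit in (var, -var):               # try True, then False
        reduced = simplify(clauses, lit)
        if reduced is not None and dpll(reduced, pick_var):
            return True
    return False

def frequency_heuristic(clauses):
    """Stand-in scorer: branch on the most frequent variable.
    An LLM-derived score table would slot in here instead."""
    counts = Counter(abs(l) for clause in clauses for l in clause)
    return counts.most_common(1)[0][0]
```

For example, `dpll([[1, 2], [-1]], frequency_heuristic)` returns `True` (satisfiable with x1 false, x2 true), while `dpll([[1], [-1]], frequency_heuristic)` returns `False`. Because a bad heuristic only reorders the search, this is exactly the kind of place where an unreliable model can still help.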