Either you’re using them rarely or just not noticing the issues. I mainly use them for looking up documentation, and recently had Google’s AI screw up how Sets work in JavaScript. If it makes mistakes on something that well-documented, how is it doing on less well-documented topics?
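For anyone wondering what “how Sets work” covers: the semantics are small and well specified, which is what makes getting them wrong so telling. A minimal TypeScript sketch of the actual behaviour (the thread doesn’t say exactly what Google’s AI got wrong, so this just shows the basics it should have known):

```typescript
// JavaScript/TypeScript Set basics: values are deduplicated using
// SameValueZero equality, and objects are compared by reference.
const nums = new Set([1, 2, 2, 3]);
console.log(nums.size);        // 3 — the duplicate 2 is stored once

console.log(nums.has(2));      // true
nums.delete(2);
console.log(nums.has(2));      // false after deletion

// NaN equals itself under SameValueZero, so it is stored only once.
const weird = new Set([NaN, NaN]);
console.log(weird.size);       // 1

// Two structurally identical objects are still distinct entries.
const objs = new Set([{ id: 1 }, { id: 1 }]);
console.log(objs.size);        // 2

// Iteration follows insertion order.
console.log([...nums]);        // [1, 3]
```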
A hallucination isn’t just any mistake, if I understand it correctly. LLMs do make mistakes, and that’s the primary reason I don’t use them for my coding job.
About a year ago, ChatGPT made up a Python library, complete with a made-up API, to solve the particular problem I asked about. The most recent hallucination I can recall was it claiming that "manual" is a keyword in PostgreSQL, which it is not. What is a hallucination if not the AI being confidently wrong by making up something that isn’t true?
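For what it’s worth, that kind of claim is cheap to check: PostgreSQL exposes its own keyword list through the built-in pg_get_keywords() function. A minimal sketch using the node-postgres (pg) client; the package and the DATABASE_URL connection string are assumptions for illustration, not anything from the thread:

```typescript
import { Client } from "pg"; // node-postgres client (assumed dependency)

// Ask PostgreSQL itself whether a word is one of its keywords.
async function isPostgresKeyword(word: string): Promise<boolean> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    // pg_get_keywords() returns every word PostgreSQL treats as a keyword.
    const res = await client.query(
      "SELECT 1 FROM pg_get_keywords() WHERE word = lower($1)",
      [word],
    );
    return (res.rowCount ?? 0) > 0;
  } finally {
    await client.end();
  }
}

// "manual" returns no rows, confirming it is not a keyword.
isPostgresKeyword("manual").then((found) =>
  console.log(found ? "keyword" : "not a keyword"),
);
```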
It's more that the hallucinations come from the fact that we've trained them to be incapable of admitting failure or incompetence.
Humans produce the exact same "hallucinations" if you give them a job and then tell them they're never, for any reason, allowed to admit they don't know something.
You end up with only the people who are willing to lie, bullshit, and sound incredibly confident.
We literally reinvented the politician with LLMs.
None of the big models are trained to be actually accurate, only to give results no matter what.
I use them at work to get instructions for running processes, and no matter how much detail I give ("it is version X, the OS is Y"), they still give me commands that don't work on my version, bad error-code analysis, and so on.