Why the fuck do people ask ChatGPT for shit like this? ChatGPT doesn't know facts. It's a magic 8-ball with more words.
Asking ChatGPT can be super useful for getting info. I just don't understand why people don't try to verify what it says before re-posting it as fact.
For basic fact checking like this, it's basically useless. You'd have to go look it up to verify anyway, so it's just an extra step. There are use cases for it, but this isn't one of them.
Explain AI in 10 words or less:
If you're just going to verify the info anyway, why not look it up yourself and save some time?
It depends on what info you're trying to find.
I was recently trying to figure out the name of a particular uncommon type of pipe fitting. I could describe what it looked like, but had no idea what it was called. I described it to chatgpt, which gave me a name, which I could then search for with a normal search engine to confirm that the name was correct. Sure enough, search results took me to plumbing supply companies selling it, with pictures that matched what I described.
But asking it when a particular feature got added to a piece of software? There's no additional information in the answer to help you confirm that it's correct.
ETA: The above strategy has also failed me many times, though, in cases where ChatGPT gave me information and follow-up searches only confirmed that it had hallucinated the answer. Just wanted to say that to reinforce that you have to assume it's hallucinating until you get independent confirmation.
You should use something like Perplexity instead, which actually provides links to where it found the information. It will still make shit up, but at least it's easier to tell when it does.
Why bother even using CGPT when you have to go elsewhere to verify everything it says anyway?
It depends on the type of facts, but sometimes it's much easier to verify an answer than to get the answer in the first place. For example, sometimes the LLM will mention a keyword you didn't know or didn't remember, which makes googling much easier.
The only thing it's useful for is shit that isn't necessary.
We had a P&Z member at the city I work at get butthurt because we corrected him at a meeting, so the city manager asked me to write an apology letter to him.
That was the one time I loved ChatGPT. It was bullshit that didn't need to happen, that I didn't care about, and that achieved nothing, so I let the fucking bot write it.