At least some editor will usually make sure Wikipedia is correct. There's nobody ensuring ChatGPT is correct.
Just using the "information" it regurgitates isn't very useful, which is why I didn't recommend doing that. Whether the information summarized by Wikipedia or ChatGPT is accurate isn't really the point; you use those tools to find primary sources.
I'd argue that it's very important, especially since more and more people are using it. Wikipedia is generally correct, and people, myself included, edit incorrect things. ChatGPT is a black box with no user feedback. It's also wasteful to run an inefficient LLM for something that a regular search and a few minutes of time, costing maybe a bite of an apple's worth of energy, could easily handle. And after all that, you still need to check all the sources ChatGPT used anyway, so how much time is it really saving you? At least with Wikipedia I know other people have looked at the same things I'm looking at, and a small percentage of those people will actually correct errors.
Many people aren't using it as a legitimate research aid the way you describe; they're just pasting its output directly onto the internet. That's the use case I dislike the most.
From what I can tell, running an LLM isn't all that energy-intensive; it's the training that takes loads of energy. And it's not like regular search is free either: indexing web results in the first place takes loads of energy.
And this also ignores the gap between having a question and knowing how to search for the answer. You might not even know where to start. Maybe you can search a vague question, but then you're essentially hoping that somewhere in the first few results there's a relevant discussion to get you on the right path. GPT, I find, is more efficient at getting from vague questions to more directed queries.
I find this attitude much more troubling than responsible LLM use. You should not be trusting tertiary sources, no matter how good their track record; you should be checking the sources Wikipedia uses too. You should always be checking your sources.
That's beyond the scope of my argument, and not really much worse than pasting directly from any tertiary source.