Generative AI suffers from inaccuracy; text generators make up believable lies when they don't have enough information.
Accuracy isn't really the point of generative AI, so that's pretty expected.
Generative AI is designed to work from a content base and expand on that information, not to create new information. You could feed it the entirety of the current Wikipedia text and have it expand the subjects that need more depth, and condense and simplify the ones that need trimming.
You don't ask generative AI to come up with new information--that's how you get inaccurate information.
Let's not anthropomorphize AI. It doesn't lie. When it lacks sufficient information on a subject, it uses whatever data it has to make the answer conversationally complete, regardless of whether the result is correct. That's completely different, and you can specifically prohibit an AI from doing that...
AI is great when used appropriately. The issue is that people are using AI as a Google replacement, something it's not designed to do. AI isn't a fact engine. LLMs are designed to resemble human speech as closely as possible, not to give correct answers to questions. People's issue with AI is that they're fucking using it wrong.
This is an exceptionally good use of AI because you already have the required factual background knowledge. You can simply feed it to your AI, telling it not to fill in any gaps and to rewrite articles so they're more uniform, with direct, easy-to-read wording. This is quite literally what generative AI was designed for: to take factual knowledge and generate context around the existing data.
Issues arise when you use AI for things other than what it was intended for, don't give it enough information, and force it to generate content to fill in the gaps. AI will do what you ask; you just have to know how to ask it. That's why AI prompt engineers are a thing.
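To make that concrete, here's a minimal sketch of that kind of rewrite-only prompt, assuming the OpenAI Python SDK; the model name, prompt wording, and function name are placeholders I made up, not a prescribed setup:

```python
# Minimal sketch of a "rewrite only, don't invent" call, assuming the OpenAI
# Python SDK (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are rewriting encyclopedia text for clarity and a uniform style. "
    "Use ONLY facts present in the source text supplied by the user. "
    "Do not add names, dates, numbers, or claims that are not in the source. "
    "If the source lacks something a sentence would need, omit that sentence "
    "instead of guessing."
)

def rewrite_article(source_text: str, instruction: str = "Simplify and tighten the prose.") -> str:
    """Rewrite source_text without introducing facts that aren't already in it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0.2,      # low temperature: favor faithful rewording over creativity
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"{instruction}\n\nSOURCE TEXT:\n{source_text}"},
        ],
    )
    return response.choices[0].message.content
```

A low temperature and an explicit "omit instead of guessing" rule reduce, but don't eliminate, invented details, so the output is still a draft to check against the source.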
I still fear that mistakes may slip through, but those can be spotted if multiple people check the text.
There are Wikipedia pages that are really obscure (especially pages that aren't in English) that nobody would probably check to verify they're correct.
Exactly. At work, my team kinda sucks at communication but is great with facts (we're engineers, go figure), so they use gen AI to turn facts into nicer-to-read documentation and communication (e.g. personal reviews, emails, docs). The process is relatively smooth.
For that task, it works pretty well.
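To sketch what that "facts in, readable prose out" setup can look like (the helper name, prompt wording, and example facts below are purely illustrative, not what any team actually uses), the messages it builds can be sent to whichever chat model you already have:

```python
# Sketch of the "facts in, readable prose out" workflow: the engineers supply
# the facts, the model only supplies the wording. Runs standalone; the returned
# messages list uses the common chat format most LLM APIs accept.

def build_docs_prompt(facts: list[str], doc_type: str = "status email") -> list[dict]:
    """Package known-true facts into a rewrite-only drafting prompt."""
    fact_block = "\n".join(f"- {fact}" for fact in facts)
    return [
        {
            "role": "system",
            "content": (
                f"Draft a {doc_type} from the facts listed by the user. "
                "Every claim in the draft must come from those facts. "
                "Do not add details. Keep the tone plain and direct."
            ),
        },
        {"role": "user", "content": fact_block},
    ]

# Placeholder facts, purely for illustration.
messages = build_docs_prompt(
    facts=[
        "The migration to the new build system finished on schedule.",
        "CI time dropped from 40 minutes to 12 minutes.",
        "Two flaky integration tests are still being investigated.",
    ],
    doc_type="weekly status email",
)
```

Keeping the facts as a plain list also makes review easy: anything in the draft that isn't on the list is simple to spot and cut.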