this post was submitted on 26 Oct 2025
219 points (99.5% liked)

science

[–] obsoleteacct@lemmy.zip 3 points 3 days ago (5 children)

Are the downvotes because people genuinely think this is an incorrect answer, or because they dislike anything remotely pro-AI?

[–] CatsPajamas@lemmy.dbzer0.com 2 points 3 days ago (4 children)

Both, probably. Thought-terminating clichés and all that. The most useful tool maybe ever. Wild.

[–] entropicdrift@lemmy.sdf.org 5 points 3 days ago* (last edited 3 days ago) (3 children)

I use LLMs daily as a professional software engineer. I didn't downvote you, and I'm not disengaging my thinking here. RAG (retrieval-augmented generation) doesn't solve everything, and it's better not to sacrifice scientific credibility on the altar of convenience.

It's always been easier to lie quickly than to dig for the truth. AIs are not consistent, regardless of the additional appendages you give them. They have no internal consistency by their very nature.

[–] CatsPajamas@lemmy.dbzer0.com 1 points 2 days ago

What would the failure rate on this be? What would the rate have to be to actually matter? It would literally just pull from the abstract and spit out yes/no/undecided; that information is right there in the abstract. There is very little chance of hallucinations frequent enough to meaningfully change anything.

Have you never had it organize things or analyze sentiment? I understand if that's not your use case, but this is a fundamentally easy application of AI.
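
For what it's worth, the task being described here is a plain one-shot stance classification over abstracts. Below is a minimal sketch of what that could look like, assuming the OpenAI Python client; the model name, prompt wording, and fallback behavior are all illustrative choices, not anything specified in the thread:

```python
# Minimal sketch of the yes/no/undecided abstract classifier described above.
# Assumes the OpenAI Python client (pip install openai); the model name and
# prompt are illustrative placeholders, not a recommendation from the thread.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = {"yes", "no", "undecided"}

def classify_abstract(question: str, abstract: str) -> str:
    """Return 'yes', 'no', or 'undecided' for whether the abstract supports the question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # hypothetical model choice
        temperature=0,         # reduce run-to-run variation for classification
        messages=[
            {"role": "system",
             "content": "Reply with exactly one word: yes, no, or undecided."},
            {"role": "user",
             "content": f"Question: {question}\n\nAbstract: {abstract}"},
        ],
    )
    label = response.choices[0].message.content.strip().lower()
    # Anything off-menu counts as a bad output; bucket it as undecided.
    return label if label in LABELS else "undecided"

def failure_rate(labeled: list[tuple[str, str, str]]) -> float:
    """Estimate the error rate against a hand-labeled sample.

    labeled = [(question, abstract, human_label), ...]
    """
    wrong = sum(classify_abstract(q, a) != gold for q, a, gold in labeled)
    return wrong / len(labeled)
```

Measuring the failure rate the comment asks about would then just mean running this over a few hundred hand-labeled abstracts and counting disagreements; re-running it on the same abstract several times would likewise put a number on the consistency concern raised above.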
