this post was submitted on 24 Oct 2025
196 points (100.0% liked)

science


Relatively new arXiv preprint that was featured in Nature News; I slightly adjusted the title to be less technical. The study used aggregated online Q&A data... one of the funnier sources being 2,000 popular questions from r/AmITheAsshole whose most-upvoted response was a YTA verdict. The study seems robust, and the authors also ran trials with several hundred human participants.

A separate preprint measured sycophancy across various LLMs in a math-competition context (https://arxiv.org/pdf/2510.04721), where GPT-5 was apparently the least sycophantic (+29.0) and DeepSeek-V3.1 the most (+70.2).

The Nature News report (which I find somewhat biased towards the researchers): https://www.nature.com/articles/d41586-025-03390-0

UnderpantsWeevil@lemmy.world 40 points 6 days ago

Honestly, one of the more annoying aspects of AI is when you get certain information, verify it, find out the information is inaccurate, go back to the AI to say "This seems wrong, can you clarify?" and have it respond "OMG, yes! You're such a smart little boy for catching my error! Here's some more information that may or may not be correct, good luck!"

I'm not even upset that it gave me the wrong answer the first time. I'm annoyed that it patronizes me for correcting its own mistakes. It feels like talking to a 1st-grade teacher who is trying to turn "I fucked up" into "You passed my test because you're so smart".