this post was submitted on 27 May 2025
1940 points (99.5% liked)

Programmer Humor

[–] merc@sh.itjust.works 6 points 2 days ago (1 children)

No, I'm sure you're wrong. There's a certain cheerful confidence you get from every LLM response: an upbeat, can-do attitude, brimming with confidence mixed with subservience, that is definitely not the standard way people communicate on the Internet, let alone on Stack Overflow. Sure, sometimes the people answering questions are overconfident, but theirs is usually an arrogant kind of confidence, not the subservient kind you get from LLMs.

I don't think an LLM can sound like it lacks confidence for the right reasons, but it can definitely imitate a lack of confidence if it's prompted correctly. To actually lack confidence, it would have to have an understanding of the situation. But to imitate a lack of confidence, all it would need to do is draw on the training data where the response to a question is one given without confidence.

Similarly, it's not like it actually has confidence normally. It's just been trained / meta-prompted to emit an answer in a style that mimics confidence.
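To make the "meta-prompted" point concrete, here's a minimal sketch of how a system prompt steers tone independently of content. It uses the messages format common to chat-completion APIs; the prompt wording is hypothetical, not taken from any actual vendor's system prompt.

```python
# Sketch: the same user question paired with two different (hypothetical)
# system prompts. The model's *style* follows the system turn; nothing about
# its actual knowledge changes.

CONFIDENT_STYLE = (
    "You are a helpful assistant. Answer directly and confidently."
)

HEDGED_STYLE = (
    "You are a cautious assistant. If you are not certain, say "
    "'I don't know' or qualify your answer explicitly."
)

def build_messages(system_prompt: str, question: str) -> list[dict]:
    """Assemble a payload in the common chat-completions messages format."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

question = "Why does my regex fail on multiline input?"
confident = build_messages(CONFIDENT_STYLE, question)
hedged = build_messages(HEDGED_STYLE, question)
# The user turn is identical in both payloads; only the system turn differs,
# which is all it takes to shift the response between confident and hedged.
```

The point of the sketch: "confidence" here lives entirely in a few lines of instruction text the user never sees, not in the model's grasp of the answer.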

[–] locuester@lemmy.zip 2 points 2 days ago* (last edited 2 days ago) (1 children)

ChatGPT went through a phase of overly bubbly, upbeat responses, though they chilled it out since. Not sure if that's what you saw.

One thing is for sure with all of them, they never say “I don’t know” because such responses aren’t likely to be found in any training data!

It’s probably part of some system level prompt guidance too, like you say, to be confident.

[–] merc@sh.itjust.works 1 points 2 days ago

I think "I don't know" might sometimes be found in the training data. But, I'm sure they optimize the meta-prompts so that it never shows up in a response to people. While it might be the "honest" answer a lot of the time, the makers of these LLMs seem to believe that people would prefer confident bullshit that's wrong over "I don't know".