this post was submitted on 25 Nov 2025
769 points (98.9% liked)

Programmer Humor

[–] entwine@programming.dev 4 points 19 hours ago (1 children)

I'm not sure I understand what you're saying. By "the commenter" do you mean the human or the AI in the screenshot?

Also,

For instance, many SoTA models are trained using reinforcement learning, so it's plausible that it's learned that spamming meaningless tokens can delay negative reward

What's a "negative reward"? You mean a penalty? First of all, I don't believe this makes sense either way because if the model was producing garbage tokens, it would be obvious and caught during training.

But even if it wasn't caught, and the model did in fact generate a bunch of garbage that didn't print out in the Claude UI, and the "simulated progress" explanation was just the model coming up with a plausible story for those garbage tokens, that still doesn't make it sentient (or even close).

[–] Tetragrade@leminal.space 1 points 14 hours ago* (last edited 14 hours ago)

I’m not sure I understand what you’re saying. By “the commenter”

I was talking about you, but not /srs; that was an attempt at satire. I'm dismissing the results by appealing to the fact that there's a known process producing them.

negative reward

"Reward" is a reinforcement-learning term. It's the value according to which the model's weights are updated, similar to "loss" or "error" if you've heard those, except that reward is maximised rather than minimised; a "negative reward" is effectively a penalty.
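
To make that concrete, here's a toy sketch (this is not how Claude or any real SoTA model is trained, just the textbook mechanics): a two-armed bandit with a softmax policy and a REINFORCE-style update. The point is that the reward is the number that scales the weight update, and a negative value pushes the policy away from the action it took.

```python
# Toy sketch: a 2-armed bandit with a softmax policy and a REINFORCE-style
# update, just to show what "reward" means mechanically.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                    # one preference ("weight") per action
true_reward = np.array([0.2, 1.0])     # hypothetical average reward per action
lr = 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(2000):
    probs = softmax(theta)
    action = rng.choice(2, p=probs)
    reward = true_reward[action] + rng.normal(0, 0.1)   # noisy reward signal

    # REINFORCE update: the reward scales the weight update. A high reward
    # pushes the policy toward the action taken; a negative reward pushes it
    # away, i.e. it acts as a penalty.
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    theta += lr * reward * grad_log_pi

print(softmax(theta))   # probability mass ends up on the higher-reward arm
```

Run it and the policy drifts toward the higher-reward arm; in the same way, whatever behaviour delays or avoids low reward tends to get reinforced.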

I don’t believe this makes sense either way because if the model was producing garbage tokens, it would be obvious and caught during training.

Yes, that's also possible; it depends on minute details of the training set, which we don't know.

Edit: As I understand it, these models are trained in multiple modes: one where they're just trying to predict text (supervised learning), and others where the model is given a prompt and its response is sent to another system to be graded, e.g. for factual accuracy. A model could learn to identify which "training mode" it's in and behave differently. Although I'm sure the ML folks have already thought of that and tried to prevent it.
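
Purely for illustration (everything here is made up, including the grader() function, which is a stand-in for whatever grading system a lab might actually use), here's a toy contrast of those two modes on a "model" that is just a logit vector over 3 tokens:

```python
# Toy sketch of two training modes on a fake 3-token "model".
# Mode 1: supervised next-token prediction against a reference token.
# Mode 2: sample a response, have a stand-in grader score it, scale the
# update by that score (RLHF-style).
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(3)
lr = 0.5

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def grader(token):
    # Hypothetical external grading system: pretend token 2 is the
    # "factually accurate" answer.
    return 1.0 if token == 2 else -1.0

# --- Mode 1: supervised next-token prediction (target token = 1) ---
for step in range(100):
    probs = softmax(logits)
    target = 1
    grad = probs.copy()
    grad[target] -= 1.0                 # gradient of cross-entropy w.r.t. logits
    logits -= lr * grad

print("after supervised mode:", softmax(logits))   # favours token 1

# --- Mode 2: sample a response, send it off to be graded ---
for step in range(2000):
    probs = softmax(logits)
    token = rng.choice(3, p=probs)
    grade = grader(token)               # the reward signal coming back
    grad_log_pi = -probs
    grad_log_pi[token] += 1.0
    logits += lr * grade * grad_log_pi

print("after graded mode:", softmax(logits))       # shifts toward token 2
```

It's a cartoon, but it shows why the same weights can end up behaving differently depending on which objective is active at the time.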

it still does not make it sentient (or even close).

I agree; I noted this in my comment. Just saying, this isn't evidence either way.