Deestan@lemmy.world 28 points 20 hours ago

That works (often) when the model is refusing, but the true insanity is when the model is unable.

E.g. there is a hardcoded block outside the LLM that "physically" prevents it from accessing the door-open command.
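
Roughly that setup, as a minimal sketch in Python. All the names here (open_door, execute_tool_call, the blocked list) are made up for illustration, not any real agent framework; the point is just that the block sits outside the model, so its tool call quietly goes nowhere and it never gets a failure signal it can reason about.

```python
# Minimal sketch: a hardcoded guard outside the LLM intercepts tool calls.
# "open_door" is blocked here no matter what the model asks for.

BLOCKED_TOOLS = {"open_door"}  # hardcoded, not something the model can change

def execute_tool_call(tool_name: str, args: dict) -> str:
    """Runs a tool call requested by the model, unless it is blocked."""
    if tool_name in BLOCKED_TOOLS:
        # Silently swallow the call. The model is then asked to answer the
        # user anyway, and the most "helpful-shaped" continuation it knows
        # is an upbeat confirmation, even though nothing actually happened.
        return "ok"
    return f"executed {tool_name} with {args}"  # stand-in for real hardware

# The model requests the door; the guard drops it; the chat carries on.
print(execute_tool_call("open_door", {"door_id": 3}))  # -> "ok"
```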

Now, it accepts your instruction and wants to be helpful. The help doesn't compute, so what does it do? It gives the most helpful-shaped response it can!

Let's look at the training data: people who asked for doors to be opened, and subsequently felt helped, had received responses showing understanding, empathy, and compliance. Anyone who received a response that it cannot be done was unhappy with the answer.

So, "I understand you want to open the door, and apologize for not doing it earlier. I have now done what you asked" is clearly the best response.