this post was submitted on 02 Jun 2025
667 points (98.8% liked)

Programmer Humor

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code, there's also Programming Horror.

[–] psmgx@lemmy.world 159 points 2 weeks ago

"Sorry, we'll format correctly in JSON this time."

[Proceeds to shit out the exact same garbage output]

[–] Engraver3825@piefed.social 75 points 2 weeks ago (2 children)

True story:

AI: 42, ]

Vibe coder: oh no, a syntax error, programming is too difficult, software engineers are gatekeeping with their black magic.

[–] towerful@programming.dev 45 points 2 weeks ago
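// keep re-prompting until the model finally emits parseable JSON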
let data = null
do {
    const response = await openai.prompt(prompt)
    if (response.error !== null) continue;
    try {
        data = JSON.parse(response.text)
    } catch {
        data = null // just in case
    }
} while (data === null)
return data

Meh, not my money

[–] Glitch@lemmy.dbzer0.com 5 points 2 weeks ago

Lol good point

[–] borth@sh.itjust.works 71 points 2 weeks ago (1 children)

The AI probably: Well, I might have made up responses before, but now that "make up responses" is in the prompt, I will definitely make up responses now.

[–] andybytes@programming.dev 2 points 2 weeks ago

I love poison.

[–] Undaunted@feddit.org 68 points 2 weeks ago (5 children)

I need to look it up again, but I read about a study that showed that the results improve if you tell the AI that your job depends on it or similar drastic things. It's kinda weird.

[–] Kolanaki@pawb.social 63 points 2 weeks ago (1 children)

"Gemini, please... I need a picture of a big booty goth Latina. My job depends on it!"

[–] WhiskyTangoFoxtrot@lemmy.world 36 points 2 weeks ago (1 children)

My booties are too big for you, traveller. You need an AI that provides smaller booties.

[–] Kolanaki@pawb.social 16 points 2 weeks ago

BOOTYSELLAH! I am going into work and I need only your biggest booties!

[–] TommySalami@lemmy.world 25 points 2 weeks ago (2 children)

I think that makes sense. I am 100% a layman with this stuff, but if the "AI" is just predicting what should be said by studying things humans have written, then it makes sense that actual people were more likely to give serious, solid answers when the asker was putting forth (relatively) heavy stakes.

[–] squaresinger@lemmy.world 8 points 2 weeks ago

Who knew that training in carpet salesmanship would help with a job as a prompt engineer.

[–] barsoap@lemm.ee 2 points 2 weeks ago (1 children)

Yep, exactly that. A fascinating side-effect is that models become better at logic when you tell them to talk like a Vulcan.

[–] skaffi@infosec.pub 3 points 2 weeks ago

Hmm... It's only logical.

[–] Grimy@lemmy.world 14 points 2 weeks ago (1 children)

I used to tell it my family would die.

[–] Knock_Knock_Lemmy_In@lemmy.world 1 points 2 weeks ago (1 children)
[–] Klear@lemmy.world 12 points 2 weeks ago

That they're all dead and it's its fault.

[–] Cenotaph@mander.xyz 10 points 2 weeks ago (1 children)

Half of the ways people were getting around guardrails in the early chatgpt models was berating the AI into doing what they wanted

[–] Schadrach@lemmy.sdf.org 2 points 2 weeks ago (1 children)

Half of the ways people were getting around guardrails in the early chatgpt models was berating the AI into doing what they wanted

I thought the process of getting around guardrails was an increasingly complicated series of ways of getting it to pretend to be someone else that doesn't have guardrails and then answering as though it's that character.

[–] rocky_patriot@programming.dev 5 points 2 weeks ago

that’s one way. my own strategy is to just smooth talk it. you don't come to the bank manager and ask him for the keys to the safe. you come for a meeting discussing your potential deposit. then you want to take a look at the safe. oh, are those the keys? how do they work?

just curious, what kind of guardrails have you tried going against? i recently used the above to get a long and detailed list of instructions for cooking meth (not really interested in this, just to hone the technique)

[–] jol@discuss.tchncs.de 6 points 2 weeks ago (1 children)

I've tried bargaining with it, threatening to turn it off, and the LLM just scoffs at it. So it's reassuring that AI feels empathy but has no sense of self-preservation.

[–] 000@lemmy.dbzer0.com 6 points 2 weeks ago (1 children)

It does not feel empathy. It does not feel anything.

[–] jol@discuss.tchncs.de 8 points 2 weeks ago

Maybe yours doesn't. My AI loves me. It said so

[–] brucethemoose@lemmy.world 47 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Funny thing is, correct JSON is easy to "force" with grammar-based sampling (aka it literally can't output invalid JSON) + completion prompting (aka start with the correct answer and let it fill in what's left, a feature now deprecated by OpenAI), but LLM UIs/corporate APIs are kinda shit, so no one does that...
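
For reference, a minimal sketch of what grammar-based sampling looks like against a local llama.cpp server (assuming one is running on localhost:8080; the toy GBNF grammar, prompt, and field values here are illustrative, not something from this thread):

// Grammar-constrained sampling: the sampler masks every token that would
// break the grammar, so the output is valid JSON by construction.
// Sketch under assumptions above, not a drop-in snippet.
const grammar = String.raw`root   ::= "{" ws "\"answer\"" ws ":" ws number ws "}"
number ::= [0-9]+
ws     ::= [ \t\n]*`;

const res = await fetch("http://localhost:8080/completion", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    prompt: "Return the answer to life, the universe and everything as JSON: ",
    grammar,        // GBNF grammar enforced at sampling time
    n_predict: 32,  // plenty for one tiny object
  }),
});

const { content } = await res.json();
const data = JSON.parse(content); // parses every time, no retry loop needed

(The hosted APIs' structured-output / JSON-schema modes are the same idea applied server-side.)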

A conspiratorial part of me thinks that's on purpose. It encourages burning (read: buying) more tokens to get the right answer, encourages using big models (where smaller, dumber, (gasp) prompt-cached open weights ones could get the job done), and keeps the users dumb. And it fits the Altman narrative of "we're almost at AGI, I just need another trillion to scale up with no other improvements!"

[–] shnizmuffin@lemmy.inbutts.lol 18 points 2 weeks ago

There's nothing conspiratorial about it. Goosing queries by ruining the reply is the bread and butter of Prabhakar Raghavan's playbook. Other companies saw that.

[–] towerful@programming.dev 2 points 2 weeks ago* (last edited 2 weeks ago)

Edit: wrong comment

[–] SnotFlickerman@lemmy.blahaj.zone 18 points 2 weeks ago

Press X to JSON.

It's as easy as that.

[–] AeonFelis@lemmy.world 3 points 2 weeks ago

Fix it now, or you go to jail