this post was submitted on 27 May 2025
1364 points (99.6% liked)

Programmer Humor

23468 readers
1807 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.


founded 2 years ago
 
top 50 comments
[–] coherent_domain@infosec.pub 126 points 13 hours ago* (last edited 10 hours ago) (10 children)

The image is taken from Zhihu, a Chinese Quora-like site.

The prompt asks for the design of a certain app, and the response seems to suggest some pages, so the image doesn't seem to reflect the text.

But this in general aligns with my experience coding with LLMs. I was trying to upgrade my ESLint from 8 to 9, asked ChatGPT to convert my ESLint config file, and it proceeded to spit out complete garbage.

I thought this would be a good task for an LLM, because ESLint configs are very common and well documented and the transformation is very mechanical, but it just couldn't do it. So I went and read the docs and finished the migration in a couple of hours...
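For what it's worth, the 8-to-9 jump is mostly the move from the nested `.eslintrc` format to the flat `eslint.config.js` format. A minimal sketch of the shape of the change (the rules and options here are illustrative, not anyone's actual config):

```javascript
// Before, .eslintrc.json:
// { "env": { "node": true },
//   "extends": ["eslint:recommended"],
//   "rules": { "semi": ["error", "always"] } }

// After, eslint.config.js (flat config):
const js = require("@eslint/js");
const globals = require("globals");

module.exports = [
  js.configs.recommended, // replaces "extends": ["eslint:recommended"]
  {
    languageOptions: {
      globals: globals.node, // replaces "env": { "node": true }
    },
    rules: {
      semi: ["error", "always"],
    },
  },
];
```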

[–] 30p87@feddit.org 58 points 12 hours ago (8 children)

I asked ChatGPT for help with bare-metal 32-bit ARM (for the Pi Zero W) C/ASM, emulated in QEMU for testing, and after the third iteration of "use printf for output" -> "there's no printf with bare metal as the target" -> "use solution X" -> "doesn't work" -> "use printf for output" ... I had enough.

[–] purplemonkeymad@programming.dev 41 points 9 hours ago

Sounds like it's perfectly replicated the help forums it was trained on.

[–] bjoern_tantau@swg-empire.de 14 points 10 hours ago (1 children)

I used ChatGPT to help me make a package with SUSE's Open Build Service. It was actually quite good. I was pulling my hair out for a while until I noticed that the project I wanted to build had changed URLs and I was using an outdated one.

In the end I just had to get one last detail right. And then my ChatGPT 4 allowance dried up and they dropped me back down to 3 and it couldn't do anything. So I had to use my own brain, ugh.

[–] noctivius@lemm.ee 5 points 6 hours ago

ChatGPT is the worst of the big chatbots at writing code. In my experience: DeepSeek > Perplexity > Gemini > Claude.

[–] MudMan@fedia.io 18 points 11 hours ago (4 children)

It's pretty random in terms of what is or isn't doable.

For me it's a big performance booster because I genuinely suck at coding and don't do too much complex stuff. As a "clean up my syntax" and a "what am I missing here" tool it helps, or at least helps in figuring out what I'm doing wrong so I can look in the right place for the correct answer on something that seemed inscrutable at a glance. I certainly can do some things with a local LLM I couldn't do without one (or at least without getting berated by some online dick who doesn't think he has time to give you an answer but sure has time to set you on a path towards self-discovery).

How much of a benefit it is for a professional, I couldn't tell. I mean, definitely not a replacement. Maybe helping read something old or poorly commented fast? Redundant tasks on very commonplace mainstream languages?

I don't think it's useless, but if you ask it to do something by itself you can't trust that it'll work without significant additional effort.

[–] ocean@lemmy.selfhostcat.com 13 points 10 hours ago (1 children)

A lot of words to just say vibe coding

[–] wise_pancake@lemmy.ca 6 points 8 hours ago (2 children)

It catches things like spelling errors in variable names, does good autocomplete, and it’s useful to have it look through a file before committing it and creating a pull request.

It’s very useful for throwaway work like writing scripts and automations.

It’s useful, but not the 10x multiplier all the CEOs claim it is.

[–] Cethin@lemmy.zip 7 points 8 hours ago

I use it sometimes, usually just to create boilerplate. For actual functionality it's hit or miss, and often it ends up taking more time to fix than it would have taken to write myself.

[–] jubilationtcornpone@sh.itjust.works 65 points 12 hours ago (1 children)
[–] Saleh@feddit.org 39 points 11 hours ago (2 children)

My uncle. Very smart very neuronal. He knows the entire Internet, can you imagine? the entire internet. Like the mails of Crooked Hillary Clinton, that crook. You know what stands in that Mails? my uncle knows. He makes the best code. The most beautiful code. No one has ever seen code like it, but for him, he's a genius, like i am, i have inherited all his genius genes. It is very easy. He makes the best code. Sometimes he calls me and asks me: you are even smarter than i am. Can you look at my code?

[–] cbazero@programming.dev 19 points 10 hours ago

All people say it. Tremendous code. All the experts said "No, generating formatted random text is not working code" but we did it.

[–] AtariDump@lemmy.world 5 points 7 hours ago

Thanks, I hate it.

[–] pennomi@lemmy.world 56 points 8 hours ago (3 children)

To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.

LLMs are good for simple bits of logic under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force it to do things beyond that are just being silly.

[–] Boomkop3@reddthat.com 30 points 7 hours ago (4 children)

You managed to get an AI to do 200 lines of code and it actually compiled?

[–] pennomi@lemmy.world 15 points 7 hours ago* (last edited 7 hours ago) (3 children)

Uh yeah, like all the time. Anyone who says otherwise really hasn’t tried recently. I know it’s a meme that AI can’t code (and in many cases that’s still true, e.g. I don’t have the AI do anything with OpenCV or complex math), but it’s very routine these days for common use cases like web development.

[–] Maalus@lemmy.world 9 points 6 hours ago (2 children)

I recently tried it for scripting simple things in Python for a game. Y'know, change a character's color if they're targeted. It output a shitton of word salad and code about my specific use case in the game's specific scripting jargon.

It was all based on "Misc.changeHue(player)", a function that doesn't exist and never has, because the game is unable to color other mobs/players like that from scripting.

Everything I've tried with AI ends up the same way: broken code within 10 lines of a script, hallucinations and bullshit spewed as the absolute truth. Anything out of the ordinary is met with "yes, this can totally be done, this is how", and "how" doesn't work, and after sifting through forums / asking devs you find out "sadly that's impossible" or "we don't actually use CPython, so libraries don't work like that", etc.

[–] Boomkop3@reddthat.com 6 points 5 hours ago (1 children)

You must be a big fan of boilerplate

[–] GreenMartian@lemmy.dbzer0.com 6 points 6 hours ago (2 children)

They have been pretty good with popular technologies like Python and web development.

I tried to do Kotlin for Android, and they kept tripping over themselves; it's hilarious and frustrating at the same time.

[–] wischi@programming.dev 9 points 6 hours ago (9 children)

Practically all LLMs are bad at any kind of logic. Try to play ASCII tic-tac-toe against one. Every GPT model has lost against my four-year-old niece, and I wouldn't trust her to write production code 🤣

Once a single model (it doesn't have to be an LLM) can beat Stockfish at chess, AlphaGo at Go, and my niece at tic-tac-toe, and can one-shot (on the surface; scratch pad allowed) a Rust program that compiles and works, then we can start thinking about replacing engineers.

Just take a look at the dotnet runtime repository, where Microsoft employees are currently trying to work with Copilot: it opens PRs with mistakes like forgetting to add files to projects, writes code that doesn't compile, fixes symptoms instead of underlying problems, etc. (go see for yourself).

I'm not saying that AI (especially AGI) can't replace humans. It definitely can and will; it's just a matter of time. But state-of-the-art LLMs are basically just extremely good "search engines" or interactive versions of Stack Overflow, not good enough for real "thinking tasks".
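For scale, the tic-tac-toe bar really is tiny: perfect play is an exhaustive minimax search that fits in a few dozen lines. A quick sketch of the logic involved (plain Python, nothing game-specific):

```python
# Perfect tic-tac-toe via minimax. Board: list of 9 cells, 'X', 'O', or ' '.

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for i, j, k in WINS:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    """Return (score, move) for `player` to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i in range(9) if board[i] == ' ']
    if not moves:
        return 0, None  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best = -2, None
    for m in moves:
        board[m] = player
        score = -minimax(board, opponent)[0]  # opponent's loss is our gain
        board[m] = ' '
        if score > best_score:
            best_score, best = score, m
    return best_score, best

def best_move(board, player):
    return minimax(board, player)[1]
```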

[–] Opisek@lemmy.world 6 points 5 hours ago (1 children)

Perhaps 5 LOC. Maybe 3. And even then I'll analyze every single character it wrote. And then I will, in fact, find bugs. Most often it hallucinates functions that would be fantastic to use, if only they existed.

[–] haui_lemmy@lemmy.giftedmc.com 55 points 8 hours ago (1 children)

Welp. It's actually very in line with the late-stage capitalist system: all polish, no innovation.

[–] andybytes@programming.dev 12 points 7 hours ago

Awwwww snap look at this limp dick future we got going on here.

[–] ItsMeForRealNow@lemmy.world 41 points 12 hours ago (3 children)

This has been my experience as well. It keeps emphasizing "beauty" and keeps missing "correctness".

[–] match@pawb.social 35 points 12 hours ago (1 children)

LLMs are systems that output human-readable natural-language answers, not true answers.

[–] zurohki@aussie.zone 11 points 9 hours ago (1 children)

It generates an answer that looks correct. Actual correctness is accidental. That's how you wind up with documents with references that don't exist, it just knows what references look like.

[–] spankmonkey@lemmy.world 9 points 9 hours ago* (last edited 9 hours ago) (15 children)

It doesn't 'know' anything. It is glorified text autocomplete.

The current AI is intelligent the way hoverboards hover.

[–] endeavor@sopuli.xyz 7 points 8 hours ago* (last edited 8 hours ago)

LLMs are the smartest thing ever on subjects you have no fucking clue about. On subjects you have at least a year of experience with, they suddenly become the dumbest shit you've ever seen.

[–] ZombiFrancis@sh.itjust.works 22 points 5 hours ago

Ctrl+A + Del.

So clean.

[–] LanguageIsCool@lemmy.world 19 points 5 hours ago (2 children)

I’ve heard that a Claude 4 model generating code for an infinite amount of time will eventually simulate a monkey typing out Shakespeare

[–] dumnezero@piefed.social 18 points 11 hours ago (2 children)

Try to get one of these LLMs to update a package.json.

[–] TrickDacy@lemmy.world 5 points 9 hours ago

Define "update"

[–] TheReturnOfPEB@reddthat.com 16 points 6 hours ago* (last edited 6 hours ago) (1 children)

I'm pretty sure that's how we got CORBA.

Now just make it construct UML models, and then abandon all that and move on to version 2.

[–] GreenMartian@lemmy.dbzer0.com 9 points 6 hours ago

Hello, fellow old person 🤝

[–] markstos@lemmy.world 15 points 8 hours ago (1 children)

This weekend I successfully used Claude to add three features to a Rust utility that I had wanted for a couple of years. I had opened issue requests, but no one else volunteered. I had tried learning Rust, Wayland, and GTK to do it myself, but the docs at the time weren’t great and the learning curve was steep. Claude figured it all out pretty quickly.

[–] Tamo240@programming.dev 8 points 6 hours ago (4 children)

Did the generated code get merged? I'd be curious to see the PRs

[–] 1984@lemmy.today 14 points 4 hours ago* (last edited 4 hours ago) (1 children)

It's like having a junior developer with a world of confidence just change shit and spend hours breaking things and trying to fix them, while we pay Big Tech for the privilege of watching the chaos.

I asked ChatGPT today to give me a simple Squid proxy config that blocks everything except HTTPS. It confidently gave me one, but of course it didn't work: it let HTTP through, and despite many attempts to get a working config that blocked it, it just failed.

So yeah, in the end I have to learn Squid syntax anyway, which I guess is fine, but I spent hours trying to get a working config because we pay for ChatGPT to do exactly that....
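(For the record, the `http_access` rule ordering is usually what trips people up here: rules are evaluated top to bottom, first match wins. A hedged sketch of a squid.conf that allows only CONNECT tunnels to port 443, i.e. HTTPS, and denies plain HTTP; the ACL names follow the stock squid.conf, and `localnet` needs adjusting to your own network:)

```
# Allow only HTTPS CONNECT tunnels; deny everything else.
acl localnet src 192.168.0.0/16     # adjust to your client network
acl SSL_ports port 443
acl CONNECT method CONNECT

http_access deny CONNECT !SSL_ports   # no tunnels to non-443 ports
http_access deny !CONNECT             # no plain-HTTP requests at all
http_access allow localnet CONNECT SSL_ports
http_access deny all
```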

[–] fibojoly@sh.itjust.works 11 points 4 hours ago

Man, I can't wait to try out generative AI to generate config files for mission-critical stuff! Imagine paying all of us DevOps wankers when my idiot boss can just ask ChatGPT to sort out all this legacy mess we're juggling on the daily!

[–] DrunkAnRoot@sh.itjust.works 11 points 5 hours ago

Can't wait to see "we use AI agents to generate well-structured non-functioning code" on a website with everything off-center and broken embeds.

[–] TheGiantKorean@lemmy.world 10 points 7 hours ago

Did it try to blackmail him if he didn't use the new code?


[–] Irelephant@lemm.ee 7 points 24 minutes ago

AI code is specifically annoying because it looks like it would work, but it's just plausible bullshit.
