this post was submitted on 27 May 2025
1852 points (99.5% liked)

Programmer Humor


Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code, there's also Programming Horror.

[–] coherent_domain@infosec.pub 140 points 1 day ago* (last edited 1 day ago) (10 children)

The image is taken from Zhihu, a Chinese Quora-like site.

The prompt asks for the design of a certain app, and the response seems to describe some suggested pages, so the code shown doesn't seem to match the text.

But this generally aligns with my experience coding with LLMs. I was trying to upgrade ESLint from 8 to 9, asked ChatGPT to convert my ESLint config file, and it proceeded to spit out complete garbage.

I thought this would be a good task for an LLM because ESLint config is very common and well documented, and the transformation is very mechanical, but it just couldn't do it. So I proceeded to read the docs and finished the migration in a couple of hours...
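
For the curious, the mechanical part is mapping the old .eslintrc keys onto the new flat-config array. A sketch of what I mean, based on the v9 migration guide (the `@eslint/js` and `globals` helper packages are the ones it suggests; your own plugins need their own entries):

```js
// eslint.config.mjs -- ESLint 9 "flat config"; a sketch of the mechanical
// mapping from a v8 .eslintrc.json. Assumes the @eslint/js and globals
// helper packages from the official migration guide.
import js from "@eslint/js";
import globals from "globals";

export default [
  js.configs.recommended,                        // was: "extends": "eslint:recommended"
  {
    languageOptions: { globals: globals.node },  // was: "env": { "node": true }
    rules: { semi: "error" },                    // "rules" mostly carries over as-is
  },
];
```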

[–] 30p87@feddit.org 68 points 1 day ago (8 children)

I asked ChatGPT for help with bare-metal 32-bit ARM (for the Pi Zero W) C/ASM, emulated in QEMU for testing, and after the third iteration of "use printf for output" -> "there's no printf with bare metal as the target" -> "use solution X" -> "doesn't work" -> "use printf for output"... I had enough.

Sounds like it's perfectly replicated the help forums it was trained on.

[–] bjoern_tantau@swg-empire.de 16 points 1 day ago (1 children)

I used ChatGPT to help me make a package with SUSE's Open Build Service. It was actually quite good. I was pulling my hair out for a while until I noticed that the project I wanted to build had changed URLs and I was using an outdated one.

In the end I just had to get one last detail right. And then my ChatGPT 4 allowance dried up and they dropped me back down to 3 and it couldn't do anything. So I had to use my own brain, ugh.

[–] MudMan@fedia.io 19 points 1 day ago (6 children)

It's pretty random in terms of what is or isn't doable.

For me it's a big performance booster because I genuinely suck at coding and don't do too much complex stuff. As a "clean up my syntax" and a "what am I missing here" tool it helps, or at least helps in figuring out what I'm doing wrong so I can look in the right place for the correct answer on something that seemed inscrutable at a glance. I certainly can do some things with a local LLM I couldn't do without one (or at least without getting berated by some online dick who doesn't think he has time to give you an answer but sure has time to set you on a path towards self-discovery).

How much of a benefit it is for a professional, I couldn't tell. I mean, definitely not a replacement. Maybe helping read something old or poorly commented quickly? Redundant tasks in very commonplace mainstream languages?

I don't think it's useless, but if you ask it to do something by itself you can't trust that it'll work without significant additional effort.

[–] ocean@lemmy.selfhostcat.com 13 points 1 day ago (1 children)

A lot of words to just say vibe coding

[–] Cethin@lemmy.zip 9 points 1 day ago

I use it sometimes, usually just to create boilerplate. For actual functionality it's hit or miss, and it often ends up taking more time to fix than it would to write myself.

[–] Irelephant@lemm.ee 90 points 1 day ago (2 children)

AI code is specifically annoying because it looks like it would work, but it's just plausible bullshit.

[–] kkj@lemmy.dbzer0.com 29 points 17 hours ago (3 children)

And that's what happens when you spend a trillion dollars on an autocomplete: amazing at making things look like whatever it's imitating, but with zero understanding of why the original looked that way.

[–] captain_aggravated@sh.itjust.works 25 points 23 hours ago (2 children)

Well I've got the name for my autobiography now.

[–] jubilationtcornpone@sh.itjust.works 71 points 1 day ago (1 children)
[–] Saleh@feddit.org 48 points 1 day ago (2 children)

My uncle. Very smart very neuronal. He knows the entire Internet, can you imagine? the entire internet. Like the mails of Crooked Hillary Clinton, that crook. You know what stands in that Mails? my uncle knows. He makes the best code. The most beautiful code. No one has ever seen code like it, but for him, he's a genius, like i am, i have inherited all his genius genes. It is very easy. He makes the best code. Sometimes he calls me and asks me: you are even smarter than i am. Can you look at my code?

[–] cbazero@programming.dev 22 points 1 day ago

All people say it. Tremendous code. All the experts said "No, generating formatted random text is not working code" but we did it.

[–] haui_lemmy@lemmy.giftedmc.com 67 points 1 day ago (1 children)

Welp. It's actually very in line with the late-stage capitalist system. All polish, no innovation.

[–] andybytes@programming.dev 16 points 1 day ago

Awwwww snap look at this limp dick future we got going on here.

[–] pennomi@lemmy.world 67 points 1 day ago (7 children)

To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.

LLMs are good for simple bits of logic under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force it to do things beyond that are just being silly.

[–] Boomkop3@reddthat.com 36 points 1 day ago (5 children)

You managed to get an AI to do 200 lines of code and it actually compiled?

[–] pennomi@lemmy.world 27 points 1 day ago* (last edited 1 day ago) (3 children)

Uh yeah, like all the time. Anyone who says otherwise really hasn't tried recently. I know it's a meme that AI can't code (and in many cases that's still true, e.g. I don't have the AI do anything with OpenCV or complex math), but it's very routine these days for common use cases like web development.

[–] Maalus@lemmy.world 14 points 1 day ago (3 children)

I recently tried it for scripting simple things in Python for a game. Y'know, change a character's color if they're targeted. It output a shitton of word salad and code about my specific use case in the specific scripting jargon for the game.

It was all based on "Misc.changeHue(player)", a function that doesn't exist and never has, because the game can't color other mobs/players like that from scripts.

Everything I tried with AI ends up the same way: broken code within 10 lines of a script, hallucinations and bullshit spewed as absolute truth. Anything out of the ordinary is met with "yes, this can totally be done, this is how", except the "how" doesn't work, and after sifting through forums / asking the devs you find out "sadly, that's impossible" or "we don't actually use CPython, so libraries don't work like that", etc.

[–] Boomkop3@reddthat.com 9 points 1 day ago (1 children)

You must be a big fan of boilerplate

[–] GreenMartian@lemmy.dbzer0.com 9 points 1 day ago (3 children)

They have been pretty good with popular technologies like Python & web development.

I tried to do Kotlin for Android, and they kept tripping over themselves; it's hilarious and frustrating at the same time.

[–] wischi@programming.dev 12 points 1 day ago* (last edited 17 hours ago) (11 children)

Practically no LLM is good at any logic. Try playing ASCII tic-tac-toe against one: every GPT model has lost against my four-year-old niece, and I wouldn't trust her to write production code 🤣

Once a single model (it doesn't have to be an LLM) can beat Stockfish at chess, AlphaGo at Go, and my niece at tic-tac-toe, and can one-shot (scratch-pad allowed) a Rust program that compiles and works, then we can start thinking about replacing engineers.
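
If you want to try the tic-tac-toe test yourself, a minimal harness is enough (a hypothetical sketch in plain Python): keep the real board state locally, paste the rendered board into the chat, and apply whatever move the model names. An illegal or repeated move counts as a loss.

```python
# Minimal ASCII tic-tac-toe harness (sketch). The human keeps the
# authoritative state here; the LLM only ever sees the board as text.
board = [" "] * 9

def render():
    rows = [" | ".join(board[i:i + 3]) for i in (0, 3, 6)]
    return "\n---------\n".join(rows)

def winner():
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

board[4] = "X"      # your move
print(render())     # paste this into the chat, ask for O's move as an index 0-8
```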

Just take a look at the dotnet runtime source code, where Microsoft employees are currently trying to work with Copilot, which opens PRs with errors like forgetting to add files to projects, writing code that doesn't compile, fixing symptoms instead of underlying problems, etc. (just go look for yourself).

I'm not saying that AI (especially AGI) can't replace humans. It definitely can and will, it's just a matter of time. But state-of-the-art LLMs are basically just extremely good "search engines", or interactive versions of Stack Overflow, not good enough for real "thinking tasks".

[–] LanguageIsCool@lemmy.world 42 points 1 day ago (1 children)

I’ve heard that a Claude 4 model generating code for an infinite amount of time will eventually simulate a monkey typing out Shakespeare

[–] MonkeMischief@lemmy.today 12 points 1 day ago (1 children)

It will have consumed the gigawatt-hour capacity of a few suns and all the moisture in our solar system, but by Jeeves, we'll get there!

...but it won't be that impressive once we remember concepts like "monkey, typing, Shakespeare" were already embedded in the training data.

[–] ItsMeForRealNow@lemmy.world 42 points 1 day ago (4 children)

This has been my experience as well. It keeps emphasizing "beauty" and keeps missing "correctness".

[–] match@pawb.social 39 points 1 day ago (1 children)

LLMs are systems that output human-readable natural-language answers, not true answers.

[–] zurohki@aussie.zone 11 points 1 day ago (1 children)

It generates an answer that looks correct. Actual correctness is accidental. That's how you wind up with documents whose references don't exist: it just knows what references look like.

[–] spankmonkey@lemmy.world 10 points 1 day ago* (last edited 1 day ago) (19 children)

It doesn't 'know' anything. It is glorified text autocomplete.

The current AI is intelligent the way hoverboards hover.

[–] ZombiFrancis@sh.itjust.works 33 points 1 day ago

Ctrl+A + Del.

So clean.

[–] 1984@lemmy.today 31 points 1 day ago* (last edited 1 day ago) (3 children)

It's like having a junior developer with a world of confidence just changing shit and spending hours breaking things and trying to fix them, while we pay Big Tech for the privilege of watching the chaos.

I asked ChatGPT today to give me a simple Squid proxy config that blocks everything except HTTPS. It confidently gave me one, but of course it didn't work: it let HTTP through, and despite many attempts at a config that actually blocked it, it just kept failing.

So yeah, in the end I had to learn Squid syntax anyway, which I guess is fine, but I spent hours trying to get a working config because we pay for ChatGPT to do exactly that...
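
For reference, the shape of config I was after is roughly this (an untested sketch; it relies on the fact that HTTPS goes through a proxy as CONNECT tunnels, so you allow only CONNECT to port 443 and deny plain requests):

```
# squid.conf sketch: permit only CONNECT tunnels to port 443 (HTTPS).
# Assumes a recent Squid where the localhost ACL is predefined;
# swap in your own source ACL for a real network.
acl SSL_ports port 443
acl CONNECT method CONNECT

http_access deny !CONNECT            # drops every plain-HTTP request
http_access deny CONNECT !SSL_ports  # tunnels allowed only to 443
http_access allow localhost
http_access deny all
```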

[–] merc@sh.itjust.works 21 points 1 day ago (2 children)

> It confidently gave me one

IMO, that's one of the biggest "sins" of the current LLMs: they're trained to generate words that make them sound confident.

[–] KairuByte@lemmy.dbzer0.com 9 points 1 day ago (4 children)

They aren’t explicitly trained to sound confident, that’s just how users tend to talk. You don’t often see “I don’t know but you can give this a shot” on Stack Overflow, for instance. Even the incorrect answers coming from users are presented confidently.

Funnily enough, a lack of confidence in a response is something I don't think LLMs are currently capable of, since it would require contextual understanding of both the question and the answer being given.

[–] fibojoly@sh.itjust.works 16 points 1 day ago

Man, I can't wait to try out generative AI to generate config files for mission-critical stuff! Imagine paying all of us devops wankers when my idiot boss can just ask ChatGPT to sort out all this legacy mess we're juggling with on the daily!

[–] markstos@lemmy.world 18 points 1 day ago (1 children)

This weekend I successfully used Claude to add three features to a Rust utility that I had wanted for a couple of years. I had opened issues requesting them, but no one else volunteered. I had tried learning Rust, Wayland and GTK to do it myself, but the docs at the time weren't great and the learning curve was steep. But Claude figured it all out pretty quickly.

[–] Tamo240@programming.dev 9 points 1 day ago (5 children)

Did the generated code get merged? I'd be curious to see the PRs

[–] dumnezero@piefed.social 18 points 1 day ago (2 children)

Try to get one of these LLMs to update a package.json.

[–] TheReturnOfPEB@reddthat.com 17 points 1 day ago* (last edited 1 day ago) (1 children)

I'm pretty sure that's how we got CORBA.

Now just make it construct UML models, then abandon the whole thing and move on to version 2.

[–] GreenMartian@lemmy.dbzer0.com 10 points 1 day ago

Hello, fellow old person 🤝

[–] DrunkAnRoot@sh.itjust.works 13 points 1 day ago

Can't wait to see "we use AI agents to generate well-structured non-functioning code" with everything off-center and non-working embeds on the website.

[–] TheGiantKorean@lemmy.world 12 points 1 day ago

Did it try to blackmail him if he didn't use the new code?

[–] sturger@sh.itjust.works 10 points 1 day ago (10 children)

Honest question: I haven't used AI much. Are there any AIs or IDEs that can reliably rename a variable across all instances in a medium-sized Python project? I don't mean the easy stuff an editor can do (e.g. rename QQQ everywhere and get lucky that there are no conflicts). I mean being able to differentiate between local and/or library variables so it changes only the correct ones.

[–] barsoap@lemm.ee 23 points 1 day ago

Not reliably, no. Python is too dynamic to do that kind of thing without solving general program equivalence, which is undecidable.

Use a static language, problem solved.
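
A contrived sketch of the dynamism I mean (the names are made up; any code doing this silently defeats a rename):

```python
# Attribute names can be computed at runtime, so no static tool can
# prove it has found every use of `retries`.
class Config:
    retries = 3

def bump(obj, field):                # the name arrives as data, not as code
    setattr(obj, field, getattr(obj, field) + 1)

key = "ret" + "ries"                 # built at runtime
bump(Config, key)                    # renaming `retries` breaks this silently
print(Config.retries)                # -> 4
```

IDEs get the lexical cases right; it's the string-built access they can't see.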

[–] trolololol@lemmy.world 18 points 1 day ago

I'm going to laugh in Java, where this has always been possible and reliable. Not AI-reliable, but expert-reliable. Because of static types.

[–] bitwolf@sh.itjust.works 13 points 1 day ago (1 children)

For the most part "Rename symbol" in VSCode will work well. But it's limited by scope.

[–] lapping6596@lemmy.world 11 points 1 day ago (2 children)

I use pycharm for this and in general it does a great job. At work we've got some massive repos and it'll handle it fine.

The "find" tab shows where it'll make changes and you can click "don't change anything in this directory"

[–] derpgon@programming.dev 10 points 23 hours ago

IntelliJ IDEA will rename it if it knows it's the same variable. That usually works, as long as the codebase isn't fucked up with eval or obscure constructs like saving a variable name into a string and invoking it dynamically.
