this post was submitted on 27 May 2025
1854 points (99.5% liked)

Programmer Humor

[–] coherent_domain@infosec.pub 140 points 1 day ago* (last edited 1 day ago) (4 children)

The image is taken from Zhihu, a Chinese Quora-like site.

The prompt asks for a design for a certain app, and the response seems to suggest some pages, so the image doesn't appear to match the text.

But this generally aligns with my experience coding with LLMs. I was trying to upgrade my eslint from 8 to 9, asked ChatGPT to convert my eslint config file, and it proceeded to spit out complete garbage.

I thought this would be a good task for an LLM, because eslint config is very common and well documented, and the transformation is very mechanical, but it just cannot do it. So I proceeded to read the docs and finished the migration in a couple of hours...
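For anyone facing the same migration: eslint 9 drops `.eslintrc.*` in favor of a flat `eslint.config.js`. A minimal sketch of the new shape (the specific globals and rules here are illustrative examples, not OP's actual config):

```javascript
// eslint.config.js — flat config format (eslint 9).
// The old "extends"/"env" keys are gone; you compose an array of
// config objects instead.
import js from "@eslint/js";
import globals from "globals";

export default [
  js.configs.recommended,           // replaces "extends": "eslint:recommended"
  {
    files: ["**/*.js"],
    languageOptions: {
      ecmaVersion: "latest",
      globals: { ...globals.node }, // replaces "env": { "node": true }
    },
    rules: {
      "no-unused-vars": "warn",     // illustrative rule override
    },
  },
];
```

The main mental shift is that `extends` and `env` become spread config objects and `languageOptions.globals`, respectively.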

[–] 30p87@feddit.org 68 points 1 day ago (6 children)

I asked ChatGPT for help with bare-metal 32-bit ARM (for the Pi Zero W) C/ASM, emulated in QEMU for testing, and after the third iteration of "use printf for output" -> "there's no printf with bare metal as the target" -> "use solution X" -> "doesn't work" -> "use printf for output" ... I had enough.

Sounds like it's perfectly replicated the help forums it was trained on.

[–] bjoern_tantau@swg-empire.de 16 points 1 day ago (1 children)

I used ChatGPT to help me make a package with SUSE's Open Build Service. It was actually quite good. I was pulling my hair out for a while until I noticed that the project I wanted to build had changed URLs and I was using an outdated one.

In the end I just had to get one last detail right. And then my ChatGPT 4 allowance dried up and they dropped me back down to 3 and it couldn't do anything. So I had to use my own brain, ugh.

[–] noctivius@lemm.ee 6 points 1 day ago

ChatGPT is the worst of the big chatbots at writing code. In my experience: Deepseek > Perplexity > Gemini > Claude.

[–] scrubbles@poptalk.scrubbles.tech 5 points 1 day ago (1 children)

Yeah, you can tell it just ratholes on trying to force one concept to work rather than realizing it's not the correct concept to begin with.

[–] formulaBonk@lemm.ee 5 points 1 day ago

That’s exactly what most junior devs do when stuck. They rehash the same solution over and over, and it almost seems like LLMs trained on code bases infer that behavior from commit histories etc.

It almost feels like one of those “we taught him these tasks incorrectly as a joke” scenarios.

[–] LucidLyes@lemmy.world 3 points 1 day ago

That's what tends to happen

[–] qqq@lemmy.world 2 points 1 day ago* (last edited 1 day ago) (1 children)

QEMU makes it pretty painless to hook up gdb, just FYI; you should look into that. I think you can also have it provide a memory-mapped UART for I/O, which you can use with newlib to get printf debugging.
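For reference, the gdb hookup is roughly this (the machine name, ELF filename, and entry symbol are placeholders for whatever the actual project uses):

```shell
# Build with debug symbols, then let QEMU wait for a debugger:
# -S halts the CPU at startup, -s opens a gdb stub on localhost:1234.
qemu-system-arm -M raspi0 -nographic -kernel kernel.elf -S -s &

# In another terminal, attach gdb and debug as usual:
gdb-multiarch kernel.elf \
    -ex "target remote localhost:1234" \
    -ex "break kmain" \
    -ex "continue"
```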

[–] 30p87@feddit.org 1 points 1 day ago

The latter is what I tried, and also kinda wanted ChatGPT to do, but it refused.

[–] wise_pancake@lemmy.ca 2 points 1 day ago

Did it at least try puts?

[–] MudMan@fedia.io 19 points 1 day ago (3 children)

It's pretty random in terms of what is or isn't doable.

For me it's a big performance booster because I genuinely suck at coding and don't do too much complex stuff. As a "clean up my syntax" and a "what am I missing here" tool it helps, or at least helps in figuring out what I'm doing wrong so I can look in the right place for the correct answer on something that seemed inscrutable at a glance. I certainly can do some things with a local LLM I couldn't do without one (or at least without getting berated by some online dick who doesn't think he has time to give you an answer but sure has time to set you on a path towards self-discovery).

How much of a benefit it is for a professional I couldn't tell. I mean, definitely not a replacement. Maybe helping read something old or poorly commented quickly? Redundant tasks in very commonplace mainstream languages?

I don't think it's useless, but if you ask it to do something by itself you can't trust that it'll work without significant additional effort.

[–] ocean@lemmy.selfhostcat.com 13 points 1 day ago (1 children)

A lot of words to just say vibe coding

[–] MudMan@fedia.io -1 points 1 day ago

Sorta kinda. It depends on where you put that line. Because online drama is fun, once we got to the "vibe coding" name we moved to the assumption that all AI assistance is vibe coding. But in practice there's the percentage of what you do that you know how to do, the percentage you vibe code because you can't figure it out off the top of your head, and the percentage you just can't do without researching, because the LLM can't do it effectively or what it produces is too crappy to use as part of something else.

I think if the assumption is that you should just "git gud" and not take advantage of that grey zone, where you can sooort of figure it out by asking an AI instead of going down a Google rabbit hole, then the performative AI hate is setting itself up for defeat, because there's a whole range of skill levels where that is actually helpful for some stuff.

If you want to deny that there's a difference between that and just making code soup by asking a language model to build you entire pieces of software... well, then you're going to be obviously wrong and a bunch of AI bros are going to point at the obvious way you're wrong and use that to pretend you're wrong about the whole thing.

This is basic online disinformation playbook stuff and I may suck at coding, but I know a thing or two about that. People with progressive ideas should get good at beating those one of these days, because that's a bad outcome.

[–] wise_pancake@lemmy.ca 8 points 1 day ago (1 children)

It catches things like spelling errors in variable names, does good autocomplete, and it’s useful to have it look through a file before committing it and creating a pull request.

It’s very useful for throwaway work like writing scripts and automations.

It’s useful, but not the 10x multiplier all the CEOs claim it is.

[–] MudMan@fedia.io 2 points 1 day ago (1 children)

Fully agreed. Everybody is betting it'll get there eventually and trying to jockey for position being ahead of the pack, but at the moment there isn't any guarantee that it'll get to where the corpos are assuming it already is.

Which is not the same as not having better autocomplete/spellcheck/"hey, how do I format this specific thing" tools.

[–] wise_pancake@lemmy.ca 4 points 1 day ago

Yeah, it’s still super useful.

I think the execs want to see dev salaries go to zero, but these tools make more sense as an accelerator, like giving an accountant Excel.

I get a bit more done faster, that’s a solid value proposition.

[–] vivendi@programming.dev 5 points 1 day ago (1 children)

It's not much use with a professional codebase as of now, and I say this as a big proponent of learning FOSS AI to stay ahead of the corpocunts

[–] MudMan@fedia.io 5 points 1 day ago

Yeah, the AI corpos are putting a lot of effort into parsing big contexts right now. I suspect because they think (probably correctly) that coding is one of the few areas where they could get paid if their AIs didn't have the memory of a goldfish.

And absolutely agreed that making sure the FOSS alternatives keep pace is going to be important. I'm less concerned about hating the entire concept than I am about making sure they don't figure out a way to keep every marginally useful application behind a corporate ecosystem walled garden exclusively.

We've been relatively lucky in that the combination of PR brownie points and general crappiness of the commercial products has kept an incentive to provide a degree of access, but I have zero question that the moment one of these things actually makes money they'll enshittify the freely available alternatives they control and clamp down as much as possible.

[–] Cethin@lemmy.zip 9 points 1 day ago

I use it sometimes, usually just to create boilerplate. For actual functionality it's hit or miss, and it often ends up taking more time to fix than to write it myself.

[–] TrickDacy@lemmy.world 3 points 1 day ago (2 children)

I wouldn't say it's accurate that this was a "mechanical" upgrade, having done it a few times. They even have a migration tool which you'd think could fully do the upgrade, but out of the 4-5 projects I've upgraded, the migration tool always produced a config that errored and needed several obscure manual changes to get working. All that to say, it seems like a particularly bad candidate for LLMs.

[–] scrubbles@poptalk.scrubbles.tech 2 points 1 day ago (1 children)

No, still "perfect" for LLMs. There's nuance, there are patterns to pick up on; it should be able to handle it perfectly. Enough people on Stack Overflow asked enough questions; if AI is what Google and Microsoft claim it is, it should have handled it.

[–] TrickDacy@lemmy.world 1 points 1 day ago

I searched this issue and didn't find anything very helpful. The new config format can be written in many slightly different ways, and there are a lot of variables in how your plugins and presets can be configured. It made perfect sense to me that the LLM couldn't do this upgrade for OP, since one tiny mistake means it won't work at all, usually with a weird error.

[–] coherent_domain@infosec.pub 1 points 1 day ago* (last edited 1 day ago) (1 children)

Then I am quite confused about what an LLM is supposed to help me with. I am not a programmer, and I am certainly not a TypeScript programmer. This is why I postponed my eslint upgrade for half a year: I don't have a lot of experience with TypeScript, besides one project in my college webdev class.

So if I can sit down for a couple of hours to port my rather simple eslint config, which is arguably the most mechanical task I have seen in my limited programming experience, and the LLM can't produce anything close to correct, then I am rather confused about what "real programmers" would use it for...

People here say boilerplate code, but honestly I don't quite recall the last time I needed to write a lot of boilerplate code.

I have also tried to use LLMs to debug SELinux and Docker containers on my homelab; unfortunately, they are absolutely useless at that as well.

[–] TrickDacy@lemmy.world 2 points 1 day ago* (last edited 1 day ago) (1 children)

With all due respect, how can you weigh in on programming so confidently when you admit to not being a programmer?

People tend to despise or evangelize LLMs. To me, github copilot has a decent amount of utility. I only use the auto-complete feature which does things like save me from typing 2-5 predictable lines of code that devs tend to type all the time. Instead of typing it all, I press tab. It's just a time saver. I have never used it like "write me a script or a function that does x" like some people do. I am not interested in that as it seems like a sad crutch that I'd need to customize so much anyway that I may as well skip that step.

Having said that, I'm noticing the copilot autocomplete seems to be getting worse over time. I'm not sure why it's worsening, but if it ever feels not worth it anymore I'll drop it, no harm no foul. The binary thinkers tend to think you're either a good dev who despises all forms of AI, or an idiot who tries to have a robot write all your code for you. As a dev for the past 20 years, I see no reason to choose between those two opposites. It can be useful in some contexts.

PS: did you try the eslint 8 -> 9 migration tool? If your config was simple enough, it likely would've done all or almost all of the work for you... It didn't fully work for me; I had to resolve several errors, because I tend to add custom plugins, presets, and rules that differ across projects.
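For anyone else attempting this, the migration tool mentioned here is invoked roughly like this (assuming the legacy config is an `.eslintrc.json` in the project root):

```shell
# Generates a draft eslint.config.mjs from the legacy config.
# Treat the output as a starting point; it usually needs hand-editing.
npx @eslint/migrate-config .eslintrc.json
```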

[–] coherent_domain@infosec.pub 2 points 1 day ago* (last edited 1 day ago) (1 children)

Sorry, the language of my original post might seem confrontational, but that was not my intention; I'm trying to find value in LLMs, since people are excited about them.

I am not a professional programmer, nor do I program any industrial-sized project at the moment. I am a computer scientist, and my current research project does not involve much programming. But I do teach programming to undergrad and master's students, so I want to understand what a good use case for this technology is, and when I can expect it to be helpful.

Indeed, I am frustrated by this technology, and that might have shifted my language further than I intended. Everyone is promoting this as a magically helpful tool for CS and math, yet I fail to see any good applications for either in my work, despite going back to it every couple of months or so.


I did try @eslint/migrate-config; unfortunately, it added a good amount of bloat and ended up not working.

So I just gave up and read the docs.

[–] TrickDacy@lemmy.world 2 points 1 day ago

Gotcha, no worries. I figured you were coming in good faith but wasn't certain. Who is pushing LLMs for programming that hard? In my bubble, which often includes Lemmy, most people HATE them for all uses. I get that tech bros and LinkedIn crazies probably push this tech for coding a lot, but outside of that, most devs I know IRL are either lukewarm on or dislike LLMs for dev work.