[–] chunes@lemmy.world -2 points 1 month ago (8 children)

Laugh it up while you can.

We're in the "haha it can't draw hands!" phase of coding.

[–] GreenKnight23@lemmy.world 8 points 1 month ago (6 children)

someone drank the koolaid.

LLMs will never code for two reasons.

one, because they only regurgitate facsimiles of code. this is because the models are trained to ingest content and reproduce an interpretation of the collected content they were trained on.

software development is more than that and requires strategic thought and conceptualization, both of which are decades away from AI at best.

two, because the prevalence of LLM-generated code is destroying the training data used to build models. think of it like making a copy of a copy of a copy, et cetera.

the more popular it becomes, the worse the training data becomes. the worse the training data becomes, the weaker the model. the weaker the model, the less likely it is to see any real use.

so yeah. we're about 100 years from the whole "it can't draw hands" stage, because it doesn't even know what hands are.

[–] chunes@lemmy.world -2 points 1 month ago* (last edited 1 month ago) (5 children)

This is just your ego talking. You can't stand the idea that a computer could be better than you at something you devoted your life to. You're not special. Coding is not special. It happened to artists, chess players, etc. It'll happen to us too.

I'll listen to experts who study the topic over an internet rando. AI model capabilities have yet to show any signs of slowing their exponential growth.

[–] wischi@programming.dev 4 points 1 month ago* (last edited 1 month ago)

Coding isn't special, you're right, but it's a thinking task, and LLMs (including reasoning models) don't know how to think. LLMs are knowledgeable because they memorized a lot of the data and patterns in their training data, but they didn't learn to think from that. That's why LLMs can't replace humans.

That certainly doesn't mean software can't be smarter than humans. It will be, it's just a matter of time, but to get there we'll likely need AGI first.

To see for yourself that LLMs can't think, try playing ASCII tic-tac-toe (XXO) against any of those models. They are completely lost at it, even though each of them "saw" the entire Wikipedia article on tic-tac-toe during training - that it's a solved game, the different strategies, how to consistently force a draw - and still they can't do it. They lose most games against my four-year-old niece, and she doesn't even play perfect xxo.
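
If you want to try it yourself, here's a rough sketch of a harness (Python, assuming the openai client package; the model name is just a placeholder - swap in whatever model you want to test):

```python
# Rough sketch of an ASCII tic-tac-toe harness for testing an LLM.
# Assumes the `openai` Python package; MODEL is a placeholder name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # placeholder - substitute the model you want to test

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def render(board):
    """Draw the board as ASCII - the same representation the model sees."""
    return "\n---------\n".join(" | ".join(board[i:i + 3]) for i in (0, 3, 6))

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def model_move(board):
    """Ask the model for its move. No error handling: a chatty or
    malformed reply crashes the parse, which is rather the point."""
    prompt = ("We're playing tic-tac-toe. You are X. Cells are numbered 0-8, "
              "left to right, top to bottom. Current board:\n"
              + render(board)
              + "\nReply with ONLY the number of the empty cell you pick.")
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return int(resp.choices[0].message.content.strip())

board = [" "] * 9
for turn in range(9):
    if turn % 2 == 0:            # model plays X on even turns
        move = model_move(board)
        if board[move] != " ":   # an occupied cell is an illegal move
            print("Model picked an occupied cell - it loses.")
            break
        board[move] = "X"
    else:                        # you play O on odd turns
        board[int(input("Your move (0-8): "))] = "O"
    print(render(board), "\n")
    if winner(board):
        print(winner(board), "wins!")
        break
else:
    print("Draw.")
```

Treating an illegal move as an instant loss keeps the harness simple; you could also re-prompt the model instead and count how many retries it needs.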

I wouldn't trust anything that's claimed to handle thinking tasks, but can't even beat my niece at xxo, to write firmware for cars or airplanes.

LLMs are great when used like search engines or interactive versions of Wikipedia/Stack Overflow. But they certainly can't think - for now, at least, and real thinking models will likely need a different architecture than LLMs have.
