
Just want to clarify: this is not my Substack; I'm just sharing it because I found it insightful.

The author describes himself as a "fractional CTO" (no clue what that means, don't ask me) and advisor. His clients asked him how they could leverage AI, so he decided to experience it for himself. From the author (emphasis mine):

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

[–] raspberriesareyummy@lemmy.world 59 points 18 hours ago (6 children)

So there are actual developers who could tell you from the start that LLMs are useless for coding, and then there's this moron and similar people who first have to fuck up an ecosystem before believing the obvious. Thanks, fuckhead, for driving RAM prices through the ceiling... and for wasting energy and water.

[–] psycotica0@lemmy.ca 92 points 16 hours ago (3 children)

I can at least kinda appreciate this guy's approach. If we assume that AI is a magic bullet, then it's not crazy to assume we, the existing programmers, would resist it just to save our own jobs. Or we'd complain because it doesn't do things our way, but ours is the old way and this is the new way. So maybe we're just being whiny and can be ignored.

So he tested it for himself, and what he found was that he agreed with us: it's not worth it.

Ignoring experts is annoying, but doing some of your own science and getting first-hand experience isn't always a bad idea.

[–] 5too@lemmy.world 39 points 13 hours ago

And not only did he see for himself, he wrote up and published his results.

[–] bassomitron@lemmy.world 32 points 15 hours ago (1 children)

100% this. The guy was literally a consultant and a developer. It'd just be bad business for him to outright dismiss AI without having actual hands-on experience with said product. Clients want that type of experience and knowledge when paying a business to give them advice and develop a product for them.

[–] khepri@lemmy.world 24 points 17 hours ago (2 children)

They are useful for the kind of boilerplate, boring stuff that any good dev should have largely optimized and automated already. If it's 1) dead simple and 2) extremely common, then yeah, an LLM can code it for you; but ask yourself why you don't already have a time-saving solution for those common tasks in place. As with anything LLM, it's decent at replicating how humans in general have responded to a given problem, provided the problem is neither too complex nor too rare, and not much else.

[–] lambdabeta@lemmy.ca 22 points 16 hours ago

That's exactly what I so often find myself saying when people show off some neat thing that a code bot "wrote" for them in x minutes after only y minutes of "prompt engineering". I'll say: yeah, I could also do that in y minutes of (bash scripting/vim macroing/system architecting/whatever), but the difference is that afterwards I have a reusable solution that I understand, that is automated and robust, and that didn't consume a ton of resources. And as a bonus, I got marginally better as a developer.
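
For a concrete flavor of what such a reusable solution can look like, here's a minimal sketch; the chore (CSV to JSON) and the file names are made up for illustration:

```python
# Minimal sketch of a "write once, reuse forever" chore script of the
# kind described above. The task (CSV -> JSON) and paths are invented.
import csv
import json
import sys

def csv_to_json(csv_path: str, json_path: str) -> None:
    """Read a CSV file and write its rows out as a JSON array of objects."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    with open(json_path, "w") as f:
        json.dump(rows, f, indent=2)

if __name__ == "__main__":
    # Usage: python csv_to_json.py input.csv output.json
    csv_to_json(sys.argv[1], sys.argv[2])
```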

It's funny: if you stuck them in an RPG and gave them an ability to "kill any level 1-x enemy instantly, but gain no XP for it", they'd all see it for the trap it is, yet they can't see that that's what AI so often is.

[–] raspberriesareyummy@lemmy.world 5 points 12 hours ago

As you said, "boilerplate" code can be script-generated, and there are IDEs that already do this, but deterministically, so you don't have to proofread every single line to guard against catastrophic security or crash flaws.
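
To illustrate the difference: a deterministic generator can be as simple as a template plus a field list, and the same input always produces the same output, so it only has to be verified once. A toy sketch (the class and field names are invented):

```python
# Toy deterministic boilerplate generator: the same field list in means
# the same class definition out, every run. Names are invented examples.
FIELDS = [("name", "str"), ("email", "str"), ("age", "int")]

def generate_class(class_name: str, fields: list[tuple[str, str]]) -> str:
    args = ", ".join(f"{n}: {t}" for n, t in fields)
    assigns = "\n".join(f"        self.{n} = {n}" for n, _ in fields)
    return f"class {class_name}:\n    def __init__(self, {args}):\n{assigns}\n"

print(generate_class("User", FIELDS))
```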

[–] InvalidName2@lemmy.zip 17 points 17 hours ago (2 children)

And then there are actual good developers who could or would tell you that LLMs can be useful for coding, in the right context and if used intelligently. No harm, for example, in having an LLM build out some of your more mundane code like unit/integration tests, help you update your deployment pipeline, or generate boilerplate that's not already covered by your framework. That it can't write 100% of your codebase perfectly from the get-go doesn't mean it's entirely useless.
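
For what it's worth, the kind of mundane test scaffolding being described looks something like the sketch below; slugify() and the expected values are hypothetical stand-ins for real project code:

```python
# The sort of rote unit-test scaffolding one might delegate to an LLM
# and then review. slugify() and its cases are hypothetical examples.
import unittest

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_multiple_words(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_mixed_case(self):
        self.assertEqual(slugify("LLMs Are Tools"), "llms-are-tools")

    def test_extra_whitespace(self):
        self.assertEqual(slugify("  spaced   out  "), "spaced-out")

if __name__ == "__main__":
    unittest.main()
```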

[–] Soggy@lemmy.world 28 points 16 hours ago (1 children)

Other than that, it's work that junior coders could be doing, to develop the next generation of actual good developers.

[–] SreudianFlip@sh.itjust.works 15 points 16 hours ago* (last edited 16 hours ago) (2 children)

Yes, and that's exactly what everyone forgets about automating cognitive work: knowledge and skill need to be intergenerational, or we lose them.

If you have no junior developers, who will turn into senior developers later on?

[–] pinball_wizard@lemmy.zip 6 points 16 hours ago

If you have no junior developers, who will turn into senior developers later on?

At least it isn't my problem. As long as I have CrowdStrike, Cloudflare, Windows 11, AWS us-east-1, and log4j... I can just keep enjoying today's version of the Internet, unchanged.

[–] MisterOwl@lemmy.world 2 points 14 hours ago (1 children)
[–] SreudianFlip@sh.itjust.works 2 points 10 hours ago

Al is a pretty good guy but he can't be everywhere. Maybe he can use some A.I. to help!

[–] Randelung@lemmy.world 8 points 18 hours ago (1 children)

Maybe they'll listen to one of their own?

[–] raspberriesareyummy@lemmy.world -3 points 12 hours ago

The kind of useful article I would expect, then, is one explaining why word prediction != AI.

[–] jali67@lemmy.zip 3 points 16 hours ago* (last edited 16 hours ago)

Don’t worry. The people on LinkedIn and tech executives tell us it will transform everything soon!

[–] ImmersiveMatthew@sh.itjust.works -3 points 15 hours ago (1 children)

I really have not found AI to be useless for coding. I have found it extremely useful, and it has saved me hundreds of hours. It is not without its faults or frustrations, but it really is a tool I would not want to be without.