this post was submitted on 07 Dec 2025
785 points (97.8% liked)

Technology

Just want to clarify, this is not my Substack, I'm just sharing this because I found it insightful.

The author describes himself as a "fractional CTO" (no clue what that means, don't ask me) and advisor. His clients asked him how they could leverage AI. He decided to experience it for himself. From the author (emphasis mine):

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

top 50 comments
[–] edgemaster72@lemmy.world 172 points 23 hours ago (7 children)

Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive.

And all they'll hear is "not failure, metrics great, ship faster, productive" and go against your advice because who cares about three months later, that's next quarter, line must go up now. I also found this bit funny:

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me... I was proud of what I’d created.

Well you didn't create it, you said so yourself, not sure why you'd be proud, it's almost like the conclusion should've been blindingly obvious right there.

[–] AutistoMephisto@lemmy.world 84 points 23 hours ago (3 children)

The top comment on the article points that out.

It's an example of a far older phenomenon: Once you automate something, the corresponding skill set and experience atrophy. It's a problem that predates LLMs by quite a bit. If the only experience gained is with the automated system, the skills are never acquired. I'll have to find it but there's a story about a modern fighter jet pilot not being able to handle a WWII era Lancaster bomber. They don't know how to do the stuff that modern warplanes do automatically.

[–] logicbomb@lemmy.world 47 points 23 hours ago (1 children)

It's more like the ancient phenomenon of spaghetti code. You can throw enough code at something until it works, but the moment you need to make a non-trivial change, you're doomed. You might as well throw away the entire code base and start over.

And if you want an exact parallel, I've said this from the beginning, but LLM coding at this point is the same as offshore coding was 20 years ago. You make a request, get a product that seems to work, but maintaining it, even by the same people who created it in the first place, is almost impossible.

[–] drosophila@lemmy.blahaj.zone 17 points 17 hours ago* (last edited 17 hours ago)

The thing about this perspective is that I think it's actually overly positive about LLMs, as it frames them as just the latest in a long line of automations.

Not all automations are created equal. For example, compare using a typewriter to using a text editor. Besides a few details about the ink ribbon and movement mechanisms you really haven't lost much in the transition. This is despite the fact that the text editor can be highly automated with scripts and hot keys, allowing you to manipulate even thousands of pages of text at once in certain ways. Using a text editor certainly won't make you forget how to write like using ChatGPT will.

I think the difference lies in the relationship between the person and the machine. To paraphrase Cathode Ray Dude, people who are good at using computers deduce the internal state of the machine, mirror (a subset of) that state as a mental model, and use that to plan out their actions to get the desired result. People that aren't good at using computers generally don't do this, and might not even know how you would start trying to.

For years 'user friendly' software design has catered to that second group, as they are both the largest contingent of users and the ones that needed the most help. To do this, software vendors have generally done two things: try to move the necessary mental processes from the user's brain into the computer and hide the computer's internal state (so that it's not implied that the user has to understand it, so that a user that doesn't know what they're doing won't do something they'll regret, etc). Unfortunately this drives that first group of people up the wall. Not only does hiding the internal state of the computer make it harder to deduce, every "smart" feature they add to try to move this mental process into the computer itself only makes the internal state more complex and harder to model.

Many people assume that if this is the way you think about software you are just an elitist gatekeeper, and you only want your group to be able to use computers. Or you might even be accused of ableism. But the real reason is what I described above, even if it's not usually articulated in that way.

Now, I am of the opinion that the 'mirroring the internal state' method of thinking is the superior way to interact with machines, and the approach to user friendliness I described has actually done a lot of harm to our relationship with computers at a societal level. (This is an opinion I suspect many people here would agree with.) And yet that does not mean that I think computers should be difficult to use. Quite the opposite, I think that modern computers are too complicated, and that in an ideal world their internal states and abstractions would be much simpler and more elegant, but no less powerful. (Elaborating on that would make this comment even longer though.) Nor do I think that computers shouldn't be accessible to people with different levels of ability. But just as a random person in a store shouldn't grab a wheelchair user's chair handles and start pushing them around, neither should Windows (for example) start changing your settings on updates without asking.

Anyway, all of this is to say that I think LLMs are basically the ultimate in that approach to 'user friendliness'. They try to move more of your thought process into the machine than ever before, their internal state is more complex than ever before, and it is also more opaque than ever before. They also reflect certain values endemic to the corporate system that produced them: that the appearance of activity is more important than the correctness or efficacy of that activity. (That is, again, a whole other comment though.) The result is that they are extremely mind numbing, in the literal sense of the phrase.

[–] ignirtoq@feddit.online 117 points 23 hours ago (4 children)

We’re about to face a crisis nobody’s talking about. In 10 years, who’s going to mentor the next generation? The developers who’ve been using AI since day one won’t have the architectural understanding to teach. The product managers who’ve always relied on AI for decisions won’t have the judgment to pass on. The leaders who’ve abdicated to algorithms won’t have the wisdom to share.

Except we are talking about that, and the tech bro response is "in 10 years we'll have AGI and it will do all these things all the time permanently." In their roadmap, there won't be a next generation of software developers, product managers, or mid-level leaders, because AGI will do all those things faster and better than humans. There will just be CEOs, the capital they control, and AI.

What's most absurd is that, if that were all true, that would lead to a crisis much larger than just a generational knowledge problem in a specific industry. It would cut regular workers entirely out of the economy, and regular workers form the foundation of the economy, so the entire economy would collapse.

"Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders."

[–] grue@lemmy.world 55 points 22 hours ago

That's why they're all-in on authoritarianism.

[–] UnspecificGravity@piefed.social 20 points 20 hours ago

Yep, and now you know why all the tech companies suddenly became VERY politically active. This future isn't compatible with democracy. Once these companies no longer provide employment, their benefit to society becomes a big fat question mark.

[–] HasturInYellow@lemmy.world 20 points 21 hours ago* (last edited 20 hours ago) (3 children)

According to a study, the ~~lower~~ top 10% accounts for something like 68% of cash flow in the economy. Us plebs are being cut out altogether.

That being said, I think if people can't afford to eat, things might get bad. We will probably end up a kept population in these ghouls' fever dreams.

Edit: I'm an idiot.

[–] raspberriesareyummy@lemmy.world 59 points 18 hours ago (13 children)

So there's actual developers who could tell you from the start that LLMs are useless for coding, and then there's this moron & similar people who first have to fuck up an ecosystem before believing the obvious. Thanks fuckhead for driving RAM prices through the ceiling... And for wasting energy and water.

[–] psycotica0@lemmy.ca 92 points 16 hours ago (4 children)

I can at least kinda appreciate this guy's approach. If we assume that AI is a magic bullet, then it's not crazy to assume we, the existing programmers, would resist it just to save our own jobs. Or we'd complain because it doesn't do things our way, but we're the old way and this is the new way. So maybe we're just being whiny and can be ignored.

So he tested it to see for himself, and what he found was that he agreed with us, that it's not worth it.

Ignoring experts is annoying, but doing some of your own science and getting first-hand experience isn't always a bad idea.

[–] 5too@lemmy.world 39 points 13 hours ago

And not only did he see for himself, he wrote up and published his results.

[–] bassomitron@lemmy.world 32 points 15 hours ago (3 children)

100% this. The guy was literally a consultant and a developer. It'd just be bad business for him to outright dismiss AI without having actual hands-on experience with said product. Clients want that type of experience and knowledge when paying a business to give them advice and develop a product for them.

[–] khepri@lemmy.world 24 points 17 hours ago (2 children)

They are useful for doing the kind of boilerplate boring stuff that any good dev should have largely optimized and automated already. If it's 1) dead simple and 2) extremely common, then yeah an LLM can code for you, but ask yourself why you don't have a time-saving solution for those common tasks already in place? As with anything LLM, it's decent at replicating how humans in general have responded to a given problem, if the problem is not too complex and not too rare, and not much else.

[–] lambdabeta@lemmy.ca 22 points 16 hours ago

That's exactly what I so often find myself saying when people show off some neat thing that a code bot "wrote" for them in x minutes after only y minutes of "prompt engineering". I'll say, yeah I could also do that in y minutes of (bash scripting/vim macroing/system architecting/whatever), but the difference is that afterwards I have a reusable solution that: I understand, is automated, is robust, and didn't consume a ton of resources. And as a bonus I got marginally better as a developer.

It's funny that if you stick them in an RPG and give them an ability to "kill any level 1-x enemy instantly, but don't gain any xp for it", they'd all see it as the trap it is, but can't see how that's what AI so often is.

[–] InvalidName2@lemmy.zip 17 points 17 hours ago (5 children)

And then there are actual good developers who could or would tell you that LLMs can be useful for coding, in the right context and if used intelligently. No harm, for example, in having LLMs build out some of your more mundane code like unit/integration tests, have it help you update your deployment pipeline, generate boilerplate code that's not already covered by your framework, etc. That it's not able to completely write 100% of your codebase perfectly from the get-go does not mean it's entirely useless.
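
For instance, the kind of mundane test scaffolding people happily delegate looks roughly like the sketch below (a hypothetical pytest example; the slugify() helper is a made-up stand-in, defined inline so the snippet runs on its own, and none of this is from the article):

    # Boilerplate tests of the sort an LLM can churn out quickly.
    # slugify() is a hypothetical helper, defined here so the example is self-contained.
    import re
    import pytest

    def slugify(text: str) -> str:
        if not text.strip():
            raise ValueError("empty input")
        return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_collapses_punctuation():
        assert slugify("Rock & Roll!") == "rock-roll"

    def test_empty_input_raises():
        with pytest.raises(ValueError):
            slugify("   ")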

[–] Soggy@lemmy.world 28 points 16 hours ago (1 children)

Other than that it's work that junior coders could be doing, to develop the next generation of actual good developers.

[–] SreudianFlip@sh.itjust.works 15 points 16 hours ago* (last edited 16 hours ago) (3 children)

Yes, and that's exactly what everyone forgets about automating cognitive work. Knowledge or skill needs to be intergenerational or we lose it.

If you have no junior developers, who will turn into senior developers later on?

[–] CarbonatedPastaSauce@lemmy.world 56 points 23 hours ago* (last edited 23 hours ago) (12 children)

Something any (real, trained, educated) developer who has even touched AI in their career could have told you. Without a 3 month study.

[–] AutistoMephisto@lemmy.world 68 points 23 hours ago* (last edited 23 hours ago) (1 children)

What's funny is this guy has 25 years of experience as a software developer. But three months was all it took to make it worthless. He also said it was harder than if he'd just written the code himself. Claude would make a mistake, he would correct it. Claude would make the same mistake again, having learned nothing, and he'd fix it again. Constant firefighting, he called it.

[–] felbane@lemmy.world 20 points 22 hours ago (3 children)

As someone who has been shoved in the direction of using AI for coding by my superiors, that's been my experience as well. It's fine at cranking out stackoverflow-level code regurgitation and mostly connecting things in a sane way if the concept is simple enough. The real breakthrough would be if the corrections you make would persist longer than a turn or two. As soon as your "fix-it prompt" is out of the context window, you're effectively back to square one. If you're expecting it to "learn" you're gonna have a bad time. If you're not constantly double checking its output, you're gonna have a bad time.

[–] vpol@feddit.uk 54 points 19 hours ago (6 children)

The developers can’t debug code they didn’t write.

This is a bit of a stretch.

[–] Xyphius@lemmy.ca 41 points 18 hours ago

Agreed. 50% of my job is debugging code I didn't write.

[–] funkless_eck@sh.itjust.works 17 points 18 hours ago (2 children)

I mean I was trying to solve a problem t'other day (hobbyist) - it told me to create a

function foo(bar): await object.foo(bar)

then in object

function foo(bar): _foo(bar)

function _foo(bar): original_object.foo(bar)

like literally passing a variable between three wrapper functions in two objects that did nothing except pass the variable back to the original function in an infinite loop

add some layers and complexity and it'd be very easy to get lost
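
A rough Python reconstruction of that circular wrapper pattern (the class and method names here are made up for illustration; the original suggestion was async but the loop is the same): calling foo() just ping-pongs the value between the two objects until the recursion limit blows up.

    # Hypothetical reconstruction of the "three wrappers, two objects" suggestion.
    class Wrapper:
        def __init__(self, target):
            self.target = target

        def foo(self, bar):
            return self._foo(bar)

        def _foo(self, bar):
            # bounces straight back to the original object's foo()
            return self.target.foo(bar)

    class Original:
        def __init__(self):
            self.helper = Wrapper(self)

        def foo(self, bar):
            # "delegates" to the wrapper, which delegates right back
            return self.helper.foo(bar)

    Original().foo(42)  # RecursionError: nothing happens except the variable being passed around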

[–] _g_be@lemmy.world 14 points 15 hours ago (4 children)

Vibe coders can't debug code because they didn't write it.

[–] pdxfed@lemmy.world 52 points 15 hours ago (4 children)

Great article, brave and correct. Good luck, though, getting through to the same leaders who blindly believe in a magical trend for this or next quarter's numbers; they don't care about things a year away, let alone 10.

I work in HR and was struck by the parallel with management jobs being gutted by major corps starting in the 80s and 90s during "downsizing", corps that either never replaced those jobs or offshored them. They had the Big 4 telling them it was the future of business. Know who is now providing consultation to them on why they have poor ops, processes, high turnover, etc.? Take $ on the way in, and on the way out. AI is just the next in a long line of smart people pretending they know your business while you abdicate knowing your business or employees.

Hope leaders can be a bit braver and wiser this go 'round so we don't get to a cliff's edge in software.

[–] Unlearned9545@lemmy.world 47 points 17 hours ago (3 children)

Fractional CTO: Some small companies benefit from the senior experience of this kind of executive but don't have the money or the need to hire one full time. These executives spend a fraction of their time acting as C-suite for various companies.

[–] rimu@piefed.social 45 points 21 hours ago* (last edited 21 hours ago) (5 children)

FYI this article was written with an LLM.

[attached image]

Don't believe a story just because it confirms your view!

[–] AmbiguousProps@lemmy.today 37 points 21 hours ago (5 children)

I've heard that these tools aren't 100% accurate, but your last point is valid.

[–] LiveLM@lemmy.zip 31 points 21 hours ago (2 children)

Aren't these LLM detectors super inaccurate?

[–] dsilverz@calckey.world 20 points 21 hours ago (2 children)

@LiveLM@lemmy.zip @rimu@piefed.social

This!

Also, the irony: those are AI tools used by anti-AI people, who use AI to try and (roughly) determine if content is AI, by reading the output of an AI. Even worse: as far as I know, they're paid tools (at least every tool I saw in this regard required a subscription), so anti-AI people pay for an AI in order to (supposedly) detect AI slop. Truly "AI-rony", pun intended.

[–] flamingo_pinyata@sopuli.xyz 39 points 23 hours ago* (last edited 23 hours ago) (1 children)

“fractional CTO”(no clue what that means, don’t ask me)

For those who were also interested to find out what this means: Consultant and advisor in a part-time role, paid to make decisions that would usually fall under the scope of a CTO, but for smaller companies who can't afford a full-time experienced CTO.

[–] zerofk@lemmy.zip 29 points 22 hours ago (3 children)

That sounds awful. You get someone who doesn’t really know the company or product, they take a bunch of decisions that fundamentally affect how you work, and then they’re gone.

… actually, that sounds exactly like any other company.

[–] bigfondue@lemmy.world 19 points 22 hours ago

It's smart. Not every company has a clueless rich guy to hand all the money to.

[–] dsilverz@calckey.world 37 points 21 hours ago (9 children)

@AutistoMephisto@lemmy.world @technology@lemmy.world

I've been dealing with programming since I was 9 y.o., with my professional career in DevOps starting several years later, in 2013. I dealt with lots of others' code, legacy code, very shitty code (especially code done by my "managers" who cosplayed as programmers), and tons of technical debt.

Even though I'm quite the LLM power-user (because I'm a person devoid of other humans in my daily existence), I never relied on LLMs to "create" my code: rather, what I did a lot was tinkering with different LLMs to "analyze" my own code that I wrote myself, both to experiment with their limits (e.g. I wrote a lot of cryptic, code-golf one-liners and fed them to the LLMs in order to test their ability to "connect the dots" on whatever was happening behind the cryptic syntax) and to try and use them as a pair of external eyes beyond mine (due to their ability to "connect the dots", and by that I mean their ability, as fancy Markov chains, to relate tokens to other tokens with similar semantic proximity).

I did test them (especially Claude/Sonnet) for their "ability" to output code, not intending to use the code because I'm better off writing my own thing, but you likely know the maxim: one can't criticize what they don't know. And I tried to know them so I could criticize them. To me, the code is... pretty readable. Definitely awful code, but readable nonetheless.

So, when the person says...

The developers can’t debug code they didn’t write.

...even though they argue they have more than 25 years of experience, it feels to me like they don't.

One thing is saying "developers find it pretty annoying to debug code they didn't write", a statement I'd totally agree with! It's awful to try to debug others' (human or otherwise) code, because you need to try to put yourself in their shoes without knowing what their shoes are like... But it's doable, especially by people who have dealt with programming logic since childhood.

Saying "developers can't debug code they didn't write", to me, seems like a layperson who doesn't belong to the field of Computer Science, doesn't like programming, and/or only pursued a "software engineer" career purely because of money/capitalistic mindset. Either way, if a developer can't debug other's code, sorry to say, but they're not developers!

Don't get me wrong: I'm not intending to be prideful or pretending to be awesome, this is beyond my person, I'm nothing, I'm no one. I abandoned my career because I hate the way the technology is growing more and more enshittified. Working as a programmer for capitalistic purposes ended up depleting the joy I used to have back when I coded on a daily basis. I'm not on the "job market" anymore, so what I'm saying is based on more than 10 years of former professional experience. And my experience says: a developer that can't put themselves into at least trying to understand the worst code out there can't call themselves a developer, full stop.

[–] jj4211@lemmy.world 16 points 20 hours ago

An LLM can generate code like an intern getting ahead of their skis. If you let it generate enough code, it will do some gnarly stuff.

Another facet is the nature of mistakes it makes. After years of reviewing human code, I have this tendency to take some things for granted, certain sorts of things a human would just obviously get right and I tend not to think about it. AI mistakes are frequently in areas my brain has learned to gloss over and take on faith that the developer probably didn't screw that part up.

AI generally generates the same sorts of code that I hate to encounter when humans write it, and debugging it is a slog. Lots of repeated code, not well factored. You would assume that if the same exact thing is done in many places, you'd have a common function with common behavior, but no: the AI repeated itself and didn't always get consistent behavior out of identical requirements.
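
As a hypothetical illustration of that kind of duplication (not code from the article): the same check copy-pasted with slight drift, versus the single shared helper a reviewer would expect.

    # What the generated code tends to look like: the check repeated, slightly differently each time.
    def create_user(name):
        if name is None or name.strip() == "":
            raise ValueError("name required")
        return {"name": name.strip()}

    def rename_user(user, name):
        if not name:  # subtly different: rejects "" but happily accepts "   "
            raise ValueError("name required")
        user["name"] = name
        return user

    # The factored version with one common behavior.
    def require_name(name):
        if name is None or not name.strip():
            raise ValueError("name required")
        return name.strip()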

His statement is perhaps an oversimplification, but I get it. Fixing code like that is sometimes more trouble than just doing it yourself from the onset.

Now I can see the value in generating code in digestible pieces, discarding it when the LLM gets oddly verbose for a simple function, or when it gets it wrong, or if you can tell by looking that you'd hate to debug that code. But the code generation can just be a huge mess, and if you did a large project exclusively through prompting, I could see the end result being just a hopeless mess. I'm frankly surprised he could even declare an initial "success", but it was probably "tutorial ware", which would be ripe fodder for the code generators.

[–] Agent641@lemmy.world 31 points 8 hours ago (1 children)

I cannot understand and debug code written by AI. But I also cannot understand and debug code written by me.

Let's just call it even.

[–] HugeNerd@lemmy.ca 30 points 19 hours ago (3 children)

Computers are too powerful and too cheap. Bring back COBOL, painfully expensive CPU time, and some sort of basic knowledge of what's actually going on.

Pain for everyone!

[–] Thorry@feddit.org 16 points 19 hours ago (1 children)

Yeah I think around the Pentium 200 MHz point was the sweet spot. Powerful enough to do a lot of things, but not so powerful that software can be as inefficient and wasteful as it is today.

[–] deathbird@mander.xyz 24 points 12 hours ago

I think this kinda points to why AI is pretty decent for short videos, photos, and texts. It produces outputs that one applies meaning to, and humans are meaning-making animals. A computer can't overlook or rationalize a coding error the same way.

[–] Suffa@lemmy.wtf 19 points 9 hours ago (13 children)

AI is really great for small apps. I've saved so many weekend hours that would otherwise be spent coding some small thing I only need a few times; now I can get an AI to spit it out for me.

But for anything big it's fucking stupid; it cannot track large projects at all.

[–] SocialMediaRefugee@lemmy.world 19 points 18 hours ago (2 children)

Just sell it to AI customers for AI cash.

[–] KazuyaDarklight@lemmy.world 18 points 23 hours ago (1 children)

My big fear with this stuff is security. It just seems so "easy", without knowledgeable people, for AI to write a product that functions from a user perspective but is wide open to attack.
