How is what you're describing different from what the author is talking about? Isn't it essentially the same as "AI, do this thing for me", "no, not like that", "OK, that's better"? The trouble the author describes, i.e. the solution being difficult to change, or having no confidence that it can be changed safely, is still the same.
This poster https://calckey.world/notes/afzolhb0xk is more articulate than I was.
The difference with this "spec-driven" approach is that the entire process is repeatable by the AI once you've got the spec sorted. You no longer work on the code; you work on the spec, which can be a collection of files, files in folders, whatever, but the goal is some kind of determinism, I think.
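To make that concrete, here's a hypothetical layout (the file names are my invention, not from the article or the linked post): a folder of plain documents that the AI regenerates the code from.

```
specs/
  overview.md       # what the app does, who it's for
  data-model.md     # entities, fields, invariants
  api/
    auth.md         # login/session flows, error cases
    billing.md      # plans, webhooks, retry rules
  ui/
    components.md   # component and prop conventions
```

As I understand it, you then edit these files and re-run the generation, rather than hand-editing the output.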
I use it at a much smaller scale and haven't really cared much for the "spec as truth" approach myself, at this level. I also work almost exclusively on NextJS apps with the usual Tailwind + etc. stack. I would certainly not trust a developer without experience in that stack to get "correct" code out of an AI, but it's sort of remarkable how I can slowly document the patterns of my own codebase and auto-include them as context in every prompt (or however Cursor does it), so that everything the LLM suggests gets LLM-reviewed against my human-written "specs". And doubly neat is that the resulting documentation of patterns turns out to be really helpful to developers who join or inherit the codebase.
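For example, one of those pattern docs might look like this (contents invented for illustration; the exact auto-include mechanism depends on the tool, e.g. Cursor's rules files):

```markdown
# Data fetching patterns

- Fetch in server components; client components receive data as props.
- All requests go through the wrappers in `lib/api.ts`, which handle
  auth headers and error normalization.
- Loading states use the shared `<Skeleton>` from `components/ui`.
- Never call `fetch` directly in a page component; add a wrapper instead.
```

Nothing fancy, but once it's attached to every prompt, the model's suggestions get checked against it.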
I think the author / developer in the article might not have been experienced enough to direct the LLMs to build good stuff, but tools like React, NextJS, Tailwind, and so on are all about patterns that help us all build better stuff. The LLMs were like "8 year olds" (as someone else in this thread put it); now they're more like somewhat insightful 14 year olds, and where they'll be in another 5 years... who knows.
Anyway, just saying. They're here to stay, and they're going to get much better.
Eh, probably. At least for as long as there is corporate will to shove them down the rest of our throats. But right now, in terms of sheer numbers, humans still rule, and LLMs are pissing off more of us every day, while their makers find it increasingly hard to forge ahead in spite of us, which they're having to do ever more often.
They're already getting so much worse, with what is essentially the digital equivalent of kuru (models degrading as they feed on their own output), that I'd be willing to bet they've already jumped the shark.
If their makers and funders had been patient, and worked the present nightmares out privately, they'd have a far better chance than they do right now, IMO.
Simply put, LLMs/"AI" were released far too soon, and with far too much "I Have a Dream!" fairy-tale promotion that the reality never came close to living up to, and then shoved with brute corporate force down too many throats.
As a result, you now have more and more people across every walk of society pushed into cleaning up the excesses of a product they never wanted in the first place: forced to share their communities and their energy bills with datacenters, on top of depleted water reserves, privacy violations, and excessive copyright violations and theft of creative property; having to seek out non-AI operating systems just to avoid it... right down to the subject of this thread, the corruption of even the most basic video search.
Can LLMs figure out how to override an angry mob, or resolve a situation wherein the vast majority of people are against the current iteration of AI even though its makers need us all to be avid, ignorant consumers of it for it to succeed? Because that's where we're heading, and we're already farther down that road than the makers ever foresaw; apparently they had no idea just how thin the appeal is wearing on the ground for the rest of us.
So yeah, I could be wrong, and you might be right. But at this point, unless something very significant changes, I'd put money on you being mostly wrong.