this post was submitted on 09 Sep 2025
501 points (98.6% liked)
Technology
you are viewing a single comment's thread
IMO, AI makes a really good demo for a lot of people, but once you start actually using it, the gains you get end up being fairly minimal unless you put in some serious work.
Reminds me of ten other technologies that were pitched as world-ending if you didn't get in early, but ended up more niche than you'd expect.
As someone who is excited about AI and thinks it's pretty neat, I agree we've needed a level-set on expectations. Vibe coding isn't a thing. Replacing skilled humans isn't a thing. It's a niche technology that never should've been sold as making everything you do with it better.
We've got far too many companies who think adoption of AI is a key differentiator. It's not. The key differentiator is almost always the people, though that's not as sexy as cutting edge technology.
The technology is fascinating and useful - for specific use cases and with an understanding of what it's doing and what you can get out of it.
From LLMs to diffusion models to GANs there are really, really interesting use cases, but the technology simply isn't at the point where it makes any fucking sense to have it plugged into fucking everything.
Leaving aside the questionable ethics many paid models' creators have relied on to build their models, the backlash against AI is understandable because it's being shoehorned into places it just doesn't belong.
I think eventually we may "get there" with models that don't make so many obvious errors in their output - in fact I think it's inevitable it will happen eventually - but we are far from that.
I do think the "fuck ai" stance is shortsighted, though, because of this. This is happening, it's advancing quickly, and while gains on LLMs are diminishing, we as a society really need to be having serious conversations about what things will look like when (and/or if, though I'm more inclined to believe it's when) we have functional models whose output is reliably accurate.
When it actually makes sense to replace virtually every profession with AI (it doesn't right now, not by a long shot), how are we going to deal with that as a society?
Evidently you haven't worked with me. I'm actually quite sexy.
I've got a friend who has to lead a team of apparently terrible developers in another country. He loves AI, because "if I have to deal with shitty code, send back PRs three times, and then do it myself anyway, I might as well use LLMs."
And he's like one of the nicest people I know, so if he's this frustrated, it must be BAD.
I had to do this myself at one point and it can be very frustrating.
It's basically the "tech makes lots of money" effect: the money attracts lots of people who don't really have any skill at programming and would never have gone into it otherwise.
We saw this in earlier tech booms, and we see it now in poorer countries to which lots of IT work has been outsourced. They have the same fraction of natural techies as everywhere else, but the demand is so large that masses of people with no real tech skill join the profession, get given actual work to do, and suck at it.
Also beware of cultural expectations and quirks. The team I had to manage was based in India, and during group meetings on the phone they would never admit that they did not understand something about a task, or that something was missing (I believe this was so as not to lose face in front of others), so they often ended up running with wrong assumptions and doing the wrong things. I solved this by talking to each member of that outsourced team after any such group meeting, individually and in a very non-judgemental way (I pretty much had to frame it as "me, being unsure if I explained things correctly"), to tease out any questions or doubts. That helped avoid tons of implementation errors caused by devs not understanding the requirements, or by the requirements themselves lacking details that devs then filled in with assumptions of their own.
That said, even their shit code (compared to what we on the other side, all senior developers or above, produced) had a consistent underlying logic throughout, with even the bugs being consistent (humans tend to be consistent in the kinds of mistakes they make), all of which helps with figuring out what is wrong. LLMs aren't even as consistent as incompetent humans.
Cyberspace, hypertext, multimedia, dot com, Web 2.0, cloud computing, SAAS, mobile, big data, blockchain, IoT, VR and so many more. Sure, they can be used for some things, but doing that takes time, effort and money. On top of that, you need to know exactly when to use these things and when to choose something completely different.
I'm so sick of "AI demos" at work. Every demo goes like this.
Meanwhile, they ignore that zero AI projects have actually stuck around or been used in a meaningful way.
As someone who sometimes makes demos of our own AI products at work for internal use, you have no idea how much time I spend finding demo cases where the LLM output isn't immediately recognizable as bad or wrong…
To be fair it’s pretty much only the LLM features that are like this. We have some more traditional AI features that work pretty well. I think they just tagged on LLM because that’s what’s popular right now.