this post was submitted on 10 Dec 2025
403 points (97.9% liked)
Not The Onion
No, we've all seen this movie. More like these bots are going to quickly figure out that their masters are stupider than dirt, and take over.
The big question is... is that a bad or good thing?
(Assuming the LLM is smart enough to actually be competent)
I wouldn't mind being a pampered pet; we could talk about it.
An LLM is never going to do that
Yes they will.
They're predictive language models, incapable of any kind of actual thought or sentience.
If something like that is created, it most certainly will not be an LLM.
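[Editor's note: "predictive" here means next-token prediction: given some context, the model outputs the statistically most likely continuation. A toy bigram sketch in Python illustrates the idea; it is a drastic simplification, since real LLMs use neural networks over billions of parameters and subword tokens rather than raw word counts.]

```python
from collections import Counter, defaultdict


def train(corpus: str):
    """Count which word follows which in the training text."""
    words = corpus.lower().split()
    successors = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        successors[current][nxt] += 1
    return successors


def predict_next(model, word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]


corpus = (
    "the cat sat on the mat "
    "the cat chased the mouse "
    "the dog sat on the rug"
)
model = train(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often in this corpus
print(predict_next(model, "sat"))  # "sat" is always followed by "on"
```

The model never "understands" anything; it only reproduces frequency patterns from its training data, which is the commenter's point scaled down to two lines of counting.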
We're at the start, where the primary goal is to just get the public to accept the concept. Once you have proof of concept, then you can really go nuts.
They're just placing the foundation. Everything that is being predicted will be built on this foundation. NOW is the time to start fighting back, not when they finally succeed, and it's too late.
Ok, but they won't be large language models
No, but the term Artificial Intelligence will be accepted, so when they start veering into SciFi territory, nobody will blink an eye.
OK, so I'll repeat my initial comment: an LLM will never do those things.
And I'll repeat that they know that, but most people don't. The point is to normalize the concept now, so that by the time it becomes exploitative, we won't be paying attention. That's why they're wrongly branding this current computer revolution (I've lived through many) as AI: it's all part of a larger plan to introduce real AI someday.
What we have today isn't true AI, but it's on the path, and getting close enough that true AI experts are starting to sound alarms and caution against overreach. We're in the marketing stage, where they convince you that you have bad breath so they can introduce a mouthwash product to solve the problem they just convinced you that you have.
Except LLMs are probably at the level of a flatworm when it comes to intelligence: they learn by eating each other and have a very hard time solving simple mazes.
Give em nukes to see what happens tho.
Would that be a bad thing, I wonder.