this post was submitted on 03 May 2025
828 points (97.7% liked)
Technology
AI is a fucking curse upon humanity. The tiny morsels of good it can do are FAR outweighed by the destruction it causes. Fuck anyone involved with perpetuating this nightmare.
Today's "AI" is just machine learning code. It's been around for decades and does a lot of good. It's most often used for predictive analytics, for example to facilitate patient flow in healthcare and to make sense of large volumes of data quickly so it can assist providers, case managers, and social workers. It's also used in other industries that receive little attention.
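As a purely illustrative sketch of the kind of predictive analytics being described, here's what a toy admission-prediction model might look like; the features, data, and "rule" below are made up for illustration, not taken from any real healthcare system:

```python
# Hypothetical sketch only: a toy "will this patient be admitted?" classifier,
# the sort of predictive model used for patient-flow planning.
# Features and labels are synthetic, not from any real healthcare dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Made-up features: age (scaled), triage severity score, current ward occupancy.
X = rng.normal(size=(1000, 3))
# Made-up rule: higher age and severity push toward admission, plus some noise.
y = (0.8 * X[:, 0] + 1.2 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

In practice the model and features would be far richer, but the point stands: this is bog-standard machine learning, not an LLM.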
Even some large language models can do good; it's the shitty people that use them for shitty purposes that ruin it.
Sure, I know what it is and what it's good for, I just don't think the juice is worth the squeeze. The companies developing AI HAVE to shove it everywhere to make it viable, and doing that is destructive to our entire civilization. The theft of folks' work, the scamming, the deepfakes, the social media propaganda bots, the climate-raping energy consumption, the loss of skill and knowledge, the enshittification of writing and the arts, the list goes on and on. It's a dead end that humanity will regret pursuing if we survive this century. The fact that we get a paltry handful of positives is cold comfort for our ruin.
This statement tells me you don't understand how many industries are using machine learning and how many lives it saves.
That's great. We can schedule it like heroin for professional use only, then.
They are just harmless fireworks. They are even useful for warning ships at sea of dangerous tides.
Damn this AI, posting and wreaking all this mayhem on poor unsuspecting humans all by itself...
“guns don’t kill people, people kill people”
Yes. Fuck the owners and fuck their machine guns.
I disagree. It may seem that way if that's all you look at and/or you buy the BS coming from the LLM hype machine, but IMO it's really no different than the leap to the internet or search engines. Yes, we open ourselves up to a ton of misinformation, a shifting job market, etc., but we also get a suite of interesting tools that'll shake themselves out over the coming years and help improve productivity.
It's a big change, for sure, but it's one we'll navigate, probably in much the same way we've navigated other challenges, like scams involving spoofed web pages or fake calls. We'll figure out who to trust and how to verify that we're getting the right info from them.
LLMs are not like the birth of the internet. LLMs are more like what came after, when marketing took over the roadmap. We had AI before LLMs, and it delivered high-quality search results. Now we have search powered by LLMs, and the quality is dramatically lower.
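For a rough sense of what pre-LLM search ranking looked like, here is a minimal keyword-based (TF-IDF) sketch; it's a deliberate simplification for illustration, not how any particular engine actually works:

```python
# Minimal sketch of classic keyword search: rank documents against a query by
# cosine similarity of TF-IDF vectors. Toy corpus for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "machine learning helps hospitals predict patient flow",
    "large language models generate text from prompts",
    "search engines rank web pages by relevance signals",
]
query = "how do search engines rank pages"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)
query_vector = vectorizer.transform([query])

scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.2f}  {doc}")
```

Ranking like this just retrieves and orders existing documents; it can't hallucinate an answer the way an LLM summary can.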
Sure, and we had an internet before the World Wide Web (ARPANET). But that wasn't hugely influential until it was expanded into what's now the Internet, which in turn evolved into the World Wide Web after 20-ish years. Each step was a pretty monumental change, and each built on concepts from before.
LLMs are no different. Yes, they're built on older tech, but that doesn't change the fact that they're a monumental shift from what we had before.
Let's look at access to information and misinformation. The process was something like this:
1. Physical encyclopedias, newspapers, and libraries
2. The early internet
3. Search engines surfacing high-quality results
4. SEO nonsense and misinformation gaming those search engines
5. LLMs answering questions directly
6. LLM-generated nonsense flooding everything

We're in the transition from 5 to 6, which is similar to the transition from 3 to 4. I'm old enough to have seen each of these transitions.
The way people interact with the world is fundamentally different now than it was before LLMs came out, just like the transition from offline to online computing. And just like people navigated the transition to SEO nonsense, people need to navigate the transition to LLM nonsense. It's quite literally a paradigm shift.
Enshittification is a paradigm shift, but not one we associate with the birth of the internet.
On to your list. Why does misinformation only appear after the birth of the internet? Was yellow journalism just a historical outlier?
What you're witnessing is the "Red Queen hypothesis". LLMs have revolutionized the scam industry and step 7 is an AI arms race against and with misinformation.
It certainly existed before. Physical encyclopedias and newspapers weren't perfect, as they frequently followed the propaganda line.
My point is that a lot of people seem to assume that "the internet" is somewhat trustworthy, which is a bit bizarre. I guess there's a fallacy that if something is untrustworthy it won't get attention, when in reality things get attention because they're popular, by some definition of "popular" (i.e. what a lot of users want to see, what the platform wants users to see, etc.).
Well yeah, every technological innovation will be used for good and ill. The Internet gave a lot of people a voice who didn't have it before, and sometimes that was good (really helpful communities) and sometimes that was bad (scam sites, misinformation, etc).
My point is that AI is a massive step. It can massively increase certain types of productivity, and it can also massively increase the effectiveness of scams and misinformation. Whichever way you look at it, it's immensely impactful.