this post was submitted on 04 Aug 2025
489 points (91.9% liked)
Political Memes
I kinda like AI.
Liking "AI" is kind of a modern-day equivalent of liking enormous, gas-guzzling pickup trucks. Sure, it takes you from point A to point B and can tow shit, but there are plenty of more human-friendly alternatives that don't make you look like a giant douche.
It takes less power to run my local model than it does to play Baldur's Gate 3, but I haven't seen anybody shaming people for playing games. Not every LLM is a giant wasteful cloud provider, many are open source and self hosted.
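The comparison above is easy to sanity-check with back-of-the-envelope arithmetic. This is just a sketch: every wattage and duration figure below is an illustrative assumption, not a measurement of any particular PC or model.

```python
# Back-of-the-envelope energy comparison: local LLM chat vs. a gaming
# session. All wattage/duration figures are assumptions for illustration.

def energy_wh(watts: float, hours: float) -> float:
    """Energy in watt-hours for a device drawing `watts` for `hours`."""
    return watts * hours

# Assumption: the GPU spikes to ~250 W while generating tokens, but a
# chat session is mostly idle -- say 5 minutes of actual generation
# per hour of use.
llm_session = energy_wh(250, 5 / 60)

# Assumption: a demanding game pins the same GPU near full load for
# the whole hour.
gaming_session = energy_wh(250, 1.0)

print(f"LLM session: {llm_session:.1f} Wh, gaming session: {gaming_session:.1f} Wh")
```

Under these (made-up) assumptions, an hour of occasional local inference lands around 21 Wh versus 250 Wh for an hour of sustained gaming, which is the shape of the claim being made.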
It's kind of like saying all vehicles are gas guzzling enormous pickup trucks and therefore nobody should travel anywhere. Self hosting on a PC you already own is more like riding a bike in this metaphor.
Show me the percentage of AI prompts done on local models, and if it's more than a rounding error, I'll eat my hat.
Also, was your local model trained locally?
What percentage of trips are done on a bicycle?
I'm working on software to help more people do it, but I fear that anti-AI sentiment has lost focus on the actual problem. Local models are super useful for assistance with code, writing, and all sorts of general tasks. I've been working on a tool that lets you tell the computer what you want in plain language, and it generates the corresponding shell command along with an explanation of how it works.
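A tool like the one described could be sketched roughly as below. This is a hypothetical illustration, not the commenter's actual software: `run_model` is a stand-in for whatever local inference backend you use (llama.cpp, Ollama, etc.), and the JSON reply format is an assumption of this sketch.

```python
import json

# Hypothetical sketch: ask a local model to turn a plain-language
# request into a single shell command plus an explanation, returned
# as JSON so it can be parsed reliably.

SYSTEM_PROMPT = (
    "You translate plain-language requests into a single shell command. "
    'Reply with JSON only: {"command": "...", "explanation": "..."}'
)

def build_messages(request: str) -> list[dict]:
    """Chat-style messages to send to a local model backend."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": request},
    ]

def parse_reply(reply: str) -> tuple[str, str]:
    """Extract (command, explanation) from the model's JSON reply."""
    data = json.loads(reply)
    return data["command"], data["explanation"]

# Example with a canned reply, since no model backend is running here:
reply = (
    '{"command": "du -sh *", '
    '"explanation": "Shows the size of each item in the current directory."}'
)
cmd, why = parse_reply(reply)
print(cmd)  # du -sh *
```

The key design point is showing the generated command and its explanation to the user for review rather than executing it automatically.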
Going back to my analogy, it doesn't matter what you do with your special case of LLM tech. What matters is that most people are using it in a destructive way. People don't care if you use your massive pickup truck to cure cancer; they see a dumb pickup truck and assume you voted for Trump. So I'm pretty sure my analogy stands.
And the answer for the training question is a "No", I presume.
Yes, I agree, but I think it does matter where we go from here. We could say all vehicles are bad, or we could focus on the source of the problem. Corporate AI is what's using all the electricity and water, it's what's creating the worst issues.
Most open source models don't need additional training, they're already plenty good for most plain language tasks and the weights are all free to use. Why would I waste power doing my own training when the public options are perfectly adequate?
Do you know how much energy, and from what sources, was used to train your model?
No, though I'm not really sure how it makes a difference. If I used a different model that was made using less resources what would be improved? Both already exist, using one over the other would not save any energy.
Edit: To go back to the BG3 comparison, I don't know how many resources were used to make the game, or what sources the developers pulled from. I play it because it's fun and another person enjoying it doesn't cost the world anything (except a little electricity)
That's why it's a huge, existential threat and may doom us all.
It is not. It's a spiel that LLM companies push to make the tech appear more powerful than it actually is.
If you think the perceived threat of these things is some kind of super-AI that takes over the world, you have fallen for the spiel as well. That's a huge distraction from the real threat: LLMs are replacing search engines, fact-checking, and user research on any topic entirely; they're addictive and growing in popularity; and their manufacturers can use them to push narratives and propaganda with an effectiveness we've never imagined.
That has been replaced by Facebook years ago. See: Brexit.
Not many people care about facts. Those who do, will not be stopped by LLMs.
I'm not sure what you're saying here. Yes, social media is already being used to do this, and LLMs will be used to do the same thing far more effectively. If you don't think it's going to have a massive impact on the world and make everything worse to a disastrous degree, you're not paying attention, and I almost envy you.
It will absolutely make things worse.
It does not pose a potential danger to humanity, as some put it.
As bad as the AI hype is, the anti-AI hype is equally delusional. Don't get sucked in either direction. It will be far more disruptive and dangerous on a societal scale than any technological paradigm we've dealt with, and could very well lead to global catastrophe. Just not the way the sci-fi authors dream of.
You used the words "existential threat".
And I stand by it: the threat of AI tools disrupting societies and influencing policy and global narratives is going to be so massively divisive that it could lead to large-scale wars, even nuclear wars in the worst cases, and to widespread economic disruption, which will teach us all pretty damn fast that we shouldn't have ignored threats to the global supply chain.