this post was submitted on 12 Dec 2025
1146 points (95.4% liked)

No Stupid Questions

44477 readers

No such thing. Ask away!

!nostupidquestions is a community dedicated to being helpful and answering each other's questions on various topics.

The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:

Rules


Rule 1- All posts must be legitimate questions. All post titles must include a question.

All posts must be legitimate questions, and all post titles must include a question. Questions that are joke or trolling questions, memes, song lyrics as title, etc. are not allowed here. See Rule 6 for all exceptions.



Rule 2- Your question subject cannot be illegal or NSFW material.

Your question subject cannot be illegal or NSFW material. You will be warned first, banned second.



Rule 3- Do not seek mental, medical, or professional help here.

Do not seek mental, medical, or professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.



Rule 4- No self promotion or upvote-farming of any kind.

That's it.



Rule 5- No baiting or sealioning or promoting an agenda.

Questions which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.



Rule 6- Regarding META posts and joke questions.

Provided it is about the community itself, you may post non-question posts using the [META] tag on your post title.

On Fridays, you are allowed to post meme and troll questions, on the condition that they're in text format only and conform to our other rules. These posts MUST include the [NSQ Friday] tag in their title.

If you post a serious question on Friday and are looking only for legitimate answers, then please include the [Serious] tag on your post. Irrelevant replies will then be removed by moderators.



Rule 7- You can't intentionally annoy, mock, or harass other members.

If you intentionally annoy, mock, harass, or discriminate against any individual member, you will be removed.

Likewise, if you are a member, sympathiser, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you have been provably vocal about your hate, then you will be banned on sight.



Rule 8- All comments should try to stay relevant to their parent content.



Rule 9- Reposts from other platforms are not allowed.

Let everyone have their own content.



Rule 10- The majority of bots aren't allowed to participate here. This includes using AI responses and summaries.



Credits

Our breathtaking icon was bestowed upon us by @Cevilia!

The greatest banner of all time: by @TheOneWithTheHair!

founded 2 years ago

"garbage account"

top 50 comments
[–] yesman@lemmy.world 82 points 1 day ago (7 children)

AI doesn't exist. This is like asking an atheist why they hate god.

If you're talking about LLMs and the like, they're unpopular on Lemmy because tech people are overrepresented here, and tech people understand how these technologies work and why all the hype isn't just false, but malicious.

[–] cecilkorik@lemmy.ca 23 points 1 day ago (1 children)

You mean a bunch of advertising and media companies that control and gatekeep the news are hyping something that's making them trillions of dollars? That seems... so unbelievable!

[–] panda_abyss@lemmy.ca 15 points 23 hours ago (1 children)

Today my boss asked me why Gemini suggested made up columns when he was trying to query our database. I just told him it also makes up fake tables.

This shit is half-baked and really never should have been foisted on the public.

[–] msage@programming.dev 5 points 13 hours ago

This shit has cost the investors untold money, and it was promised to revolutionize the world, so by golly it will, by force if it must.

[–] RickyRigatoni@retrolemmy.com 13 points 21 hours ago (6 children)

if ai doesn't exist then who is playing against me when i set the other civilizations to ai in rise of nations

[–] I_Jedi@lemmy.today 9 points 18 hours ago (1 children)

Oh, that's me. Microsoft gave me backdoor access to your computer so I can play against you.

[–] RickyRigatoni@retrolemmy.com 5 points 18 hours ago

using linux doesn't protect me when i still run microsoft games on it i guess. it's poetic in a way.

[–] Perspectivist@feddit.uk 10 points 13 hours ago

AI has existed for decades. The chessbot on Atari is an AI.

What doesn't exist is AGI, but that's not synonymous with AI. Most people just don't know the right terms here and lump it all together as if it were all one thing.

If one expects a large language model designed to generate natural-sounding language to be generally intelligent like a sci-fi AI assistant, then no wonder they find it lacking. They're expecting autonomous driving from cruise control.

[–] A_norny_mousse@feddit.org 6 points 16 hours ago (1 children)

It's not only tech people who "hate" AI.


signed, half a tech person

[–] Asafum@feddit.nl 40 points 1 day ago (2 children)

It's a masterclass in externalities: local communities face the consequences of the data centers' resource consumption.

Job loss: we're all told the "knowledge market" is where you deserve a good salary. AI threatens a lot of that work. Blue-collar factory work will go as soon as AI can be properly integrated with nimble robotics that aren't quite there yet. With how disgusting our society is (in the US), there will be no consideration for the people who can no longer find work.

Wealth built on theft and gambling: people like Altman become fabulously wealthy with a system that makes zero profit and has been built on the stolen works of millions of people.

Capacity to manipulate: we've had enough trouble with bad faith actors on the Internet with real people. Now we're going to have an endless army of "intelligent" actors that will be weaponized against populations worldwide to secure the position of the ultra wealthy over all of our governments.

I didn't have much of a positive outlook on our future before, but now I REALLY don't, because of this...

[–] lena@gregtech.eu 4 points 12 hours ago

I think the job loss part is a capitalism problem, not just an AI problem.

If we automate work, the people should get the benefits of the automation, they shouldn't have to be worried that they won't have a job.

[–] Gorilladrums@lemmy.world 30 points 17 hours ago* (last edited 7 hours ago) (5 children)

I don't hate AI. LLMs are incredibly powerful tools with an incredibly wide range of uses. The technology itself is something that's very exciting and promising.

What I do hate is how they're being used by large corporations. A small handful of big tech companies (Google, Microsoft, Facebook, OpenAI, etc) decided to take this technology and pursue it in the greediest ways possible:

  1. They took open source code, built on top of it, and closed it off so they could sell it

  2. They scraped all the data on the internet without consent and used it to train their models

  3. They made their models generate stuff based on copyrighted works without permission or giving credit, thus basically stealing the content

  4. But that wasn't enough for them, so they decided to train their models on every interaction you have with their LLM services, so all your private conversations are stored and recycled even if you don't want that to happen

  5. They use the data from the conversations that you've had with the chatbots to build customer profiles about you that they sell to advertisers so they could send you hordes of personalized ads

  6. They started integrating their LLMs into their other products as much as they could so they could artificially increase their stock prices

  7. They aggressively campaign for other companies to buy and integrate their models so both parties could artificially increase their stock prices

  8. In order to meet their artificially induced demand, they're sucking the life out of the electricity grid, which is screwing over everybody else

  9. They're also taking over the hardware industry and killing off consumer electronics since it's more profitable for manufacturers to sell to AI companies than to consumers

  10. They're openly bribing, lobbying, and campaigning governments to give them grants, tax breaks, and keep regulations at a minimum so they could do whatever they want and have society pay for the privilege

  11. They're using these LLMs to cut as many jobs as possible so they could penny-pinch just a little more, hence the massive waves of recent layoffs. This is being done even when the LLM replacements perform far worse than humans.

  12. All of this is being done with zero regard to the environmental damage caused by them with their monstrous data centers and electricity consumption

  13. All of this is being done with zero regard to the harmful impacts caused to people and society. These LLMs frequently lie and spread misinformation, they feed into delusions and bad habits of mentally unwell people, and they're causing great damage to schools since students could use these models to easily cheat and nothing can be done about it

When you put all of this together, then it's easy to understand why people hate AI. This is what people oppose, and rightfully so. These corporations created a massive bubble and put our economy at risk of a major recession, they're destabilizing our infrastructure, destroying our environment, they're corrupting our government, they're forcing tens of thousands of people into dire financial situations by laying them off, they're eroding our privacy and rights, and they're harming our mental health... and for what? I'll tell you, all of this is done so a few greedy billionaires could squeeze a few more dollars out of everything so they could buy their 5th yacht, 9th private jet, or 7th McMansion. Fuck them all.

[–] pulsewidth@lemmy.world 9 points 15 hours ago (1 children)

When people say "I fucking hate AI", 99% of the time they mean "I fucking hate AI™©®". They don't mean the technology behind it.

To add to your good points, I'm a CS grad that studied neural networks and machine learning years back, and every time I read some idiot claiming something like "this scientific breakthrough has got scientists wondering if we're on the cusp of creating a new species of superintelligence" or "90% of jobs will be obsolete in five years" it annoys me, because it's not real, and it's always someone selling something. Today's AI is the same tech they've been working on for 30+ years and incrementally building upon, but as Moore's Law has marched on, we now have the storage pools and computing power to run very advanced models and networks. There is no magic breakthrough, just hype.

The recent advancements are all driven by the $1500 billion spent on grabbing as many resources as they could - all because some idiots convinced them it's the next gold rush. What has that $1500 bil got us? Machines that can answer general questions correctly around 40% of the time, plagiarize art for memes, create shallow corporate content that nobody wants, and write some half-decent code cobbled together from StackOverflow and public GitHub repos.

What a fucking waste of resources.

What's real is the social impacts, the educational impacts, the environmental impacts, the effect on artists and others who have had their work stolen for training, the usability of the Internet (search is fucked now), and what will be very real soon is the global recession/depression it causes as businesses realize more and more that it's not worth the cost to implement or maintain (in all but a very few scenarios).

[–] devedeset@lemmy.zip 5 points 15 hours ago* (last edited 15 hours ago) (3 children)

I'm really split on it. I'm not a 10x "rockstar" programmer, but I'm a good programmer. I've always worked at small companies with small teams. I can figure out how to parse requirements, choose libraries/architecture/patterns, and develop apps that work.

Using Copilot has sped my work up by a huge amount. I do have 10 YoE from before Copilot existed. I can use it to help write good code much faster. The result may not be perfect, but it wouldn't have been perfect without it either. The thing is, I have enough experience to know when it is leading me down the wrong path, and that still happens pretty often. What it helps with is implementing common patterns, especially with common libraries. It basically automates the "google the library docs/stackoverflow and use code there as a starting point" aspect of programming. (edit: it also helps a lot with logging, writing tests, and rewriting existing code as long as it isn't too wacky, and even then you really need to understand the existing code to avoid a mess of bugs)

But yeah search is completely fucked now. I don't know for sure but I would guess stackoverflow use is way down. It does feel like many people are being pigeonholed into using the LLM tools because they are the only things that sort of work. There's also the vibe coding phenomenon where people without experience will just YOLO out pure tech debt, especially with the latest and greatest languages/libraries/etc where the LLMs don't work very well because there isn't enough data.

[–] EndlessNightmare@reddthat.com 26 points 15 hours ago* (last edited 15 hours ago) (6 children)

No one has convinced me that it's good for the general public. It seems like it will benefit corpos and governments, to the detriment of the general public.

It's just another thing they'll use to fuck over the average person.

[–] rumba@lemmy.zip 7 points 12 hours ago

It COULD help the average person, but we'll always fuck it up before it gets to that point.

You could build an app that teaches. Pick the curriculum, pick the tests, pick the training material for the users, and use the LLM to intermediate between your courseware and the end users.

LLMs are generally very good at explaining specific questions they have a lot of training data on, and they're pretty good at dumbing it down when necessary.

Imagine an open-source, free college course where everyone gets as much time as they need and isn't embarrassed to ask whatever questions come to their minds in the middle of the lesson. Imagine more advanced students in a class not being held back because some slower students didn't understand a reading assignment. It wouldn't be hard to out-teach an average community college class.

But free college that doesn't need a shit ton of tax money? Who profits off that? We can't possibly make that.

How about a code tool that doesn't try to write your code for you, but watches over what you're doing and points out possible problems? What if you strapped it onto a compiler and got warnings that you have dangerous attack vectors left open, or notes where buffer overflows aren't checked?
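For example (a purely hypothetical snippet, not output from any real tool), this is the kind of unchecked copy such a watcher could flag, along with the bounded fix it could suggest:

```c
#include <stdio.h>
#include <string.h>

/* `name` is 16 bytes, but nothing limits how much of `input`
 * gets copied into it - the classic unchecked buffer overflow. */
void greet(const char *input) {
    char name[16];
    strcpy(name, input);   /* what the tool would warn about */
    printf("Hello, %s\n", name);
}

/* The bounded copy it could suggest instead. */
void greet_safe(const char *input) {
    char name[16];
    snprintf(name, sizeof name, "%s", input);   /* truncates instead of overflowing */
    printf("Hello, %s\n", name);
}

int main(void) {
    greet_safe("a string that is much longer than sixteen characters");
    return 0;
}
```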

Reading medical images is a pretty decisive win. The machine going back behind the doctor and pointing out what it sees based on the history of thousands of patient images is not bad. Worst case the doctors get a little less good at doing it unassisted, but the machines don't get tired and usually don't have bad days.

The problem is capitalism. You can't have anything good for free because it's worth money. And we've put ALL the money into the tech and investors will DEMAND returns.

[–] SomeRandomNoob@discuss.tchncs.de 16 points 16 hours ago

AI

  • costs jobs
  • consumes enormous amounts of energy, which is bad for the environment and for utility costs
  • makes the rich richer
  • still has the hallucination problem
  • makes generating fake news easier than ever
  • uses copyrighted material for training without permission
  • is used to fill the internet with AI slop
  • is trained by big companies with their own political agenda
  • fucks up the RAM/SSD market
  • makes people dumber
  • isn’t even AI
[–] A_A@lemmy.world 15 points 1 day ago (4 children)

garbage account // garbage post ?

[–] melsaskca@lemmy.ca 14 points 10 hours ago

Anything the billionaire cabal pushes on us I automatically hate. Don't even need to know what it is. If they are pushing it you know there is some nefarious shit under the hood.

[–] Strider@lemmy.world 13 points 1 day ago (4 children)

It's built on 'slave' labor and illegal use of content, and it additionally uses unbelievable amounts of power, so the environmental concerns go right out of the window at a time when we should be doing everything to avoid exactly that.

Also, AI, even if it's the currently established term, has absolutely nothing to do with either intelligence or sentience, but it is being sold as AGI (overpromised).

This has caused huge masses of investment to pile up, which will pop at some point, causing all of us massive issues due to the missteps of a few.

[–] Manjushri@piefed.social 11 points 1 day ago (1 children)

...additionally using unbelievable amounts of power so the environmental concerns go right out of the window at a time where we should do everything to not do that.

Don't forget that the enormous energy usage is driving up energy costs for absolutely everyone.

Residential retail electricity prices in September were up 7.4%, to about 18 cents per kilowatt hour, according to the most recent data from the Energy Information Administration.

That's on a national basis too. If you happen to live in an area with a lot of data centers, your energy costs have probably risen more than that.

[–] Mwa@thelemmy.club 12 points 11 hours ago* (last edited 11 hours ago) (1 children)

My take on the problems of AI, from what I can remember:

  • Destroying the environment / draining water faster (is that only ChatGPT?)
  • Increasing RAM prices
  • Costs jobs
  • Training on everything without respecting the original license/creators
  • Does not respect robots.txt
  • Sometimes not a reliable source of information
  • Destroying creativity / oversaturating the market
  • Not reliable enough to rely on as your assistant due to hallucination, so it's better to research the answers the AI gives you. (I hate how sometimes I feel like it's easier, safer, less ethical, and faster to ask AI than to look up the answer, or maybe that's just me.)
  • Censoring topics (is that only Palestine genocide topics, and is that also only ChatGPT?)

Of course I'm not stopping anyone from using AI, but the ethics are the elephant in the room with AI, as listed.

[–] ZILtoid1991@lemmy.world 6 points 10 hours ago

Sometimes not a reliable source of information

Let me fix that!

Usually not a reliable source of information

It's just good enough for some shallow searches, especially with Google and internet search in general being poisoned by SEOed garbage floating to the top, which nowadays is SEOed, AI-generated slop. I often have to go to sites that are old enough that I can be sure they're not AI generated, or that are vetted as not being made by some AI bro as a side hustle towards their first million. One Linux article I can no longer find borked a Linux installation of mine; I had to reinstall my Raspberry Pi, and the then-new installer really didn't want to let me set a different region and language at first, so I had to switch back to English after finishing the setup.

[–] givesomefucks@lemmy.world 11 points 1 day ago* (last edited 1 day ago)

Why do you keep making and deleting so many accounts?

If you kept the same one, a lot of people would block you and stop downvoting every post you make from a new account.

[–] drhodl@lemmy.world 11 points 3 hours ago

I don't hate AI. I just hate the untrustworthy rich fucks who are forcing it down everyone's throats.

[–] VirtuePacket@lemmy.zip 10 points 8 hours ago* (last edited 8 hours ago)

I don't hate AI. However, I:

  • Am concerned about the labor displacement it may cause--though I am skeptical it will be as widespread as currently feared. I think many of the companies that have cut workers already will end up regretting it in the medium term.
  • Am convinced that the massive, circular investment in this technology has produced an economic bubble that will burst in the coming years. Because we have so little insight into private credit markets, we don't know to what degree retail and commercial banks will be exposed, and thus can't anticipate the potential damage to the broader economy.
  • Am fatigued (but unsurprised) that the US government is not considering thoughtful regulation that anticipates the disruption that AI is likely to cause.
  • Am cognizant of its current limitations.
  • Do not currently believe that AGI is imminent or even guaranteed. I think elites peddling this notion may be captured by highly motivated reasoning. In some cases, it seems like a bit of a belief system.
[–] umbraroze@slrpnk.net 10 points 7 hours ago (1 children)

I don't hate AI (specifically LLMs and image diffusion thingy) as a technology. I don't hate people who use AI (most of the time).

I do hate almost every part of AI business, though. Most of the AI stuff is hyped by the most useless "luminaries" of the tech sector who know a good profitable grift when they see one. They have zero regard for the legal and social and environmental implications of their work. They don't give a damn about the problems they are causing.

And that's the great tragedy, really: It's a whole lot of interesting technology with a lot of great potential applications. And the industry is getting run into the ground by idiots, while chasing an economic bubble that's going to end disastrously. It's going to end up with a tech cycle kind of similar to nuclear power: a few prominent disasters, a whole lot of public resentment and backlash, and it'll take decades until we can start having sensible conversations about it again. If only we had had a little bit of moderation to begin with!

The only upside the AI business has had is that it has at least pretended to give a damn about open source and open access to data, but at this point it's painfully obvious that to AI companies this is just a smoke screen to avoid getting sued over copyright concerns - they'd lock up everything as proprietary trade secrets if they could have their way.

As a software developer, I was at first super excited about genAI stuff because it obviously cut down the time needed to consult references. Now, a lot of tech bosses tell coders to use AI tools even in cases where that's making everyone less productive.

As an artist and a writer, I find it incredibly sad that genAI didn't hit the brakes a few years ago. I've been saying this for decades: I love a good computerised bullshit generator. Algorithmically generated nonsense is interesting. Great source of inspiration for your ossified brain cells, fertile ground for improvement. Now, however, the AI-generated stuff pretends to be as human-like as possible, and it's doing a terrible job at it. Tech bros are half-assedly marketing it as a "tool" for artists, while the studio bosses who buy the tech chuckle at that and know they've found a replacement for the artists. (Want to make genAI tools for artists? Keep the output patently unusable out of the box.)

load more comments (1 replies)
[–] Electricd@lemmybefree.net 10 points 7 hours ago (2 children)

what the fuck is this stereotype

[–] boonhet@sopuli.xyz 10 points 6 hours ago (9 children)

It's probably from a redditor who is themselves probably white and male. Y'know, self-deprecating humor is pretty common among redditors, just like it is here.

[–] Akasazh@feddit.nl 10 points 12 hours ago (2 children)

Is that Brian 'Brian Kibler' Kibler of Brian Kibler Gaming?

[–] ZILtoid1991@lemmy.world 10 points 10 hours ago

AI only looks good if you're an outsider to the profession. The moment you're even an amateur, you'll see all of its faults. It's just a plagiarizing machine with a built-in contextual search function (is there any AI model that runs as an actual contextual search instead of a wannabe assistant with a flattering personality?) that can make some crappy-looking and weirdly specific clip art, stock music with funny-sounding gimmicks, and buggy code you'd be better off plagiarizing from public-domain-licensed code on GitHub.

[–] Sunflier@lemmy.world 10 points 4 hours ago* (last edited 3 hours ago)

I hate AI because it's replacing jobs (a.k.a. salaries) without us having a social safety net to make it painless.

We've replaced you with AI

-CEO

AI is replacing most of the jobs, and there aren't enough open positions to be filled by the now unemployed.

-Economists

I need food stamps, medical care, housing assistance, and unemployment.

-Me

No! Get a job you lazy welfare queen!

-Politicians

Where? There aren't any.

-Me

Not my problem! Now, excuse me while I funnel more money to my donors.

-The same politicians

[–] shalafi@lemmy.world 8 points 1 day ago

It's not so much hate for AI in and of itself; my ire is at the resource use, for one. We were already draining aquifers that took thousands of years to fill, and now we're burning through even more for datacenters (DCs).

To top it off, America's western deserts are the best place for DCs. No natural disasters, stable and predictable weather, tectonically inactive. Every time I've had to pick a primary or backup DC, I'd hit one in Las Vegas or somewhere out west. (This experience was pre-AI.)

These DCs are burning power and causing higher bills for consumers, which is just fucking obscene. States should legislate that DCs have to bring at least some of their own dedicated renewables, and pay a premium to the power company for the extra stress and maintenance on the grid. These costs should not touch customers, residential or business.

Maybe even worse is the economic aspect. Have a look at the current Buffett Index, the ratio of the total United States stock market to GDP. We topped 200% for the first time, ever. For comparison, the Great Depression and Great Recession were around 120-130%. This "extra" stock market valuation is all due to AI speculation.

So for all the other whining lemmy does about AI, it's the ecosystem and economic disasters it's creating that we'll all remember when the bubble pops.

[–] xd0v3rdoz3@lemmy.wtf 8 points 16 hours ago (1 children)

Have you seen the RAM prices lately? 🫩

[–] pulsewidth@lemmy.world 6 points 15 hours ago

.. And NVMe SSDs, and large HDDs.

I bought a Crucial P310 NVMe 2TB card barely three weeks ago for the already-inflated price of $132.58 (not on sale).

The exact same card from the exact same retailer is now $225.13.

70% increase in 21 days.

That's the average amount of inflation we'd have in eighteen years.
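A quick back-of-the-envelope check of that comparison, assuming an average inflation rate of roughly 3% per year (my assumption; pick your own figure):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double before = 132.58, after = 225.13;
    double growth = after / before;            /* ~1.70, i.e. about a 70% increase */
    double years  = log(growth) / log(1.03);   /* ~18 years of compounding at 3%/yr */
    printf("increase: %.0f%%, roughly %.0f years of 3%% inflation\n",
           (growth - 1.0) * 100.0, years);
    return 0;
}
```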

[–] Smoogs@lemmy.world 7 points 15 hours ago* (last edited 15 hours ago)

Huh… so it’s getting its advice scraped from Reddit. Now it makes sense where it’s getting the idea of telling teens to die by suicide. That place is a shithole where intelligence goes to die.

[–] MehBlah@lemmy.world 7 points 8 hours ago

I'm still waiting for it to appear, and then let's ask them how they like it. It's not like the garbage we have now is really AI.

[–] Lemminary@lemmy.world 7 points 17 hours ago (2 children)

Why are we asking loaded questions as a first post in a new account?

[–] Korhaka@sopuli.xyz 6 points 5 hours ago (1 children)

I don't hate it, I hate how companies are forcing it in regardless of how stupid it is for the task.

[–] it_depends_man@lemmy.world 6 points 1 day ago

Not sure about "hate", but it's clearly a bubble and all the billions are going into AI and not things that could prevent an economic downturn.

Even if you're not opposed to it for copyright or environmental or social reasons, AI is currently wrecking markets and finances for the next decade.

[–] dyslexicdainbroner@lemmy.world 5 points 23 hours ago (1 children)
[–] RickyRigatoni@retrolemmy.com 4 points 21 hours ago (1 children)

We should program bots to feel pain when they get blocked.

[–] Enzy@feddit.nu 5 points 10 hours ago* (last edited 10 hours ago)

I don't hate it, just how it's being used.

Then again, proper use of AI, if even achievable, would most likely result in disaster in some way.

The way "AI" is marketed today isn't real AI; it's just a lazy source-copy-pasting bot made for our convenience.

[–] MoribundMurdoch@lemmy.world 5 points 11 hours ago (4 children)

Why is being confidently wrong considered an exclusive or primary trait of white males, and why would anyone attribute this behavior primarily to their gender or race?

[–] wieson@feddit.org 7 points 10 hours ago (1 children)

Life experience. In my experience, men are, generally speaking, more confidently incorrect. On the internet I see this trait most often with US Americans, regardless of skin colour. In real life, I see it often with consumers of tabloids or Russian news.

[–] MoribundMurdoch@lemmy.world 4 points 10 hours ago (1 children)

As an Estonian, I’d just say that Americans have a heavy presence on the internet, that’s all. In my opinion, it’s a human thing, not a matter of race, nationality, or gender. If there were a study showing that one of those groups had a higher prevalence of that behavior, I’d expect it to change over time, just as women and men in South Korea have recently shifted their voting patterns. In other words, the behavior could be tied to temporary cultural/other factors.
