this post was submitted on 24 Nov 2025
199 points (95.0% liked)

Ask Lemmy


I want to let people know why I'm strictly against using AI in everything I do, without sounding like an 'AI vegan', especially in front of those who are genuinely ready to listen and follow suit.

Any sources I find to cite for my viewpoint are either so bland they could pass for AI-generated themselves or filled with the author's extremist views. I want to explain the situation in an objective manner that is simple to understand and also alarming enough for them to take action.

top 50 comments
[–] givesomefucks@lemmy.world 61 points 3 days ago* (last edited 3 days ago) (18 children)

If it's real life, just talk to them.

If it's online, especially here on Lemmy, there are a lot of AI brain-rotted people who will just copy/paste your comments into a chatbot, and you're wasting your time.

They also tend to follow you around.

They've lost so much of their brains to AI that even valid criticism of it feels like a personal insult to them.

[–] Blemgo@lemmy.world 44 points 3 days ago* (last edited 3 days ago) (4 children)

Maybe trying to be objective is the wrong choice here? After all, it might sound preachy to those who are ignorant of the dangers of AI. Instead, it could be better to stay subjective in the hope of triggering self-reflection.

Here are some arguments I would use for my own personal 'defense':

  • I like to do the work myself because the challenge of doing it on my own is part of the fun, especially when I finally get that 'Eureka!' moment after a particularly tough problem. When I use AI, it feels half-hearted because I've just handed the work off to someone else, which doesn't sit right with me.
  • When I work without AI, I tend to stumble over things that aren't really relevant to what I'm doing, but are still fun to learn about and might come in handy some other time. With AI, I'm way too focused on the end result to even notice that stuff, which makes the work feel even more annoying.
  • When I decide to give up, or realize I can't be arsed with it, I usually seek out communities or professionals. That way it's either done professionally or I get a better sense of community, and either way I feel like I'm supporting someone. With AI, I don't get that feeling; I only feel either inferior for not coming up with a result as fast as the AI does, or frustrated because it spews out bullshit or misses the point I'm aiming for.
[–] enchantedgoldapple@sopuli.xyz 11 points 3 days ago* (last edited 3 days ago) (1 children)

This is a brilliant idea! I was wondering whether speaking subjectively would be detrimental to my point, but having it explained this way is so much better. I think the key point here is not to berate the other person for using AI while giving this explanation.

[–] Blemgo@lemmy.world 9 points 2 days ago

It goes a bit further than just not berating. People often get defensive when you criticise something they like, which makes it harder to argue because the other side suddenly treats the discussion as a fight. By saying "it's not for me" in a rather roundabout way, you shift the focus away from "is it good or bad" and towards whether the other person can empathise with your reasoning. That, in turn, lets them reflect your view onto themselves and maybe realize they hadn't noticed something about their own usage of and feelings about AI that you already had.

[–] canofcam@lemmy.world 30 points 2 days ago (2 children)

A discussion in good faith means treating the person you are speaking to with respect. It means not having ulterior motives. If you are having the discussion with the explicit purpose of changing their minds or, in your words, "alarming them to take action", then that is by default a bad-faith discussion.

If you want to discuss with a pro-AI person in good faith, you HAVE to be open to changing your own mind. That is the whole point of a good-faith discussion. Instead, you already believe you are correct, and you want to enter these discussions with objective ammunition to defeat somebody.

How do you actually discuss in good faith? You ask for their opinions and are open to them, then you share your own in a respectful manner. You aren't trying to 'win'; you are just trying to understand and, in turn, help others understand your own POV.

[–] Zoomboingding@lemmy.world 7 points 2 days ago (1 children)

Once you realize you can change your opinion about something after you learn more about it, it's like a superpower. So many people's only goal is proving themselves right or safeguarding their ego.

It's okay to admit a mistake. It's normal to be wrong about things.

[–] krooklochurm@lemmy.ca 6 points 1 day ago* (last edited 1 day ago) (5 children)

Chiming in here:

Most of the arguments against ai - the most common ones being plagiarism and the ecological impact - are not things the people making those arguments give a flying fuck about in any other area.

Having issues with the material the model is trained on isn't an issue with ai - it's an issue with unethical training practices, copyright law, and capitalism. These are all valid complaints, by the way, but they have nothing to do with the underlying technology, merely with the way it's been developed.

For the ecological side of things, sure, ai uses a lot of power. Lots of data centers. So does the internet. Do you use that? So does the stock market. Do you use that? So do cars. Do you drive?

I've never heard anyone say "we need less data centers" until ai came along. What, all the other data centers are totally fine but the ones being used for ai are evil? If you have an issue with the drastically increased power consumption for ai you should be able to argue a stance that is inclusive of all data centers - assuming it's something you give a fuck about. Which you don't.

If a model, once trained, is being used entirely locally on someone's personal pc - do you have an issue with the ecological footprint of that? The power has been used. The model is trained.

It's absolutely valid to have an issue with the increased power consumption used to train ai models and everything else, but these are all issues with the HOW, and not the ontological arguments against the tech that people think they are.

It doesn't make any of these criticisms invalid, but if you refuse to understand the nuance at work then you aren't arguing in good faith.

If you enslave children to build a house, then the issue isn't that you're building a house, and it doesn't mean houses are evil; the issue is that YOU'RE ENSLAVING CHILDREN.

Like any complicated topic, there's nuance to it, and anyone who refuses to engage with that and instead relies on dogmatic thinking isn't being intellectually honest.

[–] frezik@lemmy.blahaj.zone 9 points 1 day ago (2 children)

I’ve never heard anyone say “we need less data centers” until ai came along. What, all the other data centers are totally fine but the ones being used for ai are evil? If you have an issue with the drastically increased power consumption for ai you should be able to argue a stance that is inclusive of all data centers - assuming it’s something you give a fuck about. Which you don’t.

AI data centers draw substantially more power than regular ones. Nobody was talking about spinning up nuclear reactors or buying out the next several years of turbine manufacturing for non-AI data centers. Hell, Microsoft gave money to a fusion startup to build a reactor; they've already broken ground, but it's far from proven that they can actually produce net power with fusion. They actually think they can supply power by 2028. This is delusion, driven by the impossible goal of reaching AGI with current models.

Your whole post misses the difference in scale involved. GPU power consumption isn't comparable to that of standard web servers at all.
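
To put rough numbers on the scale difference, here is a back-of-the-envelope sketch in Python. Every figure in it is an illustrative assumption (ballpark published numbers, not measurements of any particular data center):

```python
# Rough, illustrative numbers only -- every value here is an assumption.
gpu_watts = 700               # one modern training-class GPU (H100-class TDP)
gpus_per_node = 8             # common training-node configuration
node_overhead_watts = 3000    # CPUs, RAM, networking, cooling share (assumed)
web_server_watts = 400        # an ordinary web/app server (assumed)

training_node_watts = gpu_watts * gpus_per_node + node_overhead_watts
print(training_node_watts / web_server_watts)            # one GPU node ~ 21 plain servers

cluster_nodes = 10_000        # hypothetical large training cluster
print(training_node_watts * cluster_nodes / 1e6, "MW")   # ~86 MW, before cooling losses
```

Even with generous assumptions for the web server, the gap is orders of magnitude once you scale to a full training cluster.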

[–] s@piefed.world 27 points 2 days ago

“It’s a machine made to bullshit. It sounds confident and it’s right enough of the time that it tricks people into not questioning when it is completely wrong and has just wholly made something up to appease the querent.”

[–] Jhex@lemmy.world 21 points 2 days ago (1 children)

I'm just honest about it… "I don't find it useful enough, and I do find it too harmful to the environment and society, to use it."

[–] runner_g@lemmy.blahaj.zone 11 points 2 days ago

And then you spend longer verifying the information it's given you than you would have spent just looking it up to begin with.

[–] FlashMobOfOne@lemmy.world 18 points 2 days ago

Very simple.

It's imprecise, and for your work, you'd like to be sure the work product you're producing is top quality.

[–] MourningDove@lemmy.zip 16 points 2 days ago

Just do what I do and say that you think it's hot garbage that dehumanizes everything and everyone that uses it.

Then go on to not give a shit what they think about it.

[–] iii@mander.xyz 15 points 3 days ago (2 children)

In a way aren't you asking "how can I be an AI vegan, without sounding like an AI vegan"?

It's OK to be an AI vegan if that's what you want. :)

[–] its_kim_love@lemmy.blahaj.zone 16 points 3 days ago* (last edited 3 days ago) (5 children)

Stop trying to make 'AI vegan' work. It's never going to stick. AFAIK this term is less than a week old, and smugly expecting everyone to have already assimilated it is bad enough, but it's a shit descriptor that trades in right-leaning hatred of 'woke', and vegans are just a scapegoat to you.

Explain how AI haters or doubters cross over with Veganism at all as a comparison?

[–] Evkob@lemmy.ca 12 points 3 days ago (2 children)

Explain how AI haters or doubters cross over with Veganism at all as a comparison?

They're both taking a moral stance regarding their consumption despite large swathes of society considering these choices to be morally neutral or even good. I've been vegan for almost a decade and dislike AI, and while I don't think being anti-AI is quite as ostracizing as being vegan, the comparison definitely seems reasonable to me. The behaviour of rabid meat eaters and that of fervent AI supporters is also quite similar.

[–] NFord@piefed.social 6 points 3 days ago (1 children)

Like veganism, abstaining from AI is arguably better for the environment.

[–] HikingVet@lemmy.ca 13 points 3 days ago (7 children)

The fuck is an AI vegan? There isn't meat and AI isn't food.

[–] Beardsley@lemmy.world 15 points 3 days ago (4 children)

Your bed isn't really made for a king or queen.

[–] Triumph@fedia.io 12 points 3 days ago

The fuck it's not.

[–] jjjalljs@ttrpg.network 9 points 3 days ago (2 children)

It seems to mean people who don't consume AI content or use AI tools.

My hypothesis is that it's a term coined by pro-AI people to make AI skeptics sound bad. Vegans are one of the most hated groups of people, so associating people who don't use AI with them is a huge win for pro-AI forces.

Side note: do-gooder derogation ( https://en.wikipedia.org/wiki/Do-gooder_derogation ) is one of the saddest moves you can pull. If you find yourself lashing out at someone because they're doing something good (e.g. biking instead of driving, abstaining from meat), please reevaluate. Sit with your feelings if you have to.

[–] dohpaz42@lemmy.world 8 points 3 days ago

It’s called a euphemism. We all know that a vegan is someone who does not use animal products (e.g. meat, eggs, dairy, leather, etc). By using AI in front of the term vegan, OP intimates that they do not use AI products.

I suspect you’re smart enough to know this, but for some reason you’re being willfully obtuse.

~Then again, maybe not. 🤷‍♂️~

[–] Treczoks@lemmy.world 11 points 2 days ago (12 children)

All current AIs are based on stolen content.

[–] venusaur@lemmy.world 10 points 3 days ago (1 children)

The most reasonable explanation I’ve heard/read is that generative AI is based on stealing content from human creators. Just don’t use the word “slop” and you’ll be good.

[–] captainlezbian@lemmy.world 10 points 2 days ago

I want my creations to be precisely what I intend to create. Generative AI makes it easier to make something, at the expense of building skills and seeing their results.

[–] FaceDeer@fedia.io 10 points 3 days ago

and also alarming enough for them to take action.

Is this really an intent to explain in good faith? Sounds like you're trying to manipulate their opinion and actions rather than simply explaining yourself.

If someone were to tell me that they simply don't want to use generative AI, that they prefer to do writing or drawing by hand and don't want suggestions about how to use various AI tools for it, then I just shrug and say "okay, suit yourself."

[–] solomonschuler@lemmy.zip 9 points 1 day ago (2 children)

I just explained to a friend of mine why I don't use AI. My hatred towards AI stems from people making it seem sentient, the companies' business models, and, of course, privacy.

First off, to clear up any misconception: AI is not a sentient being, it does not know how to think critically, and it's incapable of creating thoughts beyond the data it's trained on. Technically speaking, an LLM is a lossy compression model, which means it takes what is effectively petabytes of information and compresses it down to a mere ~40 GB. When it answers, it doesn't decompress those petabytes of information; it reconstructs a response from what it was trained on.
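
To put the "lossy compression" framing in rough numbers, here is a quick back-of-the-envelope in Python; the petabyte and 40 GB figures come from the comment above and are order-of-magnitude assumptions, not measured values:

```python
# Back-of-the-envelope: how lossy is "petabytes in, ~40 GB of weights out"?
training_text_bytes = 1e15    # ~1 PB of training text (assumed, per the comment)
model_weights_bytes = 40e9    # ~40 GB of weights (the comment's figure)

ratio = training_text_bytes / model_weights_bytes
print(f"~{ratio:,.0f} : 1")   # ~25,000 : 1 -- far too lossy to store sources verbatim
```

At that ratio the model can only retain statistical regularities of the training data, not the data itself, which is the point being made here.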

There are several issues I can think of that make the LLM do poorly at its job. Remember, LLMs are trained exclusively on the internet, and as large as the internet is, it doesn't have everything; your skip-list implementation is probably not going to match the ones on the internet. Assuming you have a logic error in your skip-list implementation and you ask ChatGPT "what's the issue with my codebase", it will notice the code you provided isn't what it was trained on and will actively try to rewrite it, digging you into a deeper rabbit hole than when you began the implementation.

On the other hand, if you ask ChatGPT to derive a truth table from a given sum of minterms, it will never be correct unless the case is heavily documented (e.g. the truth table of an adder/subtractor). This is the simplest example I can give of how these LLMs cannot think critically, cannot recognize patterns, and only regurgitate the information they have been trained on. It will try to produce a solution, but it will always fail.
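
For readers unfamiliar with the task being described: deriving a truth table from a sum of minterms is a purely mechanical computation, which is what makes it a good probe of regurgitation versus reasoning. A minimal Python sketch (the function and the Σm(1, 2, 4, 7) example are my own illustration, not taken from the comment):

```python
from itertools import product

def truth_table_from_minterms(num_vars, minterms):
    """Truth table for a Boolean function given as a sum of minterms.

    num_vars: number of input variables (first variable is the MSB)
    minterms: indices in 0 .. 2**num_vars - 1 where the function is 1
    """
    table = []
    for index, inputs in enumerate(product((0, 1), repeat=num_vars)):
        table.append((inputs, 1 if index in minterms else 0))
    return table

# Example: f(a, b, c) = Σm(1, 2, 4, 7), i.e. the 3-input odd-parity (XOR) function
for inputs, output in truth_table_from_minterms(3, {1, 2, 4, 7}):
    print(*inputs, "|", output)
```

A program following this recipe is right every time; the comment's point is that an LLM prompted with the same question in prose does not reliably do the same.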

This leads me to my first reason for refusing to use LLMs: they unintentionally fabricate a lot of information and treat it as if it's true. When I started using ChatGPT to fix my codebases or to do problems like this, it induced a lot of doubt in the knowledge and intelligence I had gathered over these past years in college.

The second reason I don't like LLMs is the business models of these companies. To reiterate, these tech billionaires build a bubble of delusion and fearmongering to keep their user base. Headlines like "chatGPT-5 is terrifying" or "openAI has fired 70,000 employees over AI improvements" work because people see the title and pour more money into the company, and because employees whose heads are up these tech giants' asses will of course keep working with openAI. It is a fucking money-making loophole for these giants because of how many employees are fucking far up their employers' asses. If I end up getting a job offer from openAI and accept it, I want my family to put me into a goddamn psych ward; that's how much I frown on these unethical practices.

I often pose this, half-jokingly, to people who don't believe this to be the case, and it's becoming more and more relevant to this fucked-up mess: if AI companies say they've fired X employees for "AI improvements", why hasn't this been adopted by defense companies/contractors or other professions in industry? It's a rhetorical question, but it leads them to a better conclusion than "the reason X employees were fired was AI improvements".

[–] Ludrol@szmer.info 9 points 2 days ago

"There are emerging studies about AI induced psychosis[1], and there is a possibility to go psychotic even if one doesn't have pre-conditions to become one. I would like to be cautious with the danger, like with cigaretes or Thalidomide. You never know how it might be dangerous."


[1] https://arxiv.org/pdf/2507.19218

[–] _cryptagion@anarchist.nexus 9 points 3 days ago

just say that you don't want to use it. why are you trying to figure out good reasons that somebody else came up with to not use something you have to elect to use in the first place? just say "I don't want to use genAI". you don't need to explain yourself any further than that.

[–] agent_nycto@lemmy.world 7 points 2 days ago

What do you normally say that you're worried sounds like an "AI vegan"?

[–] Lladra@lemmy.world 6 points 2 days ago

You'd rather make your own painting than fill in a coloring book?

[–] PeriodicallyPedantic@lemmy.ca 6 points 1 day ago (2 children)

Depending on how hardcore you are about it, you can't.

Are you getting up in people's faces to tell them not to use it, or are you explaining why you choose not to use it?
Are you extremely strict in your adherence, or are you more forgiving depending on the application or the user?

There are two general points I like to make:

  1. Big companies are using it to steal the work of the powerless, en masse. It is making copyright strictly the tool of the powerful to use against the powerless.
  2. If these companies aren't lying and will actually deliver what they say they're going to deliver in the timeline they stated, then it's going to cause mass unemployment, because even if (IF) this creates new jobs for every job it destroys, the market can't move fast enough to invent these new careers in the timeline described. So either they're lying or they're going to cause great suffering, and a massive increase in wealth inequality.

Energy usage honestly never seems to be a concern for people, so I don't even try to make that argument.

[–] quediuspayu@lemmy.dbzer0.com 6 points 3 days ago

What is your viewpoint?
Mine, for example, is that not only do I not need it at all, but it doesn't offer anything of value to me, so I can't think of any use for it.

[–] erlend_sh@lemmy.world 6 points 1 day ago

Here’s a piece I wrote to explain my apprehensive stance on AI to friends and colleagues: https://blog.erlend.sh/non-consensual-technology

[–] ragebutt@lemmy.dbzer0.com 6 points 1 day ago

What is an "extremist view" in this context? Kill Sam Altman? Lmao

Welcome to the world of being an activist, buddy. Vegans are doing it for living beings with consciousness. Your cause is just too, imo, but just like the vegan who feels motivated and justified in bringing up their views because, to them, it's a matter of life and death, you will be belittled and mocked by those who either genuinely disagree or who do recognize the issues you describe but don't have the courage or self-control to change.

Start by speaking up when it's relevant. Note that this will not always win you fans. I recently spoke to my physician on this issue, who asked for consent for LLM transcription of audio session notes and automatic summarization. I am not morally opposed to such a thing for health care providers, but I had many questions: how are records transmitted, stored, and destroyed? Does the model use any data fed into it, or the resultant summaries, for seeding/reinforcement learning/refinement/updating internal embeddings/continual learning? (This point is key, because the language I've seen about this shifts a lot, but basically: do they feed your data back into the model to refine it further, or do they keep separate training and production models so that one stays "sanitary"?) Does the AI model come from the EMR provider (often Epic) or a third party, and if so, is there a BAA? Etc.

In my case my provider could answer exactly 0 (zero) of these, so I refused consent, and I'm actively monitoring to ensure they're continuing not to use it at subsequent appointments. They're a professional, so they've remained professional, but it's created some tension. I get it; I work in healthcare myself, I've seen these tools demoed, and I have colleagues who use them. They save a fairly substantial amount of time, and in some cases they even come with a guarantee against insurance clawbacks, which is a tremendous security advantage for a healthcare provider. But you gotta know what you're doing, and even then you gotta accept that some people simply will be against it on principle. Thems the breaks.
