this post was submitted on 24 Nov 2025
199 points (95.0% liked)

Ask Lemmy


I want to let people know why I'm strictly against using AI in anything I do, without sounding like an 'AI vegan', especially in front of those who are genuinely ready to listen and follow suit.

Any sources I find to cite for my viewpoint are either so bland they could pass for AI-generated themselves, or are filled with the author's extremist views. I want to explain the situation objectively, in a way that is simple to understand yet alarming enough for them to take action.

(page 3) 30 comments
[–] Strider@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

No is a full sentence.

Oh, you want to explain. For those who are really interested, there are websites explaining the main points.

[–] Rentlar@lemmy.ca 1 points 1 day ago

Personally, it's because the harder large corporations push something at me, the more skeptical of it I am to begin with.

It is your stance; you don't have to compulsively change other people's minds. Let them live their lives, and you live how you want.

For people who do want to listen to you, you can tell them how you feel about AI (or perhaps specifically AI chatbots) in both subjective and objective terms. If you want to prepare research and talking points, I think the most effective thing is to have a couple of examples: the Google AI box putting out objectively wrong info, with citation links leading to sites that don't back up any of its claims; or how comic-style image generation tends to look like knock-off Tintin, uninspired and unsettling; or how reading generated paragraphs and looking at images and videos of fluffy slop is simply a waste of your time. Mix that with all the other shortcomings people here have provided and you'll have a good discussion.

Remember, the point is not to change people's minds or proselytize, but to explain why you hold your opinion.

[–] solomonschuler@lemmy.zip 0 points 1 day ago

I just explained to a friend of mine why I don't use AI. My hatred of AI stems from people making it seem sentient, from the companies' business models, and, of course, from privacy concerns.

First off, to clear up any misconception: AI is not a sentient being, it does not know how to think critically, and it is incapable of creating thoughts beyond the data it was trained on. Technically speaking, an LLM behaves like a lossy compression model: it takes what is effectively petabytes of information and compresses it down to a mere ~40 GB. When it "decompresses", it doesn't reconstruct the entire petabytes of information; it reconstructs a response resembling what it was trained on.
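For a rough sense of where a figure like 40 GB comes from, here is a back-of-the-envelope sketch; the parameter count, precision, and corpus size are illustrative assumptions, not numbers from the comment above:

```python
# Back-of-the-envelope arithmetic for the "petabytes in, ~40 GB out" claim.
# Assumes a hypothetical ~20-billion-parameter model stored in 16-bit
# (2-byte) precision and a ~1 PB training corpus; both figures are made up
# for illustration.
params = 20e9              # number of weights (assumption)
bytes_per_param = 2        # fp16/bf16 storage
weights_gb = params * bytes_per_param / 1e9
print(f"weights: {weights_gb:.0f} GB")          # -> 40 GB

corpus_bytes = 1e15        # ~1 PB of training data (assumption)
ratio = corpus_bytes / (params * bytes_per_param)
print(f"compression ratio ~ {ratio:,.0f}:1")    # -> 25,000:1
```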

There are several issues I can think of that make an LLM do poorly at its job. Remember, LLMs are trained exclusively on the internet, and as large as the internet is, it doesn't have everything: your codebase with its skip-list implementation is probably not going to match anything on the internet. Assuming you have a logic error in your skip-list implementation and you ask ChatGPT "what's the issue with my codebase", it will notice the code you provided isn't what it was trained on and will actively try to "fix" it, digging you into a deeper rabbit hole than the one you started in.
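To make the "subtle logic error" scenario concrete, here is a hypothetical sketch of a skip-list search with exactly that kind of bug; the code is invented for illustration and is not from the comment:

```python
class Node:
    def __init__(self, key, levels):
        self.key = key
        self.forward = [None] * levels  # next pointer at each level

def search(head, key):
    """Find the node with `key` in a skip list, or return None."""
    node = head
    for level in reversed(range(len(head.forward))):
        while node.forward[level] and node.forward[level].key < key:
            node = node.forward[level]
    # Bug: the loops stop at the *predecessor* of `key`. The missing step
    # `node = node.forward[0]` means this check can never succeed, so the
    # search returns None even when the key is present.
    return node if node.key == key else None
```

Shown code like this, a model may well restructure the loops or the class rather than spot the single missing line, which is the kind of rabbit hole the comment describes.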

On the other hand, if you ask ChatGPT to derive a truth table from a given sum of minterms, it will never be correct unless the case is heavily documented (e.g. the truth table of an adder/subtractor). This is the simplest example I can give of how these LLMs cannot think critically, cannot recognize patterns, and only regurgitate the information they have been trained on. They will try to produce a solution, but they will always fail.
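For context, expanding a sum of minterms into a truth table is purely mechanical, which is what makes it a telling test case. A minimal sketch; the three-variable function and minterm set are example values, with Σm(1, 2, 4, 7) being the sum bit of a full adder:

```python
from itertools import product

def truth_table(num_vars, minterms):
    """Expand a sum-of-minterms spec into a full truth table."""
    table = []
    for bits in product([0, 1], repeat=num_vars):
        index = int("".join(map(str, bits)), 2)   # row number = minterm index
        table.append((bits, 1 if index in minterms else 0))
    return table

# Example: f(A, B, C) = sum of minterms m(1, 2, 4, 7), the sum output of a full adder.
for bits, out in truth_table(3, {1, 2, 4, 7}):
    print(bits, "->", out)
```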

This leads me to my first point about why I refuse to use LLMs: they unintentionally fabricate a lot of the information and treat it as if it's true, when I started

[–] NoSpotOfGround@lemmy.world -2 points 3 days ago (7 children)

What are some good reasons why AI is bad?

There are legitimate reasons people worry about AI. Here are some of the strongest, clearly framed concerns:

1. Bias and unfair decisions

AI systems often learn from biased data and can unintentionally discriminate—against certain races, genders, ages, or socioeconomic groups—in hiring, lending, housing, policing, and more.

2. Lack of transparency

Many AI models act as “black boxes,” making decisions that are hard to explain. This creates problems when the stakes are high (medical diagnosis, legal decisions, etc.).

3. Privacy risks

AI can analyze huge amounts of personal data, track behavior, or identify people through facial recognition—often without explicit consent.

4. Job displacement

Automation threatens certain categories of work, particularly routine or repetitive jobs. Without proper planning, this can increase inequality and unemployment.

5. Misinformation and deepfakes

AI makes it easier to create convincing fake audio, video, or text. This can undermine trust in media, fuel propaganda, and destabilize democratic processes.

6. Weaponization

AI can be used in autonomous weapons, cyberattacks, targeted surveillance, or manipulation—raising serious security and ethical issues.

7. Overreliance and loss of human skills

As AI does more tasks, people may become too dependent, reducing critical thinking, creativity, or expertise in certain fields.

8. Concentration of power

Powerful AI tools tend to be controlled by a few big companies or governments, potentially leading to monopolies, inequality, and reduced individual autonomy.

9. Alignment and control risks

Advanced AI systems may behave in unexpected or harmful ways if their goals aren’t perfectly aligned with human values—even without malicious intent.

10. Environmental impact

Training large AI models consumes significant energy and resources, contributing to carbon emissions.


If you want, I can also provide reasons why AI is good, help you construct an argument for a debate, or analyze specific risks more deeply.

Were you looking for this kind of reply? If you can't express why you hold an opinion, maybe that opinion is not well founded in the first place. (Not saying it's wrong, just that it might not be justified/objective.)

[–] AmidFuror@fedia.io 1 points 3 days ago (1 children)

You beat me to it. To make it less obvious, I ask the AI to be concise, and I manually replace the emdashes with hyphens.
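For what it's worth, that manual cleanup is a one-liner; a minimal sketch, with the function name made up for illustration:

```python
def strip_em_dashes(text: str) -> str:
    # U+2014 is the em dash; swap it for a plain hyphen.
    return text.replace("\u2014", "-")

print(strip_em_dashes("AI-flavored prose\u2014like this\u2014leans on em dashes"))
```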

[–] FaceDeer@fedia.io 0 points 3 days ago

I haven't tested it, but I saw an article a little while back saying you can add "don't use emdashes" to ChatGPT's custom instructions and it'll leave them out from the start.

It's kind of ridiculous that a perfectly ordinary punctuation mark has been given such stigma, but whatever, it's an easy fix.
