Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
Rules:
1) Be nice and have fun
Doxxing, trolling, sealioning, racism, and toxicity are not welcomed in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them
2) All posts must end with a '?'
This is sort of like Jeopardy. Please phrase all post titles in the form of a proper question ending with ?
3) No spam
Please do not flood the community with nonsense. Actual suspected spammers will be banned on sight. No astroturfing.
4) NSFW is okay, within reason
Just remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed; please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com.
NSFW comments should be restricted to posts tagged [NSFW].
5) This is not a support community.
It is not a place for 'how do I?' type questions.
If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions check our partnered communities list, or use the search function.
6) No US Politics.
Please don't post about current US Politics. If you need to do this, try !politicaldiscussion@lemmy.world or !askusa@discuss.online
Reminder: The terms of service apply here too.
There are legitimate reasons people worry about AI. Here are some of the strongest, clearly framed concerns:
1. Bias and unfair decisions
AI systems often learn from biased data and can unintentionally discriminate—against certain races, genders, ages, or socioeconomic groups—in hiring, lending, housing, policing, and more.
2. Lack of transparency
Many AI models act as “black boxes,” making decisions that are hard to explain. This creates problems when the stakes are high (medical diagnosis, legal decisions, etc.).
3. Privacy risks
AI can analyze huge amounts of personal data, track behavior, or identify people through facial recognition—often without explicit consent.
4. Job displacement
Automation threatens certain categories of work, particularly routine or repetitive jobs. Without proper planning, this can increase inequality and unemployment.
5. Misinformation and deepfakes
AI makes it easier to create convincing fake audio, video, or text. This can undermine trust in media, fuel propaganda, and destabilize democratic processes.
6. Weaponization
AI can be used in autonomous weapons, cyberattacks, targeted surveillance, or manipulation—raising serious security and ethical issues.
7. Overreliance and loss of human skills
As AI does more tasks, people may become too dependent, reducing critical thinking, creativity, or expertise in certain fields.
8. Concentration of power
Powerful AI tools tend to be controlled by a few big companies or governments, potentially leading to monopolies, inequality, and reduced individual autonomy.
9. Alignment and control risks
Advanced AI systems may behave in unexpected or harmful ways if their goals aren’t perfectly aligned with human values—even without malicious intent.
10. Environmental impact
Training large AI models consumes significant energy and resources, contributing to carbon emissions.
If you want, I can also provide reasons why AI is good, help you construct an argument for a debate, or analyze specific risks more deeply.
Were you looking for this kind of reply? If you can't express why you hold an opinion, maybe that opinion isn't well founded in the first place. (Not saying it's wrong, just that it might not be justified/objective.)
Please, for the love of god, tell me you didn't write that post with AI, because it really looks like that was written with AI.
Except for the first phrase and the last paragraph, it was AI. Honestly, it feels like OP is taunting us with such a vague question. We don't even know why they dislike AI.
I'm not an AI lover. It has its place and it's a genuine step forward. It is worth less than what most proponents think, and more than what detractors do.
I only use it myself for documentation on the framework I program in, and it's reasonably good for that, letting me extract more info quicker than reading through it. Otherwise haven't used it much.
“Good catch! I did make that up. I haven’t been able to parse your framework documentation yet”
My question was genuine. I wasn't an avid user of generative AI when it was first released, and lately I've decided against using it at all. I tried to use it in niche projects and it was completely unreliable. Its tone of speech is bland, and the way it acts as a friend feels disturbing to me. Plus, the environmental destruction it is causing on such a large scale is honestly depressing to me.
All that being said, it is not easy for me to communicate these points clearly to someone else the way I have experienced them. It's like the case for informing people about privacy: casual users aren't inherently aware of the consequences of using this tool and consider it a godsend. It will be difficult to convince them that the tool they cherish so much is not that great after all, so I am asking here what the best approach would be.
Isn't that exactly the answer you are looking for?
The "environmental destruction" angle is likely to cause trouble because it's objectively debatable, and often presented in overblown or deceptive ways.
You beat me to it. To make it less obvious, I ask the AI to be concise, and I manually replace the emdashes with hyphens.
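(For anyone who'd rather not do that replacement by hand, here is a minimal Python sketch of the same cleanup; the function name is made up for illustration.)

```python
def strip_emdashes(text: str) -> str:
    # Replace the em dash (U+2014) with a plain hyphen,
    # mimicking the manual find-and-replace described above.
    return text.replace("\u2014", "-")

print(strip_emdashes("AI tracks behavior\u2014often without consent"))
# prints: AI tracks behavior-often without consent
```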
I haven't tested it, but I saw an article a little while back that you can add "don't use emdashes" to ChatGPT's custom instructions and it'll leave them out from the beginning.
It's kind of ridiculous that a perfectly ordinary punctuation mark has been given such stigma, but whatever, it's an easy fix.