No is a full sentence.
Oh you want to explain. For those that are really interested, there are websites explaining the main points.
1) Be nice and have fun
Doxxing, trolling, sealioning, racism, and toxicity are not welcome in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them.
2) All posts must end with a '?'
This is sort of like Jeopardy. Please phrase all post titles in the form of a proper question ending with a '?'.
3) No spam
Please do not flood the community with nonsense. Actual suspected spammers will be banned on sight. No astroturfing.
4) NSFW is okay, within reason
Just remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed; please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com.
NSFW comments should be restricted to posts tagged [NSFW].
5) This is not a support community.
It is not a place for 'how do I?'-type questions.
If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions check our partnered communities list, or use the search function.
6) No US Politics.
Please don't post about current US Politics. If you need to do this, try !politicaldiscussion@lemmy.world or !askusa@discuss.online
Reminder: The terms of service apply here too.
Logo design credit goes to: tubbadu
Personally, it's because the harder something is pushed on me by large corporations, the more skeptical I am to begin with.
It is your stance; you don't have to compulsively change other people's minds. Let them live their lives, and you live how you want. For people who want to listen to you, you can tell them how you feel about AI (or perhaps specifically AI chatbots) in both subjective and objective terms. If you want to prepare research and talking points, I think the most effective thing is to have a couple of examples, such as the Google AI box putting out objectively wrong info with citation links leading to sites that don't back up any claim in it. Or how the outputs of comic-style image generation tend to look like knock-off Tintin and appear uninspiring and unsettling. Or how reading generated paragraphs and looking at images and videos of fluffy slop is simply a waste of time for you. Just mix that with all the rest of the shortcomings people have provided and you'll make for a good discussion. Remember, the point is not to change people's minds or proselytize but rather to explain why you hold your opinion.
I just mentioned to a friend of mine why I don't use AI. My hatred towards AI stems from people making it seem sentient, the companies' business models, and of course, privacy.
First off, to clear up any misconception: AI is not a sentient being, it does not know how to think critically, and it's incapable of creating thoughts outside of the data it's trained on. Technically speaking, an LLM is a lossy compression model, which means it takes what is effectively petabytes of information and compresses it down to a mere 40 GB. When it decompresses, it doesn't reproduce the entire petabytes of information; it reconstructs a response resembling the data it was trained on.
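A minimal sketch of this "lossy compression" framing, using a toy bigram model (chosen here purely for illustration; real LLMs are vastly more complex, but the principle of regenerating plausible text from compressed statistics rather than recalling the original data is the same):

```python
import random
from collections import defaultdict

# Toy illustration: a bigram model "compresses" a corpus into next-word
# statistics, then regenerates text that is statistically plausible but
# is not a faithful copy of the original data.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training" = compressing the corpus into next-word lists.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

# "Generation" = sampling from the compressed statistics; the exact
# corpus cannot be recovered, only text shaped like it.
random.seed(0)
word, output = "the", ["the"]
for _ in range(5):
    choices = model[word]
    if not choices:  # dead end: no bigram continues this word
        break
    word = random.choice(choices)
    output.append(word)
print(" ".join(output))
```

Every generated word pair occurred somewhere in the training data, yet the whole sequence may never have: plausible output, no stored original.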
There are several issues I can think of that make an LLM do poorly at its job. Remember, LLMs are trained exclusively on the internet; as large as the internet is, it doesn't have everything. Your skip-list implementation is probably not going to match any codebase from the internet. Assuming you have a logic error in your skip-list implementation and you ask ChatGPT "what's the issue with my codebase", it will notice the code you provided isn't what it was trained on and will actively try to "fix" it, digging you into a deeper rabbit hole than when you began the implementation.
On the other hand, if you ask ChatGPT to derive a truth table given a sum of minterms, it will not ever be correct unless the case is heavily documented (e.g. the truth table of an adder/subtractor). This is the simplest example I could give where these LLMs cannot think critically, cannot recognize patterns, and only regurgitate the information they have been trained on. It will try to produce a solution, but it will always fail.
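For context, deriving a truth table from a sum of minterms is a purely mechanical computation that a short deterministic program gets right every time; the function and minterm list below are made up for illustration:

```python
# Build a truth table from a sum of minterms: the output is 1 exactly
# for the rows whose index appears in the minterm set.
def truth_table(num_vars, minterms):
    rows = []
    for i in range(2 ** num_vars):
        # Bits of the row index, most significant variable first.
        inputs = tuple((i >> (num_vars - 1 - k)) & 1 for k in range(num_vars))
        rows.append((inputs, 1 if i in minterms else 0))
    return rows

# Example: f(A, B, C) = sum of minterms (1, 2, 4, 7), which happens to
# be the odd-parity (3-input XOR) function.
for inputs, out in truth_table(3, {1, 2, 4, 7}):
    print(*inputs, "->", out)
```

The contrast is the point: a dozen lines of ordinary code solve the problem exactly, while a model that only predicts likely text has no such guarantee.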
This leads me to my first point of why I refuse to use LLMs: they unintentionally fabricate a lot of the information and treat it as if it's true, when I started
What are some good reasons why AI is bad?
There are legitimate reasons people worry about AI. Here are some of the strongest, clearly framed concerns:
AI systems often learn from biased data and can unintentionally discriminate—against certain races, genders, ages, or socioeconomic groups—in hiring, lending, housing, policing, and more.
Many AI models act as “black boxes,” making decisions that are hard to explain. This creates problems when the stakes are high (medical diagnosis, legal decisions, etc.).
AI can analyze huge amounts of personal data, track behavior, or identify people through facial recognition—often without explicit consent.
Automation threatens certain categories of work, particularly routine or repetitive jobs. Without proper planning, this can increase inequality and unemployment.
AI makes it easier to create convincing fake audio, video, or text. This can undermine trust in media, fuel propaganda, and destabilize democratic processes.
AI can be used in autonomous weapons, cyberattacks, targeted surveillance, or manipulation—raising serious security and ethical issues.
As AI does more tasks, people may become too dependent, reducing critical thinking, creativity, or expertise in certain fields.
Powerful AI tools tend to be controlled by a few big companies or governments, potentially leading to monopolies, inequality, and reduced individual autonomy.
Advanced AI systems may behave in unexpected or harmful ways if their goals aren’t perfectly aligned with human values—even without malicious intent.
Training large AI models consumes significant energy and resources, contributing to carbon emissions.
If you want, I can also provide reasons why AI is good, help you construct an argument for a debate, or analyze specific risks more deeply.
Were you looking for this kind of reply? If you can't express why you have an opinion, maybe your opinion is not well-founded in the first place. (Not saying it's wrong, just that it might not be justified/objective.)
You beat me to it. To make it less obvious, I ask the AI to be concise, and I manually replace the em-dashes with hyphens.
I haven't tested it, but I saw an article a little while back saying that if you add "don't use em-dashes" to ChatGPT's custom instructions, it'll leave them out from the beginning.
It's kind of ridiculous that a perfectly ordinary punctuation mark has been given such stigma, but whatever, it's an easy fix.