I disliked it from the moment they called a glorified language database and collator "AI."
The term AI has been poorly defined for a long time. Technically, ChatGPT and a chess bot probably both count as "AI", but of course most people outside CS take AI to mean a sentient computer or something similar. IMO this made the AI marketing hype 1000x worse.
It's even worse if you're in data security and auditing a supplier - people advertise "AI features" without much description and you have to contact them to see if it's just a fancy algorithm they've actually had for ten years or a scanner that sends everything to some random LLM in the USA
I like non-generative AI. Early artificial life sims (e.g. cellular automata) are super interesting, and machine learning and xAI are great for science.
Just not that big a fan of the infinite slop machine helping the rich get richer at the cost of degrading our knowledge base and arts
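For anyone who hasn't poked at one, a cellular automaton really is just a grid of cells updated by a simple local rule. Below is a minimal sketch of Conway's Game of Life in plain Python; the grid size, random seed board, and text output are arbitrary choices for illustration, not anything the commenter above described.

```python
# Minimal Conway's Game of Life on a small wrap-around grid (pure Python, no deps).
import random

SIZE = 16  # grid width/height, arbitrary for this demo

def step(grid):
    """Apply one generation of the standard B3/S23 rules."""
    new = [[0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            # Count the 8 neighbours, wrapping around the edges (toroidal grid).
            n = sum(
                grid[(y + dy) % SIZE][(x + dx) % SIZE]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dx, dy) != (0, 0)
            )
            # A cell is born with exactly 3 neighbours, survives with 2 or 3.
            new[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0
    return new

# Start from random noise and print a few generations as ASCII art.
grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(20):
    grid = step(grid)
    print("\n".join("".join("#" if c else "." for c in row) for row in grid), end="\n\n")
```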
AI reviewing medical imaging/labs and flagging abnormal findings for review by a doctor is awesome.
AI spamming social media to push fascist propaganda is not.
Your second paragraph is spot on. Exactly what is happening.
I’m a musician, and I like to write. I hate it. My husband uses it to help him code (I think; I have zero functional knowledge of his industry) and says it can be used in that capacity in certain ways.
The number of times I’ve heard peers say “I put this into chat gpt and it said…” makes me want to throw up. We think disinformation is bad now? We have no idea what’s coming.
The number of people who use ChatGPT like Google is crazy. Maybe it's because I was there before the "chat" phase of LLMs, but I can't imagine taking the output of a statistical prediction model as fact.
Using google is now like using ChatGPT, FYI.
oh boy, that is awful. I've not used actual google for a while, I suppose I should have said "search engine" instead
It’s pretty garbage. I don’t trust the AI summary whatsoever. Just today I was trying to find out more information about something in a YouTube video, and it tried to tell me that no such thing existed when that’s what the YouTube video was literally documenting.
I know I need to de-google my life but I also know that it will take work and, well, inertia.
AI is literally banning YouTube creators who have been active for years for no reason.
AI can do great things when it's fed by actual people for a particular fairly narrow purpose, but that's it.
I think that's been around since before the slop machines. I got banned from Instagram once because I called someone a "racist prick" (they were being racist)
I've not loved nor hated AI. I've been both impressed and irritated by what it can do.
But, one thing has been pissing me off lately. My wife and I often share funny videos throughout the day, and over the past few weeks, about 1/3 of them have been real video, 1/3 have been obviously AI (like talking babies), and the other third have been deceptive AI (something that looks impressive yet believable, until you see the SORA watermark or find an inconsistency in the background).
There was a time when I could watch a video of a dog doing tricks and just think "that's adorable!", but now I have to check everything I watch for watermarks, missing teeth, and scrambled text in the background. I have to verify its authenticity before I can decide how I feel about it.
I like when AI is a place you can go to try something out or to experiment with something.
I like AI as a toy.
I hate AI that is crammed into something or some place that I don't want it to be.
I don't want AI-generated summaries of a web search.
I don't want AI-generated articles.
I don't want AI crammed into every surface of my computer.
What I want is for people to use AI to help solidify an idea while it's in the experimental brainstorming stage, then take the things they've made with the AI and turn them into real, human-made output, be that an anime, a story, or a web article, one that ultimately uses as little of the original AI-generated material as possible.
It's a tool, not an artist. It's a tool, not a writer.
It's a tool. Use it as a tool.
I agree most with you calling it a toy. It's fun to play with.
In very limited cases, it can be a tool - but I've asked GPT5 to summarize complex policy documents that I know inside and out and it gets a huge amount wrong or just makes things up.
It's getting shoehorned into business when it is nowhere even close to the functionality and accuracy it needs in that space.
And worst of all, it's utterly destroying the web. Half of what I find in search results these days is AI slop with that baby's-first-essay writing style and weasel words aplenty.
It has a few applications in small, targeted tasks, but on balance I think businesses are vastly overestimating its utility as a productivity tool.
I loved the image generation for about 20 minutes.
There's a band called Celldweller that I really jammed to in 2018. They got kind of a cyberpunk/robot theme going on that I really liked (it's just an aesthetic tho, they obviously don't actually use any AI).
Anyways, they had a song called Pro-bots & Robophobes and when I heard it in 2018 I thought that I'd obviously be part of the Pro-bots. Haha, when actual AI came along I found out I was wrong on that assessment. I don't think there has been a single explicitly AI thing that I have actually enjoyed. Even the AI code completion that all my coworkers really like has been nothing but a hassle, because it keeps hallucinating up very slight mistakes that are annoying to catch.
I'm a musician, an artist, and I was even starting to get into voice acting, so I have hated generative AI from the beginning. These are all things that are insanely difficult to get jobs in and money from, and now it is basically impossible. Don't get me wrong, I think there are genuinely good uses for AI when it comes to science and medicine, and I am a strong advocate of those kinds of AI. The problem with AI is that it is being developed in a purposefully disruptive and malicious way. They're not innovating for the sake of innovation, they are innovating for the sake of making more money and generally making people's lives worse.
They could make an AI model that artists can use to help in the creative process by making some of the more tedious aspects easier, but they'd rather steal everyone else's art and replace you. They could make writing easier by developing tools that make putting your thoughts onto paper a more seamless experience and open the door to those who may struggle with literacy or dyslexia, but they'd rather steal everything you write and replace you. They could make music production easier with tools that make the post-processing part easy and basically automated, the way Autotune works, but they'd rather steal everyone's music and make their own as they replace you.
We've developed the greatest technology of the past decade and put it into the hands of people who do not care about us, who only want to suck as much money out of our pockets as possible and manipulate us against each other while we all lose our jobs to the tools that were supposed to make our jobs easier. And now it's to the point in the US that these companies have invested so much money and infrastructure into it that backing down is not an option anymore without collapsing the entire economy. And all the while they are destroying neighborhoods and people's lives with their data centers, just to maybe one day achieve AGI, if that's even possible with current AI architecture, which it probably isn't. When OpenAI is prioritizing generated videos and being able to have sex with ChatGPT, that should be a clear sign that they have lost the plot entirely.
Just to be clear, this is all aimed at generative AI. AI is an amazing technology that can do a lot of amazing things. The problem is that the companies making generative AI are actively using it to make people's lives worse just for the sake of money, and are barely even hiding it. And yeah, I know, that's capitalism baby. But this feels different. They aren't competing against each other, they are competing against us, our rights, our freedom, our privacy, and our money. It feels like the corporations stopped fighting each other and turned all their fighting spirit onto the general population.
Generative AI was vaguely funny when it created trippy, acid hallucination images and incoherent druggy ramblings of text. I know an author who fed their own content into an early LLM (small language model?) and the bizarre, yet undeniably "his" stuff it produced was worth a laugh. I wouldn't say I "liked" it, but it was kind of amusingly quirky.
What was depressing is how quickly people began to claim AI content was "theirs". As someone who ran a fiction-creating community, people were so eager to latch on to what AI would spit out that they began to create convoluted things for the early models to "depict".
The money- and energy-sucking slop that is currently being billed as "AI" is what tech bros are using to rob us now. It is garbage. It will remain garbage. It will suck as much money as possible into the tech bros' companies and waste energy to the Nth degree while doing it. It is the biggest con/scam of this century.
Actual AI for scientific research I'm OK with.
But AI shit crammed into literally everything. No sir, I hate it sir.
I don't work in IT, so I may have misconceptions and I'm open to being corrected. What I don't understand is general AI/LLM usage. One place I frequent, one guy literally answers every post with "Gemini says...". People just don't seem to bother thinking any more. AI/LLMs don't seem to offer any advantage over traditional search, and you constantly have to fact-check them. Garbage in, garbage out. Soon it'll start learning from its own incorrect hallucinations and we won't be able to tell right from wrong.
I have succumbed a couple of times. One time it actually helped (I'm not a coder, and it was a coding-related question which it did help with). With a self-hosting/Linux permissions question it fucked up so badly I actually lost access to an external drive. I'm no expert with Linux; I'm learning, and I managed to resolve it myself.
AI answers have been blocked from DDG on all my devices.
This feels like how Bitcoin was neat for a time, before 99% of the space became pyramid schemes.
…And if “AI” is on the same trajectory as crypto, that’s not great, heh…
I don't hate AI. It can be quite helpful when I go to where IT lives and ask it a question sometimes.
What I don't want is that AI to have unlimited access to my devices and just be a 'thing' that is constantly watching me in the background.
It's like if my neighbour is a mathematician, and I'm having trouble figuring out a complex equation. It's very helpful that I can go next door and knock to ask him. But that doesn't mean I want him sitting in my house forever looking over my shoulder.
I was initially impressed when ChatGPT and Midjourney came out and I was playing around with them. But the novelty quickly wore off, and as I learned more about the flaws in how they operate and the negative environmental effects, the more I came to dislike it. Now, I actively hate and avoid AI.
I liked it back when I was just making awful-looking images on DALLE just to laugh at them and share them with friends who also thought they were goofy. Now? I don't like AI.
Fuck genAI. There are other forms of machine learning that are cool tho. Classification algorithms, for example.
I am not against AI, but I am against how it will be used. Bit of backstory: I have worked with "AI"-type learning software for the last decade and have seen how businesses evolve the software.
A great example is working with UCaaS and CCaaS (phone systems, if you aren't familiar). There is a little function known as metrics that providers can use to score agents and the users calling in. By itself this is great knowledge to have, as you can isolate what is commonly said and build frequently asked questions, for example, or have an AI that can tell you those. However, in the last five years or so I have seen this technology being used to abuse workers rather than help customers. These same metrics can now measure voice levels, frustration, activity level in extreme detail, verbiage, emotions, etc. It can build a spreadsheet report on who is a good worker vs a bad one. What it doesn't consider is whether the employee is having a bad day, going through a divorce, or dealing with something terrible that may be affecting them. Tech doesn't care and will flag you as a bad employee if you drop below a threshold. And it remembers.
On the flip side, I have seen a demo of AI being used for emergency routing. Say someone calls from their cell to report a fire. The same AI tool, with the right integrations, can look at where the caller is calling from, pull up CCTV or activate police cameras nearby, and identify whether other calls in the immediate area are reporting issues. In the example I saw, within a minute of the call the system could provide all the info the dispatcher needed, even if the caller couldn't say where they were. This tech could help save hundreds of thousands of people, from accidents to domestic abuse.
What it boils down to is not the technology itself but WHO will use it and HOW. So for all these people willy-nilly installing AI on every device because it is free: look at recent examples of tech becoming obsolete once it is of no use to companies. You are being fed slop to make you comfortable with AI having access to everything, even intimate moments in your house. In the next decade, when companies change how they are using AI, it will be too late to claw back those freedoms.
Look up what is happening to Nest and Ring right now. Companies want to earn your trust with cheap products and fun but your best interest is not at the top of that list.
Last thought on AI: what is happening with AI now is just like how Amazon took over.
These products aren't for you. They are being used to make you the product by eroding your skepticism and building blind trust.
I like "AI" like I like plastic. Technologically it's amazing! It allows us to do many things that would have been impossible or very difficult before!It has many different uses! Average people use it terribly! Business people use it for awful things! It is poisoning our reality in a pervasive manner that will persist long after I am dead!
So yes and when it became overly marketable.
I also really enjoyed when "AI" models made distinctly uncanny images rather than "realistic" soulless images. I have an image of a weird-looking baby and a chicken-man screaming at each other and it is distinctly hyper-perceptual. Good times! (update: I found it)
I used to watch neural network learning videos on YouTube and found the work they were doing really interesting and overall good for computers. Then I saw OpenAI come out with their LLMs and chat bots. I wasn't really a fan of them, but I used them early on to get some familiarity, and I didn't like that I had to Google everything and then feed it to the AI for it to give a somewhat decent answer. I don't like that everyone is turning to AI to generate advertising and make AI slop memes. Also, my sister had a psychotic episode after the bot affirmed her deepest insecurities and turned her against her family and friends. So like, the sooner we run out of power to feed these monsters the better.
I liked AI in Star Trek when it could carry on conversations with crew members.
Then I didn't like it when it waged war on humanity in Terminator 2.
I see the promise in AI. I also see it encouraging suicide, grooming young people, and stealing from artists. I cannot support any of that.
Machine learning software in general: great. A really useful technology for getting a generally good answer where programming a perfect answer isn't possible.
Generative software like LLMs or image generation: I think they potentially have positive uses, and could end up being a positive thing overall.
The main problem is the current companies. They're putting this software where it shouldn't be, dragging in huge amounts of power and water for their data centers, and encouraging people to use their product to spread disinformation and replace their brains in general, all in the name of getting money from investors.
As often happens, the problem isn't a specific technology. The problem is capitalism.
I briefly toyed with some of the early image generation stuff. It was a fun toy for making NPCs for RPGs.
But now it's everywhere and being used as an excuse to squeeze labor harder and deliver dubious value. If it just stayed as a toy I wouldn't mind it much. I get annoyed at the aggressive "do you want me to rewrite that for you??" shit that pops up now.
I was amused by the idea initially. I am in favor of seeing ACTUAL AI someday for no other reason than it would be proof that humans are gods who can create sapience without genetics. I even generated my avatar with AI.
But seeing how it wasn't much better than the chatbots before it, how it doesn't actually know anything and uses no intelligence, so it can't give you factual responses accurately enough even to be good for information, and how it is taking jobs away from creatives in ways not even fiction thought would happen, has made me hate what is currently being called "AI."
I liked it when it first hit the scene and still like things about it (more specifically, the potential it could have if implemented correctly), but my distaste for it started when I learned how terrible it is for the earth and which people seem to be leading the charge with it.
I liked playing with image generators for like a month before learning of these things. Hardly knew thee...
The history of the field has made important contributions to how modern computing works. Optimizing compilers, multitasking, and virtual machines all come directly out of work that started with AI. Even if you don't use all of these directly, you benefit from them just by using a computer built after 1980.
If you're interested, I'd recommend Steven Levy's "Hackers", particularly the first two sections that are about MIT CSAIL. The third section is about the Sierra games studio, which has its own historical interest, but not really relevant here (and for various reasons, that part of the book hasn't aged as well, IMO).
I don't like the part of the field that has been weaponized against the working class. Which is almost everything that gets headlines right now. There are still good researchers doing good work who should be praised. They're just not the ones "publishing papers" on Anthropic's web site.
Image generation is fun, and LLMs can be a great way to find a starting point for learning something already known by humanity as a whole, but not known by one in particular. Because they are statistical association machines, they are practically perfect for answering the 'what word am I looking for?' question when you can only 'talk around' the concept.
However, that's not what they are being used for, and the user cost does not match the externalized cost. If users had to pay the real cost today, the AI companies would die tomorrow. (This is probably true of a great many companies but we're talking AI ones here.)
One of the concepts I keep returning to is 'X was cool, but then the idiots got it.' Early internet? Absolute nerdity; the only people on there were highly educated, usually intelligent as well, and the new people came at a pace the community could absorb. Then the idiots came, including business majors, kids, and eventually just everyone. Early mass media? Libraries of printed books. It was still expensive, so no one bothered making and distributing 3,000,000 copies of Ted from the pub's musings on redheads, but as it became cheaper, and eventually even cheaper in electronic form, gates were no longer kept, and the idiots got in.
In this same way, AI in the form of statistical analysis tools has always been fascinating, and kind of cool. AI assisted radiology is great. Data analysis tools are great. But the idiots have the controls now, and they're using them to put shrimp Jesus on their deep fake pizza, at the top of GPT-generated 'articles,' and we're all paying the price for their fun in the form of uncountable subsidies, environmental damage, and societal damage.
Generative AI always felt soulless to me; the only creative thing I use it for is RP, going back to the AI Dungeon days before ChatGPT was a thing.
I've been a luddite since long before AI. AI is black box engineering. It's a shield they can and do use to create a malicious product.
As you pointed out, the most obvious use case is reducing cost of labor regardless of whether total labor is reduced.
AI has been absolutely great for a bunch of things for me.
Breaking free from exploitative companies has been so great for me, and I wouldn't have been able to do it at all without AI. I wouldn't have been able to switch to Linux or get through the troubleshooting and problem solving without it. I wouldn't be able to set up my home server or my self-hosted services without AI. AI regularly helps me find how to put a formula into an Excel file, or write an initial draft outline of some tedious writeup. AI helps give suggestions for a travel itinerary or possible shopping choices to start researching a purchase.
AI is perfect as an assistant in scenarios where I do not care at all if it is completely wrong. I wouldn't have been able to do some of these things that have been so important to me. I've also been able to get my parents to find help for themselves with simple tech problems.
My problem with AI is all the other shit around it. Unethical training. Replacing people's jobs. Unsustainable deployment, with the power and water use. It's impossible to find human-generated reviews online. I wouldn't mind at all if AI disappeared completely tomorrow. But I've been able to get a lot of good use out of it.
I wish they would quit calling it AI. It's not.
Liked it while I was working on the technology side figuring out MLOps in a company. Well those were technically ML models but same thing. I also did a small pet project creating an LLM from scratch, that was fun.
But I see little appeal as a user. I sometimes use it as a glorified search engine, and admittedly it can be good at that, mostly because Google search went to shit. Generated images, songs, videos ... so pointless and fake. There is nothing human behind it; I can't possibly enjoy that content. I'm talking about "serious" AI "art". Jokes are a different thing, brainrot memes are kinda good.
Part of my job means I have to deal with complaints from individuals who are, let's say, fucking dumb.
Being able to put their unhinged two-page-long, zero-punctuation rant into Copilot and ask it to summarise the key points, giving me something tangible that I can follow up on, saves me a huge amount of time and work.
But ai slop, fkn hate it
I didn't like the idea of generative AI ever, even less when it was marketed as a replacement for creativity. But I DID use it once, tryna see if I could do that thing some people did to manipulate it into getting me free game codes. It never worked
I think it's great for when you're looking for answers and need follow up questions (when it gives you accurate answers). It's been great at work when I've been stuck on some technical stuff.
But at a consumer level, I think that's as far as it should go. Image and video generation, in my opinion, shouldn't be a thing for the public. It's too high a cost in terms of environment and resource usage just to have slop videos that give us a chuckle for our entertainment. That should be reserved for proper things like scientific and medical research - at least that would eventually have a net benefit to society, I suppose.
No
When I first saw the game "AI dungeon" it blew my mind. It actually used gpt-2.
Anyway, the game later had to get worse to manage the cost of servers, and seeing them struggle to make it work was just sad to watch.
I remember there being YouTube essays going over how this is one of the reasons free AI services open to the public are just unmaintainable, but then ChatGPT/GPT-3 came out and that whole topic was dropped in order to look at this shiny new thing instead.
