this post was submitted on 16 Aug 2025
488 points (93.7% liked)

Technology

74359 readers
2652 users here now

[–] just_another_person@lemmy.world 164 points 1 week ago (17 children)

We hate it because it's not what the marketing says it is. It's a product that the rich are selling to remove the masses from the labor force, only to benefit the rich. It literally has no other productive use for society aside from this one thing.

[–] Diurnambule@jlai.lu 83 points 1 week ago (1 children)

And it falsely makes people think it can replace qualified workers.

[–] Valmond@lemmy.world 39 points 1 week ago (1 children)

And it falsely makes people think it can make art.

[–] serg@mastodon.au 21 points 1 week ago (2 children)

@just_another_person @corbin and it will inevitably turn into an enshittified disaster when they start selling everyone's data.

[–] just_another_person@lemmy.world 21 points 1 week ago
  1. they've already stolen everything
  2. other companies already focus on illegally using data for "AI" means, and they're better at it
  3. Everyone already figured out that LLMs aren't the "Assistant" features we were being promised 15 years ago
  4. None of these companies have any sort of profit model. There is no "AI" race to win, unless it's about who gets to fleece the public for their money faster.
  5. Tell me who exactly benefits when AGI is attainable (and to be clear for laymen: it's not a real thing achievable with this tech at all). Who in the fuck are you expecting to benefit from this in the long run?
[–] KnitWit@lemmy.world 120 points 1 week ago* (last edited 1 week ago) (6 children)

Someone on bluesky reposted this image from user @yeetkunedo that I find describes (one aspect of) my disdain for AI.


Text reads: Generative AI is being marketed as a tool designed to reduce or eliminate the need for developed, cognitive skillsets. It uses the work of others to simulate human output, except that it lacks grasp of nuance, contains grievous errors, and ultimately serves the goal of human beings being neurologically weaker due to the promise of the machine being better equipped than the humans using it would ever exert the effort to be. The people that use generative AI for art have no interest in being an artist; they simply want product to consume and forget about when the next piece of product goes by their eyes. The people that use generative AI to make music have no interest in being a musician; they simply want a machine to make them something to listen to until they get bored and want the machine to make some other disposable slop for them to pass the time with.

The people that use generative AI to write things for them have no interest in writing. The people that use generative AI to find factoids have no interest in actual facts. The people that use generative AI to socialize have no interest in actual socialization.

In every case, they've handed over the cognitive load of developing a necessary, creative human skillset to a machine that promises to ease the sweat-equity cost of struggle. Using generative AI is like asking a machine to lift weights on your behalf and then calling yourself a bodybuilder when it's done with the reps. You build nothing in terms of muscle, you are not stronger, you are not faster, you are not in better shape. You're just deluding yourself while experiencing a slow decline due to self-inflicted atrophy.

[–] bulwark@lemmy.world 38 points 1 week ago (2 children)

Damn that hits the nail on the head. Especially that analogy of watching a robot lift weights on your behalf then claiming gains. It's causing brain atrophy.

[–] tehn00bi@lemmy.world 18 points 1 week ago

But that is what CEOs want. They want to pay for a near-superhuman to cover all of the different skill sets (hiring, firing, finance, entry-level engineering, IT tickets, etc.), and it looks like it is starting to work. It seems like solid engineering students who graduated recently have all been struggling to land decent starting jobs. I'll grant it's not as simple as this explanation, but I really think the wealth class are going to be happy riding this flaming ship right down into the depths.

[–] OpenStars@discuss.online 14 points 1 week ago (10 children)

Everyone who uses AI is slowly committing suicide, check ✅

[–] latenightnoir@lemmy.blahaj.zone 14 points 1 week ago* (last edited 1 week ago) (1 children)

Well, philosophical and epistemological suicide for now, but snowball it for a couple of decades and we may just reach the practical side, too...

Edit: or, hell, maybe not even decades given the increase in energy consumption with every iteration...

[–] OpenStars@discuss.online 10 points 1 week ago (2 children)

When technology allows us to do something that we could not do before, like crossing an ocean or flying a distance that would previously have taken years and cost many lives along the way, or saving lives outright, then it unquestionably offers a benefit.

But when it simply eases some task, like using a car rather than a horse to travel, it requires discipline to integrate into our lives in a balanced manner, and it becomes a source of potential danger if we allow ourselves to misuse it.

Even agriculture fits that pattern: it allows people to eat who put forth no effort into growing the food, or even into preparing it for consumption.


This is what CEOs are pushing on us: partly because number must go up, but also because many genuinely believe they want what it has to offer, not quite having thought through what it would mean if they got it (or, more to the point, if others did, empathy not being their strongest attribute).

[–] RobotZap10000@feddit.nl 72 points 1 week ago (17 children)

Ed Zitron is one of the loudest opponents of the AI industry right now, and he continues to insist that "there is no real AI adoption." The real problem, apparently, is that investors are getting duped. I would invite Zitron, and anyone else who holds the opinion that demand for AI is largely fictional, to open the app store on their phone on any day of the week and look at the top free apps charts. You could also check with any teacher, student, or software developer.

[Image: A screen showing the Top Free Apps on the Apple App Store. ChatGPT is in first place.]

ChatGPT has some very impressive usage numbers, but the image tells on itself: it's a free app. The conversion rate (the percentage of users who start paying) is absolutely piss poor, with the very same Ed Zitron estimating it at ~3% of 500,000,000 users. Nor does it bode well that OpenAI still loses money even on its $200/month subscribers. People use ChatGPT because it's been shoved down their throats by media that never question the sacred words of the executives (snake oil salesmen) who utter lunatic phrases like "AGI by 2025" (such a quote exists somewhere, though I don't remember if that exact year was used). People also use ChatGPT because it's free, and it's hard to say no to someone doing your homework for you for free.
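Back-of-the-envelope, the figures cited above (Zitron's estimates, not official OpenAI numbers) imply roughly this many paying subscribers:

```python
# Sketch using the commenter's cited estimates, not verified figures.
users = 500_000_000     # estimated total ChatGPT users
conversion_rate = 0.03  # estimated share who pay for a subscription

paying_users = int(users * conversion_rate)
print(f"Implied paying users: {paying_users:,}")  # prints: Implied paying users: 15,000,000
```

That is a large absolute number, but a thin base to cover losses if each paying user is served at a loss.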

[–] Rai@lemmy.dbzer0.com 33 points 1 week ago (7 children)

I love how every single app on that list is an app I wouldn’t touch in my life


I don't need ChatGPT etc. for work, but I've used it a few times. It is indeed a very useful product. But most of the time I can get by without it, and I kinda try to avoid using it for environmental reasons. We're boiling the oceans fast enough as it is.

[–] Eagle0110@lemmy.world 9 points 1 week ago

Exactly, the user/install counts of such products are clearly a much more accurate indicator of the success of their marketing teams than of the value users actually perceive in them lol

[–] AlecSadler@lemmy.blahaj.zone 9 points 1 week ago (4 children)

In-house at my work, we've found ChatGPT to be fairly useless too, whereas Claude and Gemini seem to reign supreme.

It seems like ChatGPT is the household name, but hardly the best performer.

[–] Deflated0ne@lemmy.world 64 points 6 days ago (35 children)

It's extremely wasteful. Inefficient to the extreme on both electricity and water. It's being used by capitalists like a scythe. Reaping millions of jobs with no support or backup plan for its victims. Just a fuck you and a quip about bootstraps.

It's cheapening all creative endeavors. Why pay a skilled artist when your shitbot can excrete some slop?

What's not to hate?

[–] NoodlePoint@lemmy.world 61 points 6 days ago (4 children)
  1. It's theft from digital artisans, as AI-generated works derive heavily from their work without even due credit.
  2. It further discourages critical thinking.
  3. It's putting even technically competent people out of work.
  4. It's grift for and by techbros.
[–] Soup@lemmy.world 19 points 6 days ago (3 children)

Number 3 is crazy too, because it's putting people out of work even when it's worse than them. The bubble bursting will have dire consequences, and if it's held together by corrupt injections of taxpayer money it'll still have awful consequences. The whole point of AI doing our jobs was to free us from labour, but instead the lack of jobs is only hurting people.

[–] Binturong@lemmy.ca 59 points 6 days ago (3 children)

The reason we hate AI is that it's not for us. It's developed and controlled by people who want to control us better. It is a tool to benefit capital, and capital always extracts from labour; AI only increases the efficiency of exploitation, because that's what it's for. If we had open-source, public AI development geared toward better delivering social services and managing programs that help people as a whole, we would like it more. Also, none of this LLM shit is actually AI; that's all branding and marketing manipulation, just a reminder.

[–] TheObviousSolution@lemmy.ca 41 points 6 days ago (1 children)

It's corporate-controlled, it's a way to manipulate our perception, it's all appearance and no substance, it's an excuse to hide incompetence behind an algorithm, it's cloud-service oriented, and its output is highly unreliable yet hard to argue against for the uninformed. Seems about right.

[–] Taleya@aussie.zone 14 points 6 days ago

And it will not be argued with. No appeal, no change of heart. Which is why anyone using it to mod or as customer service needs to be set on fire.

[–] MehBlah@lemmy.world 38 points 1 week ago (1 children)

I don't hate AI. I'm just waiting for it. It's not like this shit we have now is intelligent.

[–] Diurnambule@jlai.lu 19 points 1 week ago (3 children)

Yeah, I hate that the term is used for LLMs. When someone says AI, I picture Jarvis from Iron Man, not a text generator.

[–] FaceDeer@fedia.io 22 points 1 week ago (5 children)

The term "AI" was established in 1956 at the Dartmouth workshop and covers a very broad range of topics in computer science. It definitely encompasses large language models.

[–] Brotha_Jaufrey@lemmy.world 34 points 6 days ago (1 children)

There was a thread of people pointing out biases that exist on Lemmy, and some commenters obviously mentioned anti-AI people. Cue the superiority complex (cringe).

Some of these people actually believe UBI will become a thing for people who lose their jobs due to AI, meanwhile the billionaire class is actively REMOVING benefits for the poor to further enrich themselves.

What really gets me is when people KNOW what the hell we’re talking about, but then mention the 1% use case scenario where AI is actually useful (for STEM) and act like that’s what we’re targeting. Like no, motherfucker. We’re talking about the AI that’s RIGHT IN FRONT OF US, contributing to a future where we’re all braindead ai-slop dependent, talentless husks of human beings. Not to mention unemployed now.

[–] CancerMancer@sh.itjust.works 12 points 6 days ago (2 children)

A system is what it does. If it costs us jobs, enriches the wealthy at our expense, destroys creativity and independent thought, and suppresses wrongthink? It's a censorious authoritarian fascist pushing austerity.

Show me AI getting us UBI or creating worker-owned industry and I'll change my tune.

[–] Kinperor@lemmy.ca 29 points 5 days ago

I skimmed the article and might have missed it, but here's another strike against AI that is tremendously important: it's the ultimate accountability killer.

Did your insurance company make an obvious mistake? Oops teeehee, silly them, the AI was a bit off

Is everything going mildly OK? Of course! The AI is deciding who gets insurance and who doesn't, it knows better, so why are you questioning it?

Expect (and rage against) a lot of pernicious usage of AI for decision making, especially in areas where it shouldn't be making decisions (take Israel, for instance, which uses an AI to select ""military"" targets in Gaza).

[–] SunshineJogger@feddit.org 23 points 6 days ago* (last edited 6 days ago) (11 children)

It's actually a useful tool... if it weren't so often used for such dystopian purposes.

But it's not just AI. So many services and systems are just money grabs, hate, opinion-making, or general manipulation. There are many things I hate more about "modern" society than how LLMs are used.

I like the Lemmy mindset far more than Reddit's, and only on the AI topic are people here brainlessly focused on the tool instead of the people using the tool.

[–] Tracaine@lemmy.world 23 points 1 week ago (18 children)

I don't hate AI. AI didn't do anything. The people who use it wrong are the ones I hate. You don't take the knife that stabbed you to court; the human behind it was the problem.

[–] AmbitiousProcess@piefed.social 23 points 1 week ago (1 children)

While true to a degree, I think the fact is that AI is just much more complex than a knife, and clearly has perverse incentives, which cause people to use it "wrong" more often than not.

Sure, you can use a knife to cook just as you can use a knife to kill, and society encourages cooking while legally and morally discouraging murder. In the inverse, though, society encourages any shortcut that can get you to an end goal for the sake of profit, while not caring about personal growth or the overall state of the world if everyone takes that same shortcut, and AI technology is designed with the intent to be a shortcut rather than just a tool.

The reason people use AI in so many damaging ways is not just because it is possible for the tool to be used that way, and some people don't care about others, it's that the tool is made with the intention of offloading your cognitive burden, doing things for you, and creating what can be used as a final product.

It's like if generative AI models for image generation could only fill in colors on line art, nothing more. The scope of the harm they could cause is very limited, because you'd always require line art of the final product, which would require human labor, and thus prevent a lot of slop content from people not even willing to do that, and it would be tailored as an assistance tool for artists, rather than an entire creation tool for anyone.

Contrast that with GenAI models that can generate entire images, or even videos, which come with the explicit premise and design of creating the final content, with all line art, colors, shading, etc., from just a prompt. This directly encourages slop content, because having them only do something like coloring in lines would require a much more complex setup to prevent them from simply creating the end product all at once on their own.

We can even see how the cultural shifts around AI happened in line with how UX changed for AI tools. The original design for OpenAI's models was on "OpenAI Playground," where you'd have this large box with a bunch of sliders you could tweak, and the model would just continue the previous sentence you typed if you didn't word it like a conversation. It was designed to look like a tool, a research demo, and a mindless machine.

Then, they released ChatGPT, and made it look more like a chat, and almost immediately, people began to humanize it, treating it as its own entity, a sort of semi-conscious figure, because it was "chatting" with them in an interface similar to how they might text with a friend.

And now, ChatGPT's homepage is presented as just a simple search box, and lo and behold, suddenly the marketing has shifted to using ChatGPT not as a companion, but as a research tool (e.g. "deep research") and people have begun treating it more like a source of truth rather than just a thing talking to them.

And even in models where there is extreme complexity to how you could manipulate them, and the many use cases they could be used for, interfaces are made as sleek and minimalistic as possible, to hide away any ability you might have to influence the result with real, human creativity.

The tools might not be "evil" on their own, but when interfaces are designed the way they are, marketing speak is used how it is, and the profit motive incentivizes using them in the laziest way possible, bad outcomes are not just a side effect, they are a result by design.

[–] HarkMahlberg@kbin.earth 10 points 1 week ago

This is a fantastic description of dark patterns. Basically all the major AI products people use today are rife with them, in insidiously subtle ways. Your point about minimal UX is a great example. Just because the interface is minimal does not mean it should be, and OpenAI ditched their slider-driven interface even though it gave the user far more control over the product.

[–] RedIce25@lemmy.world 17 points 1 week ago

Leave my boy Wheatley out of this

[–] bridgeenjoyer@sh.itjust.works 13 points 5 days ago (2 children)

Ai is the smart fridge of computing.

[–] roserose56@lemmy.ca 12 points 5 days ago

It's an unfinished product with various problems, deployed on humans to develop it and make money.

It does nothing 100% right. We as humanity care about making money out of it, not about helping humanity in many ways.
