this post was submitted on 23 Mar 2025
1239 points (98.7% liked)

[–] aramis87@fedia.io 154 points 4 days ago (10 children)

The biggest problem with AI is that they're illegally harvesting everything they can possibly get their hands on to feed it, they're forcing it into places where people have explicitly said they don't want it, and they're sucking up massive amounts of energy and water to create it, undoing everyone else's progress in reducing energy use and raising prices for everyone else at the same time.

Oh, and it also hallucinates.

[–] pennomi@lemmy.world 29 points 4 days ago (3 children)

Eh I’m fine with the illegal harvesting of data. It forces the courts to revisit the question of what copyright really is and hopefully erodes the stranglehold that copyright has on modern society.

Let the companies fight each other over whether it’s okay to pirate every video on YouTube. I’m waiting.

[–] catloaf@lemm.ee 72 points 4 days ago (1 children)

So far, the result seems to be "it's okay when they do it"

[–] Electricblush@lemmy.world 33 points 4 days ago* (last edited 4 days ago) (1 children)

I would agree with you if the same companies challenging copyright (which protects the intellectual and creative work of "normies") were not also aggressively wielding copyright against the very people they are stealing from.

With the amount of corporate power tightly integrated with governmental bodies in the US (and now with DOGE dismantling oversight), I fear that whatever comes out of this is a world where humans own nothing and corporations own everything: the death of free, independent thought and creativity.

Everything you do, say and create is instantly marketable and sellable by the major corporations, and you get nothing in return.

The world needs something a lot more drastic than copyright reform at this point.

[–] naught@sh.itjust.works 12 points 4 days ago (3 children)

AI scrapers illegally harvesting data are destroying smaller and open-source projects. Copyright law is not the only victim.

https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/

[–] wewbull@feddit.uk 13 points 4 days ago

Oh, and it also hallucinates.

Oh, and people believe the hallucinations.

[–] riskable@programming.dev 11 points 4 days ago* (last edited 4 days ago) (5 children)

They're not illegally harvesting anything. Copyright law is all about distribution. As much as everyone loves to think that copying something without permission is breaking the law, the truth is that it's not. It's only when you distribute said copy that you're breaking the law (aka violating copyright).

All those old school notices (e.g. "FBI Warning") are 100% bullshit. Same for the warning the NFL spits out before games. You absolutely can record it! You just can't share it (or show it to more than a handful of people but that's a different set of laws regarding broadcasting).

I download AI (image generation) models all the time. They range in size from 2GB to 12GB. You cannot fit the petabytes of data they used to train the model into that space. No compression algorithm is that good.

The same is true for LLMs, RVC (audio models), and similar models/checkpoints. I mean, think about it: if AI companies were illegally distributing millions of copyrighted works to end users, they'd have to be including it all in those files somehow.
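
A quick back-of-the-envelope in Python makes the scale of that claim concrete. This is a rough sketch; the 1 PB figure is purely illustrative, standing in for "petabytes of training data":

```python
# Could a 12 GB checkpoint "contain" a petabyte-scale training set?
training_set_bytes = 1 * 1000**5  # 1 petabyte (illustrative)
model_bytes = 12 * 1000**3        # 12 GB, the top of the checkpoint range above

ratio = training_set_bytes / model_bytes
print(f"required compression ratio: {ratio:,.0f}:1")  # ~83,333:1

# General-purpose lossless compressors (zip, zstd, xz) manage single-digit
# ratios on mixed media, nowhere near five orders of magnitude.
```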

Instead of thinking of an AI model like a collection of copyrighted works think of it more like a rough sketch of a mashup of copyrighted works. Like if you asked a person to make a Godzilla-themed My Little Pony and what you got was that person's interpretation of what Godzilla combined with MLP would look like. Every artist would draw it differently. Every author would describe it differently. Every voice actor would voice it differently.

Those differences are the equivalent of the random seed provided to AI models. If you throw something at a random number generator enough times you could, in theory, get the works of Shakespeare. Especially if you ask it to write something just like Shakespeare. However, that doesn't mean the AI model literally copied his works. It's just making its best guess (it's literally guessing! That's how it works!).
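
To make the seed point concrete, here's a minimal sketch using the Hugging Face diffusers library; the model id and prompt are just illustrative examples:

```python
# Same prompt, different seeds -> different "interpretations", none of them
# a stored copy. Model id and prompt are illustrative examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a Godzilla-themed My Little Pony, digital art"
for seed in (1, 2, 3):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"godzilla_mlp_seed{seed}.png")  # three distinct results
```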

[–] natecox@programming.dev 10 points 4 days ago (1 children)

The problem with being like… super pedantic about definitions, is that you often miss the forest for the trees.

Illegal or not, it seems pretty obvious to me that people saying "illegal" in this thread and others probably mean "unethical"… which is pretty clearly true.

[–] riskable@programming.dev 7 points 4 days ago* (last edited 4 days ago) (2 children)

I wasn't being pedantic. It's a very fucking important distinction.

If you want to say "unethical", say that. Law is an orthogonal concept to ethics, as anyone who's studied the history of racism and sexism would understand.

Furthermore, it's not clear that what Meta did actually was unethical. Ethics is all about how human behavior impacts other humans (or other animals). If a behavior has a direct negative impact, that's considered unethical. If it has no impact, or a positive one, it's ethical behavior.

What impact did OpenAI, Meta, et al have when they downloaded these copyrighted works? They were not read by humans--they were read by machines.

From an ethics standpoint that behavior is moot. It's the ethical equivalent of trying to measure the environmental impact of a bit traveling across a wire. You can go deep down the rabbit hole and calculate the damage caused by mining copper and laying cables but that's largely a waste of time because it completely loses the narrative that copying a billion books/images/whatever into a machine somehow negatively impacts humans.

It is not the copying of this information that matters. It's the impact of the technologies they're creating with it!

That's why I think it's very important to point out that copyright violation isn't the problem in these threads. It's a path that leads nowhere.

[–] Gerudo@lemm.ee 7 points 4 days ago (5 children)

The issue I see is that they are using the copyrighted data, then making money off that data.

[–] Sl00k@programming.dev 8 points 4 days ago (1 children)

I see "AI is using up massive amounts of water" being proclaimed everywhere lately; however, I do not understand it. Do you have a source?

My understanding is that this probably stems from people misunderstanding data center cooling systems. Most of these systems are closed-loop, so everything is reused. It makes no sense to "burn off" water for cooling.

[–] lime@feddit.nu 11 points 4 days ago* (last edited 4 days ago) (2 children)

data centers are mainly air-cooled, and two innovations contribute to the water waste.

the first one was "free cooling", where instead of using a heat exchanger loop you just blow (filtered) outside air directly over the servers and out again, meaning you don't have to "get rid" of waste heat, you just blow it right out.

the second one was increasing the moisture content of the air on the way in with what are basically giant carburettors in the air stream. the wetter the air, the more heat it can take from the servers.

so basically we now have data centers designed like cloud machines.

Edit: Also, apparently the water they use becomes contaminated and they use mainly potable water. here's a paper on it
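
A rough sanity check on how much water evaporative cooling can consume, assuming (simplistically) that all waste heat is removed by evaporation; the 10 MW load is a made-up example:

```python
# Evaporating water absorbs ~2.45 MJ/kg near ambient temperature.
LATENT_HEAT_MJ_PER_KG = 2.45
it_load_mw = 10                        # hypothetical data-center IT load
heat_per_hour_mj = it_load_mw * 3600   # 1 MW for one hour = 3600 MJ

water_kg_per_hour = heat_per_hour_mj / LATENT_HEAT_MJ_PER_KG
print(f"~{water_kg_per_hour:,.0f} kg (≈ litres) of water per hour")  # ~14,700
```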

[–] Sturgist@lemmy.ca 7 points 4 days ago (6 children)

Oh, and it also hallucinates.

This is arguably a feature, depending on how you use it. I'm absolutely not an AI acolyte. It's highly problematic at every step: resource usage, and training on illegally obtained information. That wouldn't necessarily be an issue if people who aren't tech broligarchs weren't routinely getting their lives destroyed for the same thing, and if the people creating the material used for training weren't also being fucked... just capitalism things, I guess: attempts by capitalists to cut workers out of the cost/profit equation.

If you're using AI to make music, images or video... you're depending on those hallucinations.
I run a Stable Diffusion model on my laptop. It's kinda neat. I don't make things for a profit, and now that I've played with it a bit I'll likely delete it soon. I think there's room for people to locally host their own models, preferably trained with legally acquired data, to be used as a tool to assist with the creative process. The current monetisation model for AI is fuckin criminal....

[–] kibiz0r@midwest.social 34 points 4 days ago (4 children)

Idk if it’s the biggest problem, but it’s probably top three.

Other problems could include:

  • Power usage
  • Adding noise to our communication channels
  • AGI fears if you buy that (I don’t personally)
[–] pennomi@lemmy.world 18 points 4 days ago (1 children)

Dead Internet theory has never been a bigger threat. I believe that’s the number one danger - endless quantities of advertising and spam shoved down our throats from every possible direction.

[–] Fingolfinz@lemmy.world 7 points 4 days ago

We’re pretty close to it; most videos on YouTube and most websites that exist are purely there so some advertiser will pay that person for a review or recommendation.

[–] Polderviking@feddit.nl 33 points 3 days ago* (last edited 3 days ago) (11 children)

That it's controlled by a few is only a problem if you use it... my issue with it starts before that.

My biggest gripe with AI is the same problem I have with anything crypto: its out-of-control power consumption relative to the problem it solves or the purpose it serves. And, by extension, the fact that nobody with any kind of real political power is addressing this.

Here we are using recycled bags, banning straws, putting explosive refrigerant in fridges and using LED lights in everything, all in the name of the environment, while at the same time in some datacenter they are burning kWh by the bucketload generating pictures of cats in space suits.

[–] ElPussyKangaroo@lemmy.world 24 points 4 days ago

Truer words have never been said.

[–] DarkCloud@lemmy.world 20 points 4 days ago* (last edited 4 days ago) (7 children)

Like Sam Altman, who invests in Prospera, a private "start-up city" in Honduras where the board of directors picks and chooses which laws apply to them!

The switch to Techno-Feudalism is progressing far too much for my liking.

[–] MyOpinion@lemm.ee 20 points 4 days ago (3 children)

The problem with AI is that it pirates everyone’s work and then repackages it as its own, enriching the people who did not create the copyrighted work.

[–] lobut@lemmy.ca 23 points 4 days ago

I mean, it's our work, so the result should belong to the people.

[–] piecat@lemmy.world 8 points 4 days ago (3 children)

This is where "universal basic income" comes into play

[–] Grimy@lemmy.world 18 points 4 days ago (1 children)

AI has a vibrant open source scene and is definitely not owned by a few people.

A lot of the data to train it is only owned by a few people though. It is record companies and publishing houses winning their lawsuits that will lead to dystopia. It's a shame to see so many actually cheering them on.

[–] futatorius@lemm.ee 17 points 3 days ago (2 children)

Two intrinsic problems with the current implementations of AI are that they are insanely resource-intensive and require huge training sets. Neither of those is directly a problem of ownership or control, though both favor larger players with more money.

[–] finitebanjo@lemmy.world 10 points 3 days ago* (last edited 3 days ago) (1 children)

And a third intrinsic problem: the current models, even with unbounded training data, have been proven never to approach human language capability, per papers from OpenAI in 2020 and DeepMind in 2022, and also a Stanford paper which proposes that AI has no emergent behavior, only convergent behavior.
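
For reference, the DeepMind result usually cited here is the Hoffmann et al. (2022) "Chinchilla" scaling law, which models loss with an irreducible floor E that no amount of data or parameters removes. A sketch using the paper's approximate fitted constants:

```python
# L(N, D) = E + A / N**alpha + B / D**beta   (Hoffmann et al., 2022)
# Constants are the paper's approximate fitted values.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

print(loss(70e9, 1.4e12))  # Chinchilla-scale budget: ~1.94
print(loss(1e15, 1e18))    # absurdly large budget: ~1.70, never below E
```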

So yeah. Lots of problems.

[–] AbsoluteChicagoDog@lemm.ee 16 points 3 days ago

Same as always. There is no technology capitalism can't corrupt.

[–] RadicalEagle@lemmy.world 16 points 4 days ago (2 children)

I’d say the biggest problem with AI is that it’s being treated as a tool to displace workers, but there is no system in place to make sure that that “value” (I’m not convinced commercial AI has done anything valuable) created by AI is redistributed to the workers that it has displaced.

[–] protist@mander.xyz 15 points 4 days ago

Welcome to every technological advancement ever applied to the workforce

[–] TheMightyCat@lemm.ee 12 points 4 days ago (3 children)

No?

Anyone can run an AI, even on the weakest hardware; there are plenty of small open models for this.

Training an AI requires very strong hardware; however, this is not an insurmountable hurdle, as the models on Hugging Face show.
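
As a concrete example of that low end, here's a minimal sketch using the Hugging Face transformers library; the model id is just one of many small open models on the Hub, and it runs on CPU:

```python
# Run a ~0.5B-parameter open-weights model locally, no GPU required.
# The model id is one example among many small models on the Hub.
from transformers import pipeline

generate = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
result = generate("The biggest problem with AI is", max_new_tokens=40)
print(result[0]["generated_text"])
```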

[–] CodeInvasion@sh.itjust.works 7 points 4 days ago (3 children)

Yah, I'm an AI researcher, and with the weights released for DeepSeek, anybody can run an enterprise-level AI assistant. To run the full model natively, it does require ~$100k in GPUs, but if one had that hardware it could easily be fine-tuned with something like LoRA for almost any application. Then that model can be distilled and quantized to run on gaming GPUs.
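
A hedged sketch of what that LoRA step looks like with the Hugging Face peft library; the base-model id below is a placeholder, since loading the real thing would need the GPUs described above:

```python
# Attach low-rank adapters to a causal LM so only a tiny fraction of the
# weights is trained. The base-model id below is a placeholder.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("your-org/your-base-model")
config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```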

It's really not that big of a barrier. Yes, $100k in hardware is, but from a non-profit entity's perspective that is peanuts.

Also, adding a vision encoder for images to DeepSeek would not, in theory, be that difficult, for the same reason. In fact, I'm working on research right now that finds GPT-4o and o1 have similar vision capabilities, implying it's the same first-layer vision encoder, with the textual chain-of-thought tokens read by subsequent layers. (This is a very recent insight as of last week by my team, so if anyone can disprove it, I would be very interested to know!)

[–] nalinna@lemmy.world 5 points 4 days ago (4 children)

But the people with the money for the hardware are the ones training it to put more money in their pockets. That's mostly what it's being trained to do: make rich people richer.

[–] riskable@programming.dev 7 points 4 days ago (1 children)

This completely ignores all the endless (open) academic work going on in the AI space. Loads of universities have AI data centers now and are doing great research that is being published out in the open for anyone to use and duplicate.

I've downloaded several academic models, and all the commercial models and AI tools are built on that public research.

I run AI models locally on my PC and you can too.

[–] WrenFeathers@lemmy.world 11 points 4 days ago (1 children)

The biggest problem with AI is the damage it’s doing to human culture.

[–] PostiveNoise@kbin.melroy.org 11 points 4 days ago (1 children)

Either the article editing was horrible, or Eno is wildly uninformed about the world. Creation of AIs is NOT the same as social media. You can't blame a hammer for some evil person using it to hit someone in the head, and there is more to 'hammers' than just assaulting people.

[–] andros_rex@lemmy.world 5 points 4 days ago* (last edited 4 days ago) (5 children)

Eno does strike me as the kind of person who could use AI effectively as a tool for making music. I don’t think he’s team “just generate music with a single prompt and dump it onto YouTube” (AI has ruined study lo-fi channels); the stuff at the end about distortion is what he’s interested in experimenting with.

There is a possibility for something interesting and cool there (I think about how Chuck Person’s Eccojams is just short loops of random songs repeated in different ways, yet it’s an absolutely revolutionary album), even if in practice all that’s going to happen is music execs thinking they can replace songwriters and musicians with “hey Siri, generate a pop song with a catchy chorus” while talentless hacks inundate YouTube and Bandcamp with shit.

[–] captain_aggravated@sh.itjust.works 11 points 3 days ago (3 children)

For some reason the megacorps have got LLMs on the brain, and they're the worst "AI" I've seen. There are other types of AI that are actually impressive, but the "writes a thing that looks like it might be the answer" machine is way less useful than they think it is.

[–] umbraroze@lemmy.world 11 points 3 days ago (1 children)

The AI business is owned by a tiny group of technobros who have no concern for what they have to do to get the results they want ("fuck the copyright, especially fuck the natural resources"), who want to be personally seen as the saviours of humanity (despite not being the ones who invented and implemented the actual tech), and who, like all big-wig biz boys, want all the money.

I don't have problems with AI tech in principle, but I hate the current business direction and what the AI business encourages people to do and use the tech for.

[–] max_dryzen@mander.xyz 11 points 3 days ago (1 children)

The government likes concentrated ownership because then it has only a few phone calls to make if it wants its bidding done (be it censorship, manipulation, partisan political chicanery, etc.)

[–] Grandwolf319@sh.itjust.works 10 points 4 days ago (1 children)

The biggest problem with AI is that it’s the brute-force solution to complex problems.

Instead of trying to figure out the most power-efficient algorithm for artificial analysis, they just threw more data and power at it.

Besides how often it’s wrong, by definition it won’t ever be as accurate or efficient as actual thinking.

It’s the solution you come up with on the last day before the project is due, because you know it will technically pass and you’ll get a C.

[–] RememberTheApollo_@lemmy.world 9 points 4 days ago (2 children)

And those people want to use AI to extract money and to lay off people in order to make more money.

That’s “guns don’t kill people” logic.

Yeah, the AI absolutely is a problem, for those reasons, along with it being wrong a lot of the time and its ridiculous energy consumption.

[–] magic_smoke@lemmy.blahaj.zone 12 points 4 days ago

The real issues are capitalism and the lack of green energy.

If the arts were well funded, if people were given healthcare and UBI, if we had, at the very least, switched to nuclear like we should've decades ago, we wouldn't be here.

The issue isn't a piece of software.

[–] Guns0rWeD13@lemmy.world 8 points 3 days ago (2 children)

brian eno is cooler than most of you can ever hope to be.

[–] iAvicenna@lemmy.world 6 points 4 days ago

like most of the money
