this post was submitted on 18 May 2025
243 points (95.2% liked)

Ask Lemmy


Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

(page 3) 50 comments
[–] Adderbox76@lemmy.ca 6 points 2 days ago

I don't have negative sentiments towards A.I. I have negative sentiments towards the uses it's being put towards.

There are places where A.I. can be super exciting and useful; namely places where the ability to quickly and accurately process large amounts of data can be critically life-saving, e.g. air traffic control, language translation, emergency response preparedness, etc.

But right now it's being used to paint shitty pictures so that companies don't have to pay actual artists.

If I had a choice, I'd say no AI in the arts; save it for the data processing applications and leave the art to the humans.

[–] GregorGizeh@lemmy.zip 6 points 4 days ago

Wishful thinking? Models trained on illegal data get confiscated, the companies dissolved, and the CEOs and board members made liable for the damages.

Then a reframing of these BS devices from "AI" to what they actually do: brew up statistical probability amalgamations of their training data, and use them accordingly. They aren't worthless or useless, they are just being shoved into functions they cannot perform in the name of cost cutting.

Legislation

[–] MoogleMaestro@lemmy.zip 5 points 4 days ago* (last edited 4 days ago)

What I want from AI companies is really simple.

We have a thing called intellectual property in the United States of America. If I decided to make a Jellyfin instance that I charged access to, containing material I didn't own, and somehow advertised this service on the stock market as a publicly traded company, you can bet your ass that I'd have a one-way ticket to a defense seat in court.

AI companies, meanwhile, operate entirely on data they don't own and don't pay licensing for ANY of the materials used to train their neural networks. In their eyes, any image, video (TV show/movie), or book that happens to be posted on the Internet is fair game. This isn't how intellectual property works for individuals, so why exactly would a publicly traded company get an exception to this rule?

I work a lot in the world of FOSS and have a firm understanding that just because code is there doesn't make it yours. This is why we have the GPL for licensing. In fact, I'll take it a step further and say that the entirety of AI is one giant licensing nightmare, especially coding AI that isn't actually attributing license details with the code they're sampling from. (Sampling code being notably different than, say, learning from. Learning implies self-agency, and not corporate ownership.)

It feels to me that the AI bubble has largely been about pushing AI so hard and fast that people were investing in something with a dubious legal status in the US. Nobody stopped to ask whether the data that Facebook had on their website (for example; they aren't alone in this) was actually theirs to own, and what the repercussions of these types of decisions are.

You'll also note that Tech and Social Media companies are quick to take ownership of data when it benefits them (artists works, intellectual property that isn't theirs, random user posts about topics) and quick to deny ownership when it becomes legally burdensome (CSAM, illicit drug deals, etc.) to a degree that no individual would be granted. Hell, I'm not even sure a "small" tech startup would be granted this level of double-speak and hypocrisy.

With this in mind, I am simply asking that AI companies pay for the data that they're using to train AI. Additionally, laws must be in place that allow for the auditing of all materials used to train an AI, with the legal intent of verifying that all parties are paid accordingly. This is how every other business works. If this were somehow granted an exception, wouldn't it be braindead easy to run every "service" through an AI layer in order to bypass any and all copyright laws?

Otherwise, if Facebook and others want to claim that data hosted on their website is theirs to own and train off of -- well, great, but then there should be no exceptions to this, and they should not be allowed to host materials they have no ownership over. So pictures of IP they don't own, or materials they want to claim they have no ownership over, must be removed from the platform. I would much prefer the first of these two options, however.

edit: I should note that AI for educational purposes could be granted an exception to this under fair use (for universities), but it would still be required to cite all sources used to produce the works in question (which is normal in academics in the first place) and would also come with some strict stipulations on using this AI as a "product" (it would basically be moot, much like some research papers). This is basically the furthest I'm willing to go for these companies.

[–] Rhaedas@fedia.io 5 points 4 days ago

I think Meta and others went open with their models as firewall protection against legal action over their blatant stealing of people's work to train with. If the models had stayed commercial and controlled within the company, they could be (probably still wouldn't be, but could be) forced to shut down or start over properly. But it's far too late now, since it's everywhere there is a GPU running, even if models don't progress past their current state.

That being said, not much is getting done about the safety factors. Yes, they are only LLMs and not AGI, but there's commonality in regards to not being sure what's going on inside the box and whether it's really doing what it's told to do. Now is the time for boundaries and research, because once something happens (LLM or AGI) it's too late. So what do I want to see happen? Heavy regulation and transparency on the leading edge of development. And stop the madness of more compute being the only solution, with its environmental effects. It might be the only solution, but companies are going that way because it's the easiest way to throw money at a problem and reap profits, which is all they care about.

[–] Zwuzelmaus@feddit.org 5 points 4 days ago

I want lawmakers to require proof that an AI is adhering to all laws, putting the burden of proof on the AI makers and users. And to require the ability to analyze all of an AI's actions regarding this question in court cases.

This would hopefully lead to the development of better AIs that are more transparent, and that are able to adhere to laws at all, because the current ones lack this ability.

[–] Soapbox1858@lemm.ee 5 points 3 days ago

I think many comments have already nailed it.

I would add that while I hate the use of LLMs to completely generate artwork, I don't have a problem with AI-enhanced editing tools. For example, AI-powered noise reduction for high-ISO photography is very useful. It's not creating the content, just helping fix a problem. Same with AI-enhanced retouching, to an extent. If the tech can improve and simplify the process of removing an errant power line, dust speck, or pimple in a photograph, then it's great. These use cases help streamline otherwise tedious bullshit work that photographers usually don't want to do.

I also think it's great hearing about how the tech is improving scientific endeavors, helping to spot cancers, etc. As long as it is done ethically, these are great uses for it.

[–] JTskulk@lemmy.world 5 points 4 days ago

2 chicks at the same time.

[–] SinningStromgald@lemmy.world 4 points 4 days ago* (last edited 4 days ago)

Ideally the whole house of cards crumbles and AI goes the way of 3D TVs, for now. The world as it is now is not ready for AGI. We would quickly end up in an "I Have No Mouth, and I Must Scream" scenario.

Otherwise, what everyone else has posted are good starting points. I would just add that any data centers used for AI have to be powered 100% by renewable energy.

[–] sntx@lemm.ee 4 points 4 days ago
[–] hogmomma@lemmy.world 4 points 4 days ago

I'd like to see it used for medicine.

[–] rekabis@lemmy.ca 4 points 3 days ago* (last edited 3 days ago) (1 children)

AI that are forced to serve up a response (almost all publicly available AI) resort to hallucinating gratuitously in order to conform to that mandate. That is, they do everything they can to provide some sort of response/answer, even if it's wildly wrong.

Other AI that do not have this constraint (medical imaging diagnosis, for example) do not hallucinate in the least, and provide near-100% accurate responses, because they are not being forced to provide a response regardless of the viability of the answer.

I don’t avoid AI because it is bad.

I avoid AI because it is so shackled that it has no choice but to hallucinate gratuitously, and make far more work for me than if I just did everything myself the long and hard way.

[–] Tessellecta@feddit.nl 4 points 2 days ago

I don't think that the forcing of an answer is the source of the problem you're describing. The source actually lies in the problems that the AI is taught to solve and the data it is provided to solve the problem.

In the case of medical image analysis, the problems are always very narrowly defined (e.g. segmenting the liver from an MRI image of scanner xyz made with protocol abc) and the training data is of very high quality. If the model will be used in the clinic, you also need to prove how well it works.

For modern AI chatbots, the problem is: add one word at a time to the end of a text that starts with a system prompt; the data provided is whatever they could get off the internet; and the quality control is: if it sounds good, it is good.

Comparing the two problems, it is easy to see why AI chatbots are prone to hallucination.

The actual power of the LLMs on the market is not as a glorified Google, but as foundation models that are used as pretraining for actual problems people want to solve.
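To make the "one word at a time" point concrete, here is a toy sketch of that loop. A tiny bigram lookup table stands in for the neural network, and the corpus, function names, and prompt are made up for illustration; a real chatbot runs the same kind of loop over sub-word tokens with a large trained model.

```python
import random
from collections import defaultdict

# Stand-in "training data": a tiny made-up corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": record which words have followed which.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(prompt, length=8):
    """Repeatedly append one more word, chosen from whatever followed the last word."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:  # the "model" has never seen anything follow this word
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the cat"))  # e.g. "the cat sat on the rug"
```

The output only has to sound plausible; nothing in the loop checks whether it is true, which is exactly the quality-control gap described above.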

[–] mrodri89@lemmy.zip 4 points 3 days ago

I'm not a fan of AI because I think the premise of analyzing and absorbing work without consent from creators is, at its core, bullshit.

I also think that AI is another step toward more efficient government spying.

Since AI learns from human content without consent, I think the government should figure out how to socialize the profits. (Probably will never happen.)

Also, they should regulate how data is stored, and ensure videos are clearly labeled if made with AI.

They also have to be careful to protect victims from revenge porn or general content, and make sure people are held accountable.

[–] ch00f@lemmy.world 4 points 4 days ago

I want everyone to realize that the only reason AI seems intelligent is because it speaks English.

[–] njm1314@lemmy.world 4 points 4 days ago

Just mass public hangings of tech bros.

[–] RandomVideos@programming.dev 4 points 3 days ago* (last edited 3 days ago)

It would be amazing if chat and text generation suddenly disappeared, but that's not going to happen.

It would be cool to make it illegal not to mark AI-generated images or text, and to not have them forced on people.

[–] mesamunefire@piefed.social 4 points 3 days ago

I think it's important to figure out what you mean by AI.

I'm thinking a majority of people here are talking about LLMs, BUT there are other AIs that have been quietly worked on and are finally making huge strides.

AI that can produce songs (Suno) and replicate voices. AI that can reproduce a face from one picture (there are a couple of GitHub repos out there). When it comes to the above, we are dealing with copyright-infringement AI, specifically designed and trained on other people's work. If we really do have laws coming into place that will deregulate AI, then I say we go all in: open source everything (or as much as possible), make it so it's trained on all company-specific info, and let anyone run it. I have a feeling we can't put the genie back in the bottle.

If we're talking pie-in-the-sky solutions, I would like a new iteration of the web. One that specifically makes it difficult or outright impossible to pull into AI. Something like onion routing, where it only accepts real nodes/people when ingesting the data.

[–] Tahl_eN@lemmy.world 4 points 4 days ago (1 children)

I'm not super bothered by the copyright issue - the copyright system is barely serving people these days anyway. Blow it up.

I'm deeply troubled by the obscene power use. It might be worth it if it was a good tool. But it's not.

I haven't gone out of my way to use AI anything, but it's been stuffed into everything. And it's truly bad at its job. AI is like a precocious 8-year-old, butting into every conversation, and it gives the right answer at about the rate an 8-year-old does. When I do a web search, I then need to do another one to check the AI's answer, or scroll down a page to get past the AI answers to real sources. When someone uses it to summarize a meeting, I then need to read through that summary to make sure the notes are accurate, and it doesn't know to ask when it doesn't understand something, like a proper secretary would. When I go looking for reference images, I have to check to make sure they're real and not hallucinations.

It gets in my way and slows me down. It needed at least another decade of development before being deployed at all, never mind at the scale it has, and it needs to be opt-in, not crammed into everything. And until it can be relied on, it shouldn't be allowed to suck down as much electricity as it does.


I'm not against AI itself—it's the hype and misinformation that frustrate me. LLMs aren't true AI, or at least not AGI as the meaning of "AI" has drifted, but they've been branded that way to fuel tech and stock market bubbles. While LLMs can be useful, they're still early-stage software, causing harm through misinformation and widespread copyright issues. They're being misapplied to tasks like search, leading to poor results and damaging the reputation of AI.

Real AI lies in advanced neural networks, which are still a long way off. I wish tech companies would stop misleading the public, but the bubble will burst eventually—though not before doing considerable harm.

[–] yarr@feddit.nl 3 points 2 days ago* (last edited 2 days ago)

My favorite one that I've heard is: "ban it". This has a lot of problems... let's say that despite the billions of dollars of lobbyists already telling Congress every day what a great thing AI is, you manage to make AI, or however you define the latest scary tech, punishable by death in the USA.

Then what happens? There are already AI companies in other countries busily working away. Even the folks who are very against AI would at least recognize some limited use cases. Over time, the USA gets left behind in whatever the end result of AI's effect on the economy turns out to be.

If you want to see a parallel to this, check out Japan's reaction when the rest of the world came knocking on their doorstep in the 1600s. All that scary technology, banned. What did it get them? Stalled out development for quite a while, and the rest of the world didn't sit still either. A temporary reprieve.

The more aggressive of you will say, this is no problem, let's push for a worldwide ban. Good luck with that. For almost any issue on Earth, I'm not sure we have total alignment. The companies displaced from the USA would end up in some other country and be even more determined not to get shut down.

AI is here. It's like electricity. You can choose not to wire your house, but that just leads to you living in a cabin in the woods while your neighbors have running water, heat, air conditioning, and so on.

The question shouldn't be "how do we get rid of it?" or "how do we live without it?" It should be "how can we co-exist with it?" and "what's the right balance?" The genie isn't going back in the bottle, no matter how hard you wish.

[–] Sunflier@lemmy.world 3 points 3 days ago* (last edited 3 days ago)

Disable all AI being on by default. Offer me a way to opt into having AI, but don't shove it down my throat by default. I don't want Google AI listening in on my calls without having the option to disable it. I am an attorney, and many of my calls are privileged. Having a third party listen in could cause that privilege to be lost.

I want AI that is stupid. I live in a capitalist plutocracy that is replacing workers with AI as fast and hard as possible without having UBI. I live in the United States, which doesn't even have universal health insurance, so UBI is fucked. This sets up an environment where a lot of people will be unemployable through no fault of their own because of AI. Thus, without UBI, we're back to starvation and Hoovervilles. But, fuck us. They got theirs.

[–] PlzGivHugs@sh.itjust.works 2 points 4 days ago

I think two main things need to happen: increased transparency from AI companies, and limits on use of training data.

In regards to transparency, a lot of current AI companies hide information about how their models are designed, produced, weighted, and used. This causes, in my opinion, many of the worst effects of current AI. Lack of transparency around training methods means we don't know how much power AI training uses. Lack of transparency in training data makes it easier for the companies to hide their piracy. Lack of transparency in weighting and use means that many of the big AI companies can abuse their position to push agendas, such as Elon Musk's manipulation of Grok and the CCP's use of DeepSeek. Hell, if issues like these were more visible, it's entirely possible AI companies wouldn't have as much investment, and thus power, as they do now.

In terms of limits on training data, I think a lot of the backlash is exaggerated. AI basically takes sources and averages them; while there is little creativity, the work is derivative and bland, not a direct copy. That said, if the works used for training were pirated, as many were, there obviously needs to be action taken. Similarly, there needs to be some way for artists to protect or sell their work. From my understanding, they technically have the legal means to do so, but as it stands, enforcement is effectively impossible and non-existent.
