Shutting these "AI"s down. The once out for the public dont help anyone. They do more damage then they are worth.
I'm not anti-AI, but I wish the people who are would describe what they're upset about a bit more eloquently and decipherably. The environmental impact I completely agree with: making every Google search run a half-cooked beta LLM isn't the best use of the world's resources. But every time someone gets on their soapbox in the comments, it's like they don't even know the first thing about the math behind it. Just figure out what you're mad about before you start an argument; otherwise it comes across as childish to me.
It feels like we're being delivered the sort of stuff we'd consider flim-flam if a human did it, but we're lapping it up because the machine did it.
"Sure, boss, let me write this code (wrong) or outline this article (in a way that loses key meaning)!" If you hired a human who acted like that, we'd have them on an improvement plan in days and sacked in weeks.
If we're talking realm of pure fantasy: destroy it.
I want you to understand this is not my sentiment about AI as a whole: I get why the idea is appealing, how it could be useful, and why in some ways it may seem inevitable.
But a lot of sci-fi doesn't really address the run-up to AI; in fact, a lot of it just kind of assumes there'll be an awakening one day. What we have right now is an unholy, squawking abomination that has been marketed to nefarious ends and never should have been trusted as far as it has. Think real hard about how it's corporations pushing the development, not academia.
Put it out of its misery.
How do you "destroy it"? I mean, you can download an open source model to your computer right now in like five minutes. It's not Skynet, you can't just physically blow it up.
OP asked what people wanted to happen, and even listed "destroy gen AI" as an option. I get that it's not realistically feasible, but it's certainly within the realm of options provided for the discussion. No need to police their pie-in-the-sky dream; I'm sure they realize it's not realistic.
Ruin the marketing. I want them to stop using the catch-all term "AI" and use the appropriate terminology: narrow AI. It needs input, so let's stop making up fantasies about AI; in truth it's bullshit.
The most popular models used online need to include citations for everything. AI can be used to automate some white-collar/knowledge work, but it needs to be scrutinized heavily by independent thinkers when it's used to try to predict trends and future events.
As always, schools need to be better at teaching critical thinking, epistemology, and emotional intelligence far earlier than we currently do, and AI shows that rote subject matter is a dated way to learn.
When artists create art, there should be some standardized seal, signature, or verification that the artist did not use AI or used it only supplementally on the side. This would work on the honor system and just constitute a scandal if the artist is eventually outed as having faked their craft. (Think finding out the handmade furniture you bought was actually made in a Vietnamese factory. The seller should merely have their reputation tarnished.)
Overall I see AI as the next step in search engine synthesis, info just needs to be properly credited to the original researchers and verified against other sources by the user. No different than Google or Wikipedia.
I'm generally pro-AI, but I agree with the argument that having big tech hoard this technology is the real problem.
The solution is easy and right there in front of everyone's eyes: force open source on everything. All datasets, models, model weights, and so on have to be fully transparent. Maybe even hardware firmware should be open source.
This would literally solve every single problem people have, other than energy use, which is a fake problem to begin with.
Gen AI should be an optional tool to help us improve our work and life, not an unavoidable subscription service that makes it all worse and makes us dumber in the process.
Reduce global resource consumption with the goal of eliminating fossil fuel use. Burning nat gas to make fake pictures that everyone hates is just the worst.
I think the AI that helps us find/diagnose/treat diseases is great, and the model should be open to everyone in the medical field. (Open to everyone, period, I feel would be easily abused by scammers and cause a lot of unnecessary harm; essentially, if you can't validate what it finds, you shouldn't be using it.)
I'm not a fan of these next-gen IRC chatbots, with companies hammering sites all over the web to siphon up data they shouldn't be allowed to, and then pushing these bots into EVERYTHING! And like a few others mentioned, if their bots have been trained on unauthorized data sets, they should be forced to open-source their models for the good of the people (since that is the BS reason OpenAI has given for bending and breaking the rules).
I want all of the CEOs and executives that are forcing shitty AI into everything to get pancreatic cancer and die painfully in a short period of time.
Then I want all AI that is offered commercially or in commercial products to be required to verify their training data, and to be severely punished for misusing private and personal data. Copyright violations need to be punished severely, and using copyrighted works for AI training counts.
AI needs to be limited to optional products trained with properly sourced data if it is going to be used commercially. Individual implementations and use for science is perfectly fine as long as the source data is either in the public domain or from an ethically collected data set.
(Ignoring all the stolen work to train the models for a minute)
It's got its uses and potential: things like translation, writing prompts, or research assistance.
But then there are all the products that force it into places that clearly do not need it, solving problems that could be solved by two or three steps of logic.
The failed attempts at replacing jobs, screening resumes, or monitoring employees are terrible.
Lastly the AI relationships are not good.
We're making the same mistake with AI as we did with cars: not planning for the human future.
Cars were designed to atrophy muscles, and they polluted urban planning and the air.
AI is being designed to atrophy brains, and it pollutes the air, the internet, public discourse, and more to come.
We should change course towards AI that makes people smarter, not dumber: AI-aided collaborative thinking.
https://www.quora.com/Why-is-it-better-to-work-on-intelligence-augmentation-rather-than-artificial-intelligence/answer/Harri-K-Hiltunen
Not destroying but being real about it.
It's flawed as hell and feels like a hype cycle to bail out big tech companies, while the end user gets a shitty product. Yet companies keep shoving it into apps and everything, even if it degrades the user experience (like Duolingo).
Also, yes, there need to be laws for that. I mean, if I download something illegally, I will be put behind bars and can kiss my life goodbye. If a megacorp does the same to train their LLM, "it's for the greater good". That's bullshit.
My fantasy is for "everyone" to realize there's absolutely nothing "intelligent" about current AI. There is no rationalization. It is incapable of understanding & learning.
ChatGPT et al are search engines. That's it. It's just a better Google. Useful in certain situations, but pretending it's "intelligent" is outright harmful. It's harmful to people who don't understand that & take its answers at face value. It's harmful to business owners who buy into the smoke & mirrors. It's harmful to the future of real AI.
It's a fad. Like NFTs and Bitcoin. It'll have its die-hard fans, but we're already seeing the cracks - it's absorbed everything humanity's published online & it still can't write a list of real book recommendations. Kids using it to "vibe code" are learning how useless it is for real projects.
More regulation, supervised development, and laws requiring training data to be consensually sourced.
Honestly, at this point I'd settle for just "AI cannot be bundled with anything else."
Neither my cell phone nor TV nor thermostat should ever have a built-in LLM "feature" that sends data to an unknown black box on somebody else's server.
(I'm all down for killing with fire and debt any model built on stolen inputs, too. OpenAI should be put in a hole so deep that they're neighbors with Napster.)
I’d like for it to be forgotten, because it’s not AI.
Energy consumption limit. Every AI product has a consumption limit of X GJ. After that, the server just shuts off.
The limit should be high enough not to discourage research that would make generative AI more energy efficient, but low enough that commercial users would pay a heavy price for wasteful energy usage.
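As a thought experiment, here's a hypothetical sketch of such a hard cap. The budget figure, the per-request meter reading, and the model call are all made up for illustration; real enforcement would read an actual power meter.

```python
# Toy enforcement of a hard energy budget: once the cap is hit, the service
# refuses to run. All numbers here are illustrative, not real measurements.
ENERGY_BUDGET_JOULES = 5e9   # the "X GJ" cap; 5 GJ chosen arbitrarily
consumed_joules = 0.0

def read_meter_joules() -> float:
    """Stand-in for a real per-request energy measurement."""
    return 250.0  # pretend each request costs 250 J

def handle_request(prompt: str) -> str:
    global consumed_joules
    cost = read_meter_joules()
    if consumed_joules + cost > ENERGY_BUDGET_JOULES:
        # "the server just shuts off"
        raise SystemExit("energy budget exhausted; shutting down")
    consumed_joules += cost
    return f"(model output for: {prompt!r})"  # placeholder for the actual model

print(handle_request("hello"))
```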
Additionally, data usage consent for generative AI should be opt-in. Not opt-out.
force companies to pay for the data they scraped from copyrighted works. break up the largest tech conglomerates so they cannot leverage their monopolistic market positions to further their goals, which includes the investment in A.I. products.
ultimately, replace the free market (cringe) with a centralized computer system to manage resource needs of a socialist state
also force Elon Musk to receive a neuralink implant and force him to hallucinate the ghostly impressions of spongebob squarepants laughing for the rest of his life (in prison)
Firings and jail time.
In lieu of that, high fines and firings.
My biggest issue with AI is that I think it's going to allow a massive wealth transfer from laborers to capital owners.
I think AI will allow many jobs to become easier and more productive, and will even eliminate some jobs. I don't think this is a bad thing; that's what technology is. It should be a good thing, in fact, because it will increase the overall productivity of society. The problem is that when new technology increases worker productivity, most of the benefits go to capital owners rather than said workers, even when the workers' labor contributed to the technological improvements, directly or indirectly.
What's worse, in the case of AI specifically, its functionality relies on being trained on enormous amounts of content that was not produced by the owners of the AI. AI companies are, in a sense, harvesting society's collective knowledge for free to sell it back to us.
IMO AI development should continue, but be owned collectively and developed in a way that genuinely benefits society. Not sure exactly what that would look like. Maybe a sort of light universal basic income where all citizens own stock in publicly run companies that provide AI and receive dividends. Or profits are used for social services. Or maybe it provides AI services for free but is publicly run and fulfills prosocial goals. But I definitely don't think it's something that should be primarily driven by private, for-profit companies.
AI overall? Generally pro. LLMs and generative AI, though, I'm "against", mostly meaning that I think it's misused.
Not sure what the answer is, tbh. Reining in corporations would be good.
I do think we as a society need to radically alter our relationship to IP law. Right now we 'enforce' IP law in a way that benefits corporations but not individuals. We should either get rid of IP law altogether (which would protect people from corporations abusing the laws) or we should enforce it more strictly, and actually hold corporations accountable for breaking it.
If we fixed that, I think gen AI would be fine. But we aren't doing that.
AI models produced from copyrighted training data should need a license from the copyright holder to train using their data. This means most of the wild west land grab that is going on will not be legal. In general I'm not a huge fan of the current state of copyright at all, but that would put it on an even business footing with everything else.
I've got no idea how to fix the screeds of slop that are polluting search of all kinds now. These sorts of problems (along the lines of email spam) seem to be absurdly hard to fix outside of walled gardens.
See, I'm troubled by that one because it sounds good on paper, but in practice that means that Google and Meta, who can certainly build licenses into their EULAs trivially, would become the only government-sanctioned entities who can train AI. Established corpos were actively lobbying for similar measures early on.
And of course, good luck getting China to give a crap; in that scenario, that might actually be the better outcome.
Like you, I think copyright is broken past all functionality at this point. I would very much welcome an entire reconceptualization of it to support not just specific AI regulation but regulation of big data, fair use and user generated content. We need a completely different framework at this point.
Legislation.
Make it unprofitable for the companies peddling it: pass laws that curtail its use, sue them for copyright infringement, socially shame and shit on AI-generated anything on social media and in person, and vote with your money to avoid anything related to it.
Wishful thinking? Models trained on illegal data get confiscated, the companies dissolved, and the CEOs and board members made liable for the damages.
Then a reframing of these BS devices from "AI" to what they actually do, brew up statistical probability amalgamations of their training data, and then use them accordingly. They aren't worthless or useless; they are just being shoved into functions they cannot perform, in the name of cost cutting. (A toy illustration of that framing is sketched below.)
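To make the "statistical amalgamation" framing concrete, here's a toy sketch: a bigram model that can only re-sample patterns from its training text. The corpus and everything else here is invented for illustration; real LLMs are vastly larger, but the principle of sampling likely continuations is the same.

```python
# A toy bigram "language model": it counts which word follows which in its
# training text, then generates by sampling from those counts. It cannot say
# anything its corpus didn't already contain.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

word = "the"
out = [word]
for _ in range(6):
    if word not in following:
        break  # dead end: the corpus never continues past this word
    word = random.choice(following[word])
    out.append(word)
print(" ".join(out))
```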
I would likely have different thoughts on it if I (and others) were able to consent to our data being used to train it, or to consent to having it at all, rather than it just showing up in an unwanted update.