How dare people not like the automatic bullshit machine pushed down their throat...
Seriously, generative AI's accomplishments are:
Yes. AI can be used for spam, job cuts, and creepy surveillance, no argument there, but pretending it’s nothing more than a corporate scam machine is just lazy cynicism. This same “automatic BS” is helping discover life-saving drugs, diagnosing cancers earlier than some doctors, giving deaf people real-time conversations through instant transcription, translating entire languages on the fly, mapping wildfire and flood zones so first responders know exactly where to go, accelerating scientific breakthroughs from climate modeling to space exploration, and cutting out the kind of tedious grunt work that wastes millions of human hours a day. The problem isn’t that AI exists, it’s that a lot of powerful people use it selfishly and irresponsibly. Blaming the tech instead of demanding better governance is like blaming the printing press for bad propaganda.
Not the same kind of AI. At all. Generative AI vendors love this motte-and-bailey.
That's not a motte-and-bailey.
Aren't those different types of AI?
I don't think anyone hating AI is referring to the code that makes enemies move, or that sorts things into categories.
LLMs aren't artificial intelligence in any way.
They're extremely complex and very smart prediction engines.
The term artificial intelligence was co-opted and hijacked for marketing purposes a long time ago.
The kind of AI people generally expect to see is a fully autonomous, self-aware machine.
Anyone who has used an LLM for any extended period of time will know immediately that they're not that smart; even ChatGPT, arguably the smartest of them all, is still highly incapable.
What we do have to come to terms with is that these LLMs do have applications: they have a function, they are useful, and they can be used in a deleterious way, just like any technology.
If a program that can predict prices for video games based on reviews and how many people bought it could be called AI long before 2021, LLMs can be too.
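For context, that kind of pre-2021 "AI" was often nothing more than a simple regression. A toy sketch of the idea (the data points and the scikit-learn usage are purely illustrative, not taken from the comment):

```python
# Toy illustration of the sort of model that was marketed as "AI" long before LLMs:
# a plain regression predicting a game's launch price from review score and sales.
# All data points below are made up for illustration only.
from sklearn.linear_model import LinearRegression

# features: [review_score, copies_sold_in_thousands]
X = [[72, 120], [85, 540], [90, 900], [60, 45], [95, 1500]]
y = [20, 40, 60, 10, 70]  # launch price in dollars (toy values)

model = LinearRegression().fit(X, y)
print(round(float(model.predict([[88, 700]])[0]), 2))  # predicted price for a new game
```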
One could have said many of the same things about a lot of new technologies.
The internet, nuclear power, rockets, airplanes, etc.
Any new disruptive technology comes with drawbacks and can be used for evil.
But that doesn't mean it's all bad, or that it doesn't have its uses.
Give me one real-world use that is worth the downsides.
As a dev I can already tell you it's not coding or anything around code. Projects get spammed with low-quality, nonsensical bug reports, AI-generated code rarely works and doesn't integrate well (on top of pushing all the work onto the reviewer, which is already the hardest part of coding), and AI-written documentation is riddled with errors and is barely legible.
And even if AI were remotely good at something, it would still be the equivalent of a microwave trying to replace an entire restaurant kitchen.
I can run a small LLM locally that I can talk to using my voice to turn certain lights on and off, set reminders for me, play music, etc.
There are MANY examples of LLMs being useful. They have their drawbacks, just like any big technology, but saying there are no uses that are worth it is ridiculous.
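For what it's worth, here is a minimal sketch of what such a local setup could look like, assuming an Ollama-style server on localhost:11434; the endpoint, model name, and action schema are illustrative assumptions, not details from the comment:

```python
# Minimal sketch: route a transcribed voice command through a local LLM
# and map it to a home-automation action. Endpoint and model are assumptions.
import json
import urllib.request

LLM_URL = "http://localhost:11434/api/generate"  # assumed Ollama-style local endpoint
MODEL = "llama3.2:3b"                            # hypothetical small local model

def ask_llm(prompt: str) -> str:
    # Send a single non-streaming generation request to the local server.
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        LLM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def handle_command(transcript: str) -> dict:
    # Ask the model to map free-form speech onto a tiny fixed action schema.
    prompt = (
        "Map the user's request to JSON with keys 'action' "
        "('light_on', 'light_off', 'none') and 'room'. Reply with JSON only.\n"
        f"Request: {transcript}"
    )
    return json.loads(ask_llm(prompt))

if __name__ == "__main__":
    print(handle_command("it's getting dark in the living room"))
    # expected shape: {"action": "light_on", "room": "living room"}
```

The speech-to-text step and the actual light switching are left out; the point is only that the language-understanding part can run entirely on local hardware.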
That's like saying "asbestos has some good uses, so we should just give every household a big pile of it without any training or PPE"
Or "we know leaded gas harms people, but we think it has some good uses so we're going to let everyone access it for basically free until someone eventually figures out what those uses might be"
It doesn't matter that it has some good uses and that later we went "oops, maybe let's only give it to experts to use". The harm has already been done by eager supporters, intentional or not.
No, that is completely not what they are saying. Stop arguing against strawmen.
It's not a strawman, it's hyperbole.
There are serious known harms and we suspect that there are more.
There are known ethical issues, and there may be more.
There are few known benefits, but we suspect that there are more.
Do we just knowingly subject untrained people to harm to see if there are a few more positive use cases, and to make shareholders a bit more money?
How does their argument differ from that?
But we could do voice assistants well before LLMs (look at Siri) and without setting everything on fire.
And seriously, I asked for something that's worth all the downsides and you bring up Clippy 2.0???
Where are the MANY examples? Why are LLM/genAI companies burning money? Where are the companies making use of the supposedly many uses?
I genuinely want to understand.
You asked for one example, I gave you one.
It's not just voice; I can ask it complex questions and it can understand context and turn on lights or close blinds based on that context.
I find it very useful, with no real drawbacks.
I asked for an example that makes up for the downsides everyone has to pay.
So, no! A better shutter puller or a maybe marginally better voice assistant is not gonna cut it. And again, that's stuff Siri and home-automation tools were able to do since 2014 at a minimum.
Siri has privacy issues, and only works when connected to the internet.
What are the downsides of me running my own local LLM? I've named many benefits, privacy being one of them.
Voice recognition is not limited to Siri; I just used the best-known example. Local assistants have existed since long before LLMs and didn't require this many resources. You are once again moving the goalposts. Find one real-world use that offsets the downsides.
I've already mentioned drafting documents and translating documents.
Once again, it's not enough to justify the cost.
LLM translations are hazardous at best, and we already have a lot of translation tools. Templating systems are older than I am, and even so, no one in their right mind should trust a non-deterministic tool to draft documents.
That's simply not true. What translation tool is better at translating English to Afrikaans?
I'm just picking a difficult language; I am Afrikaans, look at my post history.
Are you going to nitpick a point each time rather than addressing my argument as a whole?
I'm French, and I can tell you that in a software development context AI is worse than existing tools like DeepL. Maybe it works better in Afrikaans, and if that's the case, good, we finally have a use case! Sadly, being OK at translating things is not what those models are sold on, and even if it were, it's still not worth the cost.
I'll stop responding; no one is reading this far down a comment chain, and you are not responding to my arguments in a way that's productive.
DeepL has the same issues that an LLM has when it comes to translating.
You're still sending all your data to some server. It might be a bit more efficient than an LLM, not sure by how much, but it's essentially the same thing.
DeepL is essentially just an LLM specifically tuned for translation.
The fact that that was the best you could come up with is far more damning than not even having one.
I'm keeping it simple: that's a solid use case, and it's what millions of people use ChatGPT for.
Neat trick, but it's not worth the headache of setup when you can do all that by getting off your chair and pushing buttons. Hell, you don't even have to get off your chair! A cellphone can do all that already, and you don't even need voice commands to do it.
Are you able to give any actual examples of a good use of an LLM?
Like it or not, that is an actual example.
I can lie in my bed and turn off the lights without touching my phone, or turn on certain music without touching my phone.
I could ask if I remembered to lock the front door etc.
But okay, I'll play your game, let's pretend that doesn't count.
I can use my local AI to draft documents or emails, speeding up the process a lot.
Or I can use it to translate.
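To make the translation point concrete, here's a rough sketch of fully offline translation with a small translation-tuned transformer (the same broad family of tech a service like DeepL uses); the Hugging Face checkpoint name is an assumed example, not something named in the thread:

```python
# Rough sketch: local, offline translation with a small translation-tuned model.
# The checkpoint is an assumed example; any OPUS-MT model for your language pair works.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-af")

result = translator("Please lock the front door before you leave.")
print(result[0]["translation_text"])  # everything runs on local hardware
```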
If you want to live your life like that, go for it, that's your choice. But I don't think those applications are worth the cost of running an LLM. To be honest, I find it frivolous.
I'm not against LLMs as a concept, but the way they get shoved into everything without thought and without an "AI"-free option is absurd. There are good reasons why people have a knee-jerk anti-AI reaction, even if they can't articulate it themselves.
It's not expensive for me to run a local LLM; I just use the hardware I'm already using for gaming. Electricity is cheap, and most people with a gaming PC probably use more electricity gaming than they would running their own LLM and asking it some questions.
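A back-of-envelope version of that electricity claim, using assumed (not measured) figures for GPU draw and power price:

```python
# Back-of-envelope comparison; both inputs are assumptions for illustration only.
gpu_draw_watts = 300      # assumed GPU power draw under load
price_per_kwh = 0.30      # assumed electricity price per kWh

def electricity_cost(hours: float) -> float:
    return gpu_draw_watts / 1000 * hours * price_per_kwh

print(f"1 h of local LLM queries: ~{electricity_cost(1):.2f}")  # ~0.09
print(f"3 h of gaming:            ~{electricity_cost(3):.2f}")  # ~0.27
```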
I'm also against shoving AI into everything and not making it opt-in. I'm also worried about privacy, concentration of power, etc.
But just outright saying LLMs are bad is ridiculous.
And saying there is no good reason to use them is ridiculous. Can we stop doing that?
I don't personally know the monetary cost of running one of these things locally, and I should be more informed before I make sweeping statements.
Then we are on the same page.
I didn't say that; in fact, I said that I don't have a problem with them as a concept. Go back to the previous point for a reason why someone might have an instant dislike of "AI".
I also didn't say that. I just said your examples weren't good uses of it. I happen to think that there are very good applications for this technology, but none of those are publicly available GenAI slop and soulless automation systems/assistants that are really just corporate spyware to collect advertising data.
If you want a "smart home" with voice commands because it make you feel like Tony Stark talking to Jarvis go right ahead, but don't pretend that your locally run LLM is what people are talking about when they level criticism against "AI" (or even if they just say 'AI bad')
You didn't say that, no, but people here do say those things.
It's basically how this thread started.
I'm on the same page with a lot of the hate for AI and fears of it, but let's not pretend it's just all bad.
Just because it isn't all bad doesn't mean that a significant portion of it isn't, in fact, bad.
Yes, but IMO there is still an overreaction to it on Lemmy.
Let's not pretend LLMs are the devil.
Of those, only the internet was turned loose on an unsuspecting public, and people had decades of the faucet slowly being opened to prepare.
Can you imagine if, after WW2, Wernher von Braun came to the USA and then just... gave every man, woman, and child a rocket, with no training? Good and evil wouldn't even come into it; it'd be chaos and destruction.
Imagine if every household got a nuclear reactor to power it, but none of the people in the household got any training in how to care for it.
It's not a matter of good and evil, it's a matter of harm.
The internet kind of was turned loose on an unsuspecting public. Social media has caused, and still is causing, a lot of harm.
Did you really compare every household having a nuclear reactor with people having access to AI?
How is that even remotely a fair comparison?
To me the Internet being released on people and AI being released on people is more of a fair comparison.
Both can do lots of harm and good, both will probably cost a lot of people their jobs etc.
You know that the public got trickle-fed the internet for decades before it was ubiquitous in everyone's house, and then another decade before it was ubiquitous in everyone's pocket. People had literal decades to learn how to protect themselves and for the job market to adjust. During that time, there was lots of research and information on how to protect yourself, and although regulation mostly failed to do anything, the learning material was adapted for all ages and was promoted.
Meanwhile, LLMs are at least as impactful as the internet, and were released to the public almost without notice. Research on their effects is being done now that it's already too late, and the public doesn't have any tools to protect itself. What meager material on appropriate use exists hasn't been well researched or adapted to all ages, when it isn't being presented as "the insane thoughts of doomer Luddites, not to be taken seriously" by AI supporters.
The point is that people are being handed this catastrophically dangerous tool, without any training or even research into what the training should be. And we expect everything to be fine just because the tool is easy to use and convenient?
These companies are being allowed to bulldoze not just the economy but also the mental resilience of entire generations, for the sake of a bit of shareholder profit.
It's absolutely braindead to compare the probability-engine "AI", which has no fundamental use beyond its marketed value, with a wide variety of truly useful innovations that did not involve marketing in their design.
We should ban computers since they are making mass surveillance easier. /s
We should allow lead in paint, it's easier to use. /s
You are deliberately missing my point, which is: gen AI has an enormous number of downsides and no real-world use.