this post was submitted on 17 Nov 2025
985 points (98.9% liked)
memes
18046 readers
2437 users here now
you are viewing a single comment's thread
There's a lot of ink spilled on 'AI safety', but I think the most basic regulation that could be implemented is that no model is allowed to output the word "I", and if it does, the model designer owes their local government the equivalent of the median annual income for each violation. There is no 'I' for an LLM.
It's this type of kneejerk reactionary opinion that I think will ultimately let the worst of the worst AI companies win.
Whether an LLM says "I" or not literally does not matter at all. It's not relevant to any of the actual problems with LLMs/generative AI.
It doesn't even approach discussing/satirizing a relevant issue with them.
It's basically satire of a strawman that thinks LLMs are closer to being people than anyone, even the most AI-bro AI bro, actually thinks they are.
No, it's pretty much the opposite. As it stands, one of the biggest problems with 'AI' is people perceiving it as an entity saying something that has meaning. Phrasing LLM output as 'I think...' or 'I am...' makes it easier for people to assign meaning to the semi-random outputs, because it suggests there is an individual whose thoughts are being verbalized. Having that framing is part of the trick the AI bros are pulling. Making it harder for the outputs to keep up the pretense of sentience would, I suspect, make them less harmful to people who engage with them in a naive manner.
This has to be the least informed take I have seen on anything ever. It literally dismisses all the most important issues with AI and pretends that the "real" problem (as if there is only one that matters) is about people misunderstanding it in a way I see no one doing.
It's clear to me you must be so deep into an anti-AI bubble that you have no idea how people who use AI think about it, how it's used, why it's used, or what the problems with it are.