this post was submitted on 11 Jun 2025
861 points (98.8% liked)

Lemmy Shitpost

[–] RedstoneValley@sh.itjust.works 127 points 3 days ago (6 children)

It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (The "intelligence" part of AI, for starters)

[–] UnderpantsWeevil@lemmy.world 43 points 3 days ago* (last edited 3 days ago) (7 children)

LLM wasn’t made for this

There's a thought experiment that challenges the concept of cognition, called the Chinese Room (John Searle, 1980). What it essentially postulates is a conversation between two people, one of whom is speaking Chinese and getting responses in Chinese. The first speaker wonders: "Does my conversation partner really understand what I'm saying, or am I just getting elaborate stock answers from a big library of pre-defined replies?"

The LLM is literally a Chinese Room. And one way we can know this is through these interactions. The machine isn't analyzing the fundamental meaning of what I'm saying, it is simply mapping the words I've input onto a big catalog of responses and giving me a standard output. In this case, the problem the machine is running into is a legacy meme about people miscounting the number of "r"s in the word Strawberry. So "2" is the stock response it knows via the meme reference, even though a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately.
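For contrast, here's a sketch of that simpler, dumber machine (purely illustrative; the function name is made up): a few lines of ordinary Python that read the actual string instead of pattern-matching against remembered conversations, so no meme can pull the answer off course.

```python
# A deterministic letter-counter: it inspects the real input,
# so the "strawberry" meme can't lead it astray.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3, every time
```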

When you hear people complain about how the LLM "wasn't made for this", what they're really complaining about is their own shitty methodology. They build a glorified card catalog. A device that can only take inputs, feed them through a massive library of responses, and sift out the highest probability answer without actually knowing what the inputs or outputs signify cognitively.

Even if you want to argue that having a natural language search engine is useful (damn, wish we had a tool that did exactly this back in August of 1996, amirite?), the implementation of the current iteration of these tools is dogshit, because the developers did a dogshit job of sanitizing and rationalizing their library of data. That's also, incidentally, why Deepseek was running laps around OpenAI and Gemini as of last year.

Imagine asking a librarian "What was happening in Los Angeles in the Summer of 1989?" and that person fetching you back a stack of history textbooks, a stack of Sci-Fi screenplays, a stack of regional newspapers, and a stack of Iron-Man comic books all given equal weight? Imagine hearing the plot of the Terminator and Escape from LA intercut with local elections and the Loma Prieta earthquake.

That's modern LLMs in a nutshell.

[–] shalafi@lemmy.world 7 points 3 days ago (1 children)

You might just love Blindsight. Here, they're trying to decide if an alien life form is sentient or a Chinese Room:

"Tell me more about your cousins," Rorschach sent.

"Our cousins lie about the family tree," Sascha replied, "with nieces and nephews and Neandertals. We do not like annoying cousins."

"We'd like to know about this tree."

Sascha muted the channel and gave us a look that said Could it be any more obvious? "It couldn't have parsed that. There were three linguistic ambiguities in there. It just ignored them."

"Well, it asked for clarification," Bates pointed out.

"It asked a follow-up question. Different thing entirely."

Bates was still out of the loop. Szpindel was starting to get it, though...

[–] CitizenKong@lemmy.world 6 points 3 days ago* (last edited 3 days ago) (2 children)

Blindsight is such a great novel. It has not one, not two, but three great sci-fi concepts rolled into one book.

One is artificial intelligence (the ship's captain is an AI), the second is alien life so vastly different it appears incomprehensible to human minds. And last but not least, and the most wild: vampires as an evolutionary branch of humanity that died out and has been recreated in the future.

[–] TommySalami@lemmy.world 4 points 2 days ago

My favorite part of the vampire thing is how they died out. Turns out vampires start seizing when trying to visually process 90° angles, and humans love building shit like that (not to mention a cross is littered with them). It's so mundane an extinction I'd almost believe it.

[–] outhouseperilous@lemmy.dbzer0.com 4 points 2 days ago* (last edited 2 days ago) (1 children)

Also, the extremely post-cyberpunk posthumans: each member of the crew is a different, extremely capable, kind-of-fucked-up model of what we might become, with the protagonist personifying the genre of horror that it is, while still being occasionally hilarious.

Despite being fundamentally a cosmic horror novel, and relentlessly math-in-the-back-of-the-book hard scifi, it does what all the best cyberpunk does and shamelessly flirts with the supernatural at every opportunity. The sequel doubles down on this, and while not quite as good overall (still exceptionally good, but harder to follow), each of the characters explores a novel and sweet+sad+horrifying kind of love.

[–] CitizenKong@lemmy.world 1 points 2 days ago (1 children)

Oooh, I didn't even know it had a sequel!

I wouldn't say it flirts with the supernatural so much as it has one foot in weird fiction, which is where cosmic horror comes from.

[–] outhouseperilous@lemmy.dbzer0.com 1 points 21 hours ago* (last edited 21 hours ago)

Characters in the sequel include a hive-mind of post-science innovation monks, a straight up witch who charges their monastery at the head of a zombie army, and a plotline about finding what the monks think might be god. And that first scene, which is absolute fire btw.

Primary themes include... well, the bit of exposition about needing to 'crawl off one mountain and cross a valley to reach higher peaks of understanding', and coping as a mostly baseline human surrounded by superintelligences, 'sufficiently advanced technology', etc.

[–] frostysauce@lemmy.world 4 points 3 days ago (1 children)

(damn, wish we had a tool that did exactly this back in August of 1996, amirite?)

Wait, what was going on in August of '96?

Google Search premiered

That's a very long answer to my snarky little comment :) I appreciate it though. Personally, I find LLMs interesting and I've spent quite a while playing with them. But after all, they are like you described: an interconnected catalogue of random stuff, with some hallucinations to fill the gaps. They are NOT a reliable source of information or general knowledge, and not even safe to use as an "assistant". The marketing of LLMs as fit for such purposes is the problem. Humans tend to turn off their brains and blindly trust technology, and the tech companies encourage them to do so by making false promises.

Yes but have you considered that it agreed with me so now i need to defend it to the death against you horrible apes, no matter the allegation or terrain?

[–] Knock_Knock_Lemmy_In@lemmy.world 2 points 2 days ago (2 children)

a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately

The human approach could be to write a (python) program to count the number of characters precisely.

When people refer to agents, is this what they are supposed to be doing? Is it done in a generic fashion or will it fall over with complexity?

[–] UnderpantsWeevil@lemmy.world 2 points 2 days ago (1 children)

When people refer to agents, is this what they are supposed to be doing?

That's not how LLMs operate, no. They aggregate raw text and sift for popular answers to common queries.

ChatGPT is one step removed from posting your question to Quora.

[–] Knock_Knock_Lemmy_In@lemmy.world 0 points 2 days ago (2 children)

But an LLM as a node in a framework that can call a python library should be able to count the number of Rs in strawberry.

It doesn't scale to AGI but it does reduce hallucinations.
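A minimal sketch of what that node-in-a-framework shape could look like (every name here is hypothetical, and real agent frameworks are far more elaborate): the model's only job is to emit a structured tool call, and plain deterministic Python does the counting.

```python
# Hypothetical sketch of an LLM-plus-tools loop. The model never counts
# anything itself; it only routes the question to deterministic code.

def count_letters(word: str, letter: str) -> int:
    """Deterministic tool the framework executes on the model's behalf."""
    return word.lower().count(letter.lower())

TOOLS = {"count_letters": count_letters}

def model_stub(prompt: str) -> dict:
    # Stand-in for a real LLM: assume it has learned to answer counting
    # questions by emitting a structured tool call rather than a guess.
    return {"tool": "count_letters",
            "args": {"word": "strawberry", "letter": "r"}}

def run_agent(prompt: str) -> str:
    call = model_stub(prompt)                     # model picks tool + args
    result = TOOLS[call["tool"]](**call["args"])  # framework runs real code
    return f"There are {result} 'r's in strawberry."

print(run_agent("How many r's are in strawberry?"))
```

The hallucination doesn't vanish in this setup, it just moves: the model can still pick the wrong tool or pass the wrong arguments.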

[–] UnderpantsWeevil@lemmy.world 0 points 2 days ago* (last edited 2 days ago) (1 children)

But an LLM as a node in a framework that can call a python library

Isn't how these systems are configured. They're just not that sophisticated.

So much of what Sam Altman is doing is brute force, which is why he thinks he needs a $1T investment in new power to build his next-iteration model.

Deepseek gets at the edges of this through their partitioned model. But you're still asking a lot for a machine to intuit whether a query can be solved with some exigent python query the system has yet to identify.

It doesn’t scale to AGI but it does reduce hallucinations

It has to scale to AGI, because a central premise of AGI is a system that can improve itself.

It just doesn't match the OpenAI development model, which is to scrape and sort data hoping the Internet already has the solution to every problem.

[–] KeenFlame@feddit.nu 0 points 1 day ago

The only thing worse than the ai shills are the tech bro mansplanations of how "ai works" when they are utterly uninformed of the actual science. Please stop making educated guesses for others and typing them out in a teacher's voice. It's extremely aggravating.

You'd still be better off starting with a 50s language processor, then grafting on some API calls.

[–] outhouseperilous@lemmy.dbzer0.com 2 points 2 days ago* (last edited 2 days ago) (1 children)

No, this isn't what 'agents' do; 'agents' just interact with other programs. So, like, move your mouse around to buy stuff, using the same methods as everything else.

It's like a fancy, diversely useful, diversely catastrophic, hallucination-prone API.

[–] Knock_Knock_Lemmy_In@lemmy.world 1 points 1 day ago (1 children)

'agents' just interact with other programs.

If that other program is, say, a python terminal then can't LLMs be trained to use agents to solve problems outside their area of expertise?

I just asked chatgpt to write a python program to return the frequency of letters in a string, then asked it for the number of L's in the longest placename in Europe.

```python
# String to analyze
text = "Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch"

# Convert to lowercase to count both 'L' and 'l' as the same
text = text.lower()

# Dictionary to store character frequencies
frequency = {}

# Count characters
for char in text:
    if char in frequency:
        frequency[char] += 1
    else:
        frequency[char] = 1

# Show the number of 'l's
print("Number of 'l's:", frequency.get('l', 0))
```

I was impressed until the output:

Number of 'l's: 16

(The actual count is 11.)

Yeah it turns out to be useless!

[–] Leet@lemmy.zip 1 points 1 day ago (1 children)

Can we say for certain that human brains aren’t sophisticated Chinese rooms…

[–] merc@sh.itjust.works 1 points 2 days ago

Imagine asking a librarian "What was happening in Los Angeles in the Summer of 1989?" and that person fetching you ... That's modern LLMs in a nutshell.

I agree, but I think you're still being too generous to LLMs. A librarian who fetched all those things would at least understand the question. An LLM is just trying to generate words that might logically follow the words you used.

IMO, one of the key ideas with the Chinese Room is the assumption that the computer/book in the room has effectively infinite capacity: no matter what symbols are passed to it, it can come up with an appropriate response. But obviously, while LLMs are incredibly huge, they can never be infinite. As a result, they can often be "fooled" when they're given input that is semantically similar to a meme, joke, or logic puzzle, because the vast majority of the training data that matches the input is the meme, or joke, or logic puzzle. LLMs can't reason, so they can't distinguish between "this is just a rephrasing of that meme" and "this is similar to that meme but distinct in an important way".
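To make that "words that might logically follow" point concrete, here's a toy stand-in (a bigram counter with an invented three-line training text; nothing like a real transformer): the "answer" is simply whichever word most often followed the prompt words in training, which is exactly how an over-represented meme beats a correct count.

```python
from collections import Counter, defaultdict

# Toy "language model": record which word follows which, then answer with
# the most common continuation. If the meme answer dominates the training
# text, it dominates the output, regardless of the truth.
training_text = (
    "how many r s in strawberry two "
    "how many r s in strawberry two "
    "how many r s in strawberry three"
)

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

# The "stock response": the most frequent continuation of "strawberry"
print(follows["strawberry"].most_common(1)[0][0])  # prints "two"
```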

[–] REDACTED@infosec.pub 25 points 3 days ago (1 children)

There are different types of Artificial intelligences. Counter-Strike 1.6 bots, by definition, were AI. They even used deep learning to figure out new maps.

[–] ouRKaoS@lemmy.today 3 points 3 days ago

If you want an even older example, the ghosts in Pac-Man could be considered AI as well.

[–] BarrelAgedBoredom@lemm.ee 25 points 3 days ago (2 children)

It's marketed like it's AGI, so we should treat it like AGI to show that it isn't AGI. Lots of people buy the bullshit.

AGI is only a benchmark because it gets OpenAI out of a contract with Microsoft when it occurs.

[–] merc@sh.itjust.works 0 points 2 days ago (1 children)

You can even drop the "a" and "g". There isn't even "intelligence" here. It's not thinking, it's just spicy autocomplete.

[–] merc@sh.itjust.works 10 points 2 days ago

then continue to shill it for use cases it wasn't made for either

The only thing it was made for is "spicy autocomplete".

[–] outhouseperilous@lemmy.dbzer0.com 2 points 2 days ago* (last edited 2 days ago)

I would say more "blackpilling"; i genuinely don't believe most humans are people anymore after dealing with this.