Next step how many r in Lollapalooza
Incredible
AGI lost
Try it with o3, maybe it needs time to think
Apparently, this robot is Japanese.
Obligatory 'lore dump' on the word lollapalooza:
That word was a common slang term in the 1930s/40s American lingo that meant... essentially a very raucous, lively party.
Note/Rant on the meaning of this term
The current Merriam-Webster and Dictionary.com definitions of this term, 'an outstanding or exceptional or extreme thing', are wrong; they are too broad.
While historical usage varied, it almost always appeared as a noun describing a gathering of many people, one that was so lively or spectacular that you would be exhausted after attending it.
When it did not appear as a noun describing a lively, possibly also 'star-studded' or extravagant, party, it appeared as a term for some kind of action that would leave you bamboozled or discombobulated... similar to 'that was a real humdinger of a blahblah' or 'that blahblah was a real doozy'... which ties into the after-effects of having been through the 'raucous party' meaning of lollapalooza.
So... in WW2, in the Pacific theatre... many US Marines were engaged in brutal jungle combat, often at night, and they adopted a system of verbal identification challenges for when they noticed someone creeping up on their foxholes at night.
An example of this system used in the European theatre, I believe by the 101st and 82nd Airborne, was the challenge 'Thunder!', to which the correct response was 'Flash!'.
In the Pacific theatre... the Marines adopted a challenge/response system where the correct response was 'Lollapalooza'...
Because native-born Japanese speakers are taught a phoneme that is roughly in between an 'r' and an 'l', they very often struggle to say 'Lollapalooza' without a very noticeable accent, unless they've also spent a good deal of time learning spoken English (or some other language with distinct 'l' and 'r' phonemes), which very few Japanese people had in the 1940s.
::: spoiler racist and nsfw historical example of / evidence for this
https://www.ep.tc/howtospotajap/howto06.html
:::
Now, some people will say this is a total myth, others will say it is not.
My Grandpa, who served in the Pacific theatre during WW2, told me it did happen, though he was Navy and not a Marine... but the other stories I've heard that say it did happen all say it happened with the Marines.
My Grandpa is also another source for what 'lollapalooza' actually means.
Biggest threat to humanity
I know there's no logic, but it's funny to imagine it's because it's pronounced Mrs. Sippy
And if it messed up on the other word, we could say it's because it's pronounced Louisianer.
It's going to be funny seeing these LLM implementations in accounting software
Interesting… troubleshooting is going to be interesting in the future
It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (The "intelligence" part of AI, for starters)
LLM wasn't made for this
There's a thought experiment that challenges the concept of cognition, called The Chinese Room. What it essentially postulates is a conversation between two people, one of whom is speaking Chinese and getting responses in Chinese. And the first speaker wonders "Does my conversation partner really understand what I'm saying or am I just getting elaborate stock answers from a big library of pre-defined replies?"
The LLM is literally a Chinese Room. And one way we can know this is through these interactions. The machine isn't analyzing the fundamental meaning of what I'm saying, it is simply mapping the words I've input onto a big catalog of responses and giving me a standard output. In this case, the problem the machine is running into is a legacy meme about people miscounting the number of "r"s in the word Strawberry. So "2" is the stock response it knows via the meme reference, even though a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately.
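For contrast, the "much simpler and dumber machine" version of this task really is a one-liner. A minimal Python sketch, purely illustrative:

```python
# Deterministic letter counting: no training data, no statistics, no vibes
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))    # 3
print(count_letter("Lollapalooza", "r"))  # 0 -- there isn't a single r in it
```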
When you hear people complain about how the LLM "wasn't made for this", what they're really complaining about is their own shitty methodology. They build a glorified card catalog. A device that can only take inputs, feed them through a massive library of responses, and sift out the highest probability answer without actually knowing what the inputs or outputs signify cognitively.
Even if you want to argue that having a natural language search engine is useful (damn, wish we had a tool that did exactly this back in August of 1996, amirite?), the implementation of the current iteration of these tools is dogshit because the developers did a dogshit job of sanitizing and rationalizing their library of data. Also, incidentally, why DeepSeek was running laps around OpenAI and Gemini as of last year.
Imagine asking a librarian "What was happening in Los Angeles in the Summer of 1989?" and that person fetching you back a stack of history textbooks, a stack of Sci-Fi screenplays, a stack of regional newspapers, and a stack of Iron-Man comic books all given equal weight? Imagine hearing the plot of the Terminator and Escape from LA intercut with local elections and the Loma Prieta earthquake.
That's modern LLMs in a nutshell.
You might just love Blindsight. Here, they're trying to decide if an alien life form is sentient or a Chinese Room:
"Tell me more about your cousins," Rorschach sent.
"Our cousins lie about the family tree," Sascha replied, "with nieces and nephews and Neandertals. We do not like annoying cousins."
"We'd like to know about this tree."
Sascha muted the channel and gave us a look that said Could it be any more obvious? "It couldn't have parsed that. There were three linguistic ambiguities in there. It just ignored them."
"Well, it asked for clarification," Bates pointed out.
"It asked a follow-up question. Different thing entirely."
Bates was still out of the loop. Szpindel was starting to get it, though...
There are different types of Artificial intelligences. Counter-Strike 1.6 bots, by definition, were AI. They even used deep learning to figure out new maps.
It's marketed like it's AGI, so we should treat it like AGI to show that it isn't AGI. Lots of people buy the bullshit.
then continue to shill it for use cases it wasn't made for either
The only thing it was made for is "spicy autocomplete".
It's all about weamwork
teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the
The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end The end is never the end
Ah a fellow stanley parable enjoyer, love to see it!
*the end is never the end is never the end
Now ask how many asses there are in assassinations
Man AI is ass at this
*laugh track*
I really like checking these myself to make sure it's true. I WAS NOT DISAPPOINTED!
(Total Rs is 8. But the LOGIC ChatGPT pulls out is ……. remarkable!)
"Let me know if you'd like help counting letters in any other fun words!"
Oh well, these newish calls for engagement sure go to ridiculous lengths sometimes.
I want an option to select Marvin the Paranoid Android mood: "there's your answer, now if you could leave me to wallow in self-pity"
Here I am, emissions the size of a small country, and they ask me to count letters...
How many times do I have to spell it out for you chargpt? S-T-R-A-R-W-B-E-R-R-Y-R
We are fecking doomed!
I asked it how many Ts are in names of presidents since 2000. It said 4 and stated that "Obama" contains 1 T.
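For the record, the deterministic version of that check is trivial. A throwaway Python sketch (assuming "since 2000" means Bush, Obama, Trump, and Biden):

```python
# Count T's (case-insensitive) in the names of US presidents since 2000
presidents = ["George W. Bush", "Barack Obama", "Donald Trump", "Joe Biden"]

for name in presidents:
    print(name, name.lower().count("t"))

total = sum(name.lower().count("t") for name in presidents)
print("total:", total)  # 1 -- only "Trump" has a T; "Obama" has none
```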
We gotta raise the bar, so they keep struggling to make it "better"
My attempt
0000000000000000
0000011111000000
0000111111111000
0000111111100000
0001111111111000
0001111111111100
0001111111111000
0000011111110000
0000111111000000
0001111111100000
0001111111100000
0001111111100000
0001111111100000
0000111111000000
0000011110000000
0000011110000000
Btw, I refuse to give my money to AI bros, so I don't have the "latest and greatest"
Tested on ChatGPT o4-mini-high
It sent me this
0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0
0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0
0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0
0 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0
0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0
0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
0 0 1 1 1 0 0 1 1 1 0 0 0 0 0 0
0 1 1 1 0 0 0 0 1 1 1 0 0 0 0 0
1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0
I asked it to remove the spaces
0001111100000000
0011111111000000
0011111110000000
0111111111100000
0111111111110000
0011111111100000
0001111111000000
0011111100000000
0111111111100000
1111111111110000
1111111111110000
1111111111110000
1111111111110000
0011100111000000
0111000011100000
1111000011110000
I guess I just murdered a bunch of trees and killed a random dude with the water it used, but it looks good
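Side note: if anyone wants to look at these bitmaps without squinting, here's a quick throwaway sketch that renders a 0/1 grid as block characters (paste the full grid into `rows`; only the first two rows are shown here):

```python
# Render a 0/1 bitmap as block characters
rows = [
    "0001111100000000",
    "0011111111000000",
    # ... paste the remaining rows of the grid here ...
]

for row in rows:
    print("".join("█" if c == "1" else " " for c in row))
```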
I don't get it
It used to reply 2 until this new upgrade. But now, after 14 min, the new update gives you the right answer.
interesting
I'm not involved in LLMs, but apparently the way it works is that the sentence is broken into words and each word is assigned a unique number, and that's how the information is stored. So the LLM never sees the actual word.
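You can actually watch this happen with tiktoken, OpenAI's open-source tokenizer (a sketch assuming `pip install tiktoken`; strictly speaking the pieces are often sub-word chunks rather than whole words):

```python
import tiktoken

# The encoding used by GPT-4-era models
enc = tiktoken.get_encoding("cl100k_base")

tokens = enc.encode("strawberry")
print(tokens)                             # a short list of integer IDs
print([enc.decode([t]) for t in tokens])  # the chunks the model actually "sees"
# The model receives those IDs, not letters, so "count the r's" isn't
# something it can look up directly; it has to pattern-match an answer.
```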
Adding to this: each word and the words around it are given a statistical weight. In other words, what are the odds that word 2 follows word 1? Scale that out for every word in a sentence and you can see that LLMs are just huge math equations that put words together based on their statistical probability.
This is key because, I can't emphasize this enough, AI does not think. We (humans) anthropomorphize them, giving them human characteristics when they are little more than number crunchers.
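To make the "huge math equation" point concrete, here's a toy bigram model (a deliberately tiny sketch; real LLMs use neural networks over tokens rather than a lookup table, but the next-word-probability idea is the same):

```python
from collections import Counter, defaultdict

# A toy "training corpus"
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word
follow = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follow[w1][w2] += 1

# P(next word | current word), straight from the counts
def next_word_probs(word):
    counts = follow[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25} -- generating text is just
# repeatedly sampling from tables like this (at enormous scale)
```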