this post was submitted on 25 Apr 2025
365 points (96.4% liked)

Technology


Archived link: https://archive.ph/Vjl1M

Here’s a nice little distraction from your workday: Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.

This is genuinely fun, and you can find lots of examples on social media. In the world of AI Overviews, “a loose dog won't surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom that means “someone's behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer's function is determined by its physical connections.”

It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with a biblical derivation, it’s also a tidy encapsulation of where generative AI still falls short.

[–] Ulrich@feddit.org 113 points 3 days ago (7 children)

One thing you'll notice with these AI responses is that they'll never say "I don't know" or ask any clarifying questions. If they don't know, they just make something up.

[–] Nemean_lion@lemmy.ca 50 points 3 days ago (2 children)

Sounds like a lot of people I know.

[–] anomnom@sh.itjust.works 2 points 2 days ago

It was trained on the internet. Everybody else is wrong there.

[–] Ulrich@feddit.org 2 points 2 days ago

Do you listen to those people or ask them questions about things you want to learn more about?

[–] chonglibloodsport@lemmy.world 34 points 3 days ago (1 children)

That’s because AI doesn’t know anything. All it does is make stuff up. This is called bullshitting, and lots of people do it, even as a deliberate pastime. There was even a fantastic Star Trek: TNG episode where Data learned to do it!

The key to bullshitting is to never look back. Just keep going forward! Constantly constructing sentences from the raw material of thought. Knowledge is something else entirely: justified true belief. It’s not sufficient to merely believe things; we need to have some justification (however flimsy). This means that true knowledge isn’t merely a feature of our brains; it includes a causal relation between ourselves and the world, however distant that may be.

A large language model at best could be said to have a lot of beliefs but zero justification. After all, no one has vetted the gargantuan training sets that go into an LLM to make sure only facts are incorporated into the model. Thus the only indicator of trustworthiness of a fact is that it’s repeated many times and in many different places in the training set. But that’s no help for obscure facts or widespread myths!
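The repetition-as-trustworthiness point can be sketched with a toy frequency model. The corpus, statements, and numbers below are all invented for illustration:

```python
from collections import Counter

# Toy "training set": a popular myth repeated three times,
# a true-but-obscure statement appearing once.
corpus = [
    "goldfish have a three second memory",
    "goldfish have a three second memory",
    "goldfish have a three second memory",
    "goldfish can remember things for months",
]

counts = Counter(corpus)
total = sum(counts.values())

def confidence(statement):
    """Frequency is the model's only signal of 'trustworthiness'."""
    return counts[statement] / total

print(confidence("goldfish have a three second memory"))      # 0.75
print(confidence("goldfish can remember things for months"))  # 0.25
```

A model trained this way rates the widespread myth as three times more "reliable" than the obscure true statement, precisely because nobody vetted the corpus.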

[–] teft@lemmy.world 2 points 2 days ago (1 children)

60fps Next Generation makes my brain hurt. It’s like I’m watching a soap opera.

[–] CosmoNova@lemmy.world 12 points 3 days ago* (last edited 3 days ago) (1 children)

And it’s easy to figure out why or at least I believe it is.

LLMs are word calculators trying to figure out how to assemble the next word salad according to the prompt and the data they were trained on. And that’s the thing. Very few people go on the internet to answer a question with "I don't know." (Unless you look at Amazon Q&A sections.)

My guess is they act all-knowing because of how interactions work on the internet. Plus, they can't tell fact from fiction to begin with, and I guess they'd just randomly say they don't know if you tried to train them on that.
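The "word calculator" picture can be made concrete with a minimal next-token sampling sketch. The tokens and logit values here are invented (real models score tens of thousands of candidate tokens, not four):

```python
import math

# Hypothetical scores (logits) for the next token after a prompt asking
# what a made-up idiom means. "I don't know" is just another candidate.
logits = {"a": 2.1, "playful": 1.7, "way": 1.2, "I don't know": -3.0}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

probs = softmax(logits)

# Greedy decoding picks the most probable continuation; there is no
# special "abstain" state, only tokens that were common in training.
next_token = max(probs, key=probs.get)
print(next_token)  # a
```

Because admissions of ignorance are rare in the training text, "I don't know" ends up with a low score and a confident-sounding continuation wins.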

[–] vxx@lemmy.world 8 points 3 days ago

The AI gets trained with a point system: good answers earn lots of points. I guess no answer earns zero points, so the AI will always opt to give some answer instead of no answer at all.
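The point-system intuition above can be sketched as a toy reward function. The numbers are invented for illustration, not taken from any real training setup:

```python
# Toy reward: a rater gives points for answers; refusing scores zero.
def reward(answer, is_correct):
    if answer is None:                  # "I don't know"
        return 0.0
    return 1.0 if is_correct else 0.3   # fluent-but-wrong still earns points

# Expected points when the model guesses vs. when it abstains,
# assuming it would only be right 40% of the time:
p_correct = 0.4
guess = p_correct * reward("guess", True) + (1 - p_correct) * reward("guess", False)
abstain = reward(None, False)

print(guess > abstain)  # True
```

Under any scoring like this, a policy optimized for points learns to always produce something rather than admit uncertainty.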

[–] WildPalmTree@lemmy.world 8 points 3 days ago (1 children)
[–] JeremyHuntQW12@lemmy.world 2 points 2 days ago* (last edited 2 days ago) (1 children)

Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.

Your search - "yellow is a true badger" meaning - did not match any documents.

Suggestions:

  * Make sure that all words are spelled correctly.
  * Try different keywords.
  * Try more general keywords.
  * Try fewer keywords.


definition of saying yellow is a true badger

The saying "yellow is a true badger" is not a standard or recognized idiom. The phrase "that's the badger" (or similar variations) is a British idiom meaning "that's exactly what I was looking for" or "that's the right thing". The term "yellow" is often used to describe someone who is cowardly. Therefore, there's no established meaning or relationship between "yellow" and "true badger" in the way the phrase "that's the badger" is used.

Still didn't work.

[–] WildPalmTree@lemmy.world 1 points 1 day ago

That was my point. I think you are reading two comments into one.

[–] 0xSim@lemdro.id 7 points 3 days ago

And it's by design. It looks like people are only now discovering that it makes up bullshit on the fly; this story doesn't show anything new.

[–] sp3ctr4l@lemmy.dbzer0.com 4 points 2 days ago* (last edited 2 days ago)

As an Autist, I find it amazing that... after a lifetime of being compared to a robot, an android, a computer...

When humanity actually does manage to get around to creating """AI"""... the AI fundamentally acts nothing like the general stereotype of fictional AIs, which is often portrayed as similar to how an Autistic mind tends to evaluate information...

No, no, instead, it acts like an Allistic, Neurotypical person, who just confidently asserts and assumes things that it basically pulls out of its ass, often never takes any time to consider its own limitations as they pertain to correctly assessing context, domain-specific meanings, more grammatically complex and ambiguous phrases... essentially never asks for clarifications, never seeks out additional relevant information to give an actually useful and functional reply to an overly broad or vague question...

Nope, just barrels forward assuming its subjective interpretation of what you've said is the only objectively correct one, spouts out pithy nonsense... and then if you actually progress further and attempt to clarify what you actually meant, or ask it questions about itself and its own previous statements... it will gaslight the fuck out of you, even though its own contradictory / overconfident / unqualified hyperbolic statements are plainly evident, in text.

... Because it legitimately is not even aware that it is making subjective assumptions all over the place, all the time.

Anyway...


Back to 'Autistic Mode' for Mr. sp3ctr4l.