this post was submitted on 15 Jul 2025
231 points (92.3% liked)
Showerthoughts
But it's not simulated intelligence. It's literally just word association on steroids. There are no thoughts it brings to the table, just words that mathematically fit what follows the prompt.
Where do you draw the line for intelligence? Why would the capacity to auto-complete tokens based on learned probabilities not qualify as intelligence?
This capacity may be part of human intelligence too.
This.
I taught high-school teens about AI between 2018 and 2020.
The issue is we are somewhere between getting better at gambling (statistics, Markov chains, etc.) and human brain simulation (deep neural networks, genetic algorithms).
For many people it's important how we frame it. Is it a random word generator with a good hit rate, or is it a very stupid child?
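The "gambling" end of that spectrum can be shown in a few lines. This is a toy Markov-chain word generator, purely illustrative: the corpus, the function names, and the bigram-only design are all made up for the example, and real LLMs are vastly more sophisticated than this.

```python
import random
from collections import defaultdict

# Toy corpus, invented for illustration.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat chased the dog across the rug"
).split()

# Count which word follows which: these bigram counts are the
# entire "learned probability model".
follows = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur].append(nxt)

def generate(start, length, seed=0):
    """Sample a chain of words, each drawn from what followed the last."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the", 8))
```

The output is locally plausible but has no plan and no meaning behind it, which is roughly the "good hit rate" framing; the open question in the thread is how different that is, in kind, from what bigger models do.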
Of course the brain is more advanced - it has way more neurons than an AI model has nodes, it works faster, and we have years of "training data". Also, we can use specific parts of our brains to think, and some things are so innate we don't even have to think about them; we call those reflexes, and they bypass the normal thinking process.
BUT: we're at the stage where we could technically emulate chunks of a human brain through AI models, however primitive they currently are. And in its basic function, a brain is not really much more advanced than what our AI models already do. Although we do have a specific part of our brain just for language, which means we get a little cheat code for writing text in comparison to AI, and similar other parts for creative tasks and so on.
So where do you draw the line? Do you need all different parts of a brain perfectly emulated to satisfy the definition of intelligence? Is artificial intelligence a word awarded to less intelligent models or constructs, or is it just as intelligent as human intelligence?
Imo AI sufficiently passes the vibe check on intelligence. Sure, it's not nearly on the scale of a human brain and is missing its biological arrangements and some clever evolutionary tricks, but it's similar enough.
However, I think that's neither scary nor awesome. It's just a different potential tool that should help every one of us. Every time big new discoveries shape our understanding of the world and become a core part of our lives, there's so much drama. But it's just a bigger change, nothing more, nothing less. A pile of new laws, some cultural shifts and some upgrades for our everyday life. It's neither heaven nor hell, just the same chunk of rock floating in space soup for another century.
I dunno, the power requirements would seem to be an ecological catastrophe in the making, except it's already happening.
Well, if we set aside all the damage the hype is doing on so many levels (which is fine in the sense that technology and fools are different things), I draw the line at... intelligence, not simulation of hardware. I care a lot less whether something before me runs on carbon, metal or, say, sulfur than I care whether it is intelligent.
And as someone has already pointed out, even defining intelligence is damn hard, and different intelligences work differently (someone who is great at moving their body, like dancers or martial artists, is definitely more intelligent than me in quite a few areas, even if I know math or computers better than them). So... "artificial intelligence" as a bunch of algorithms (including LLMs) etc. - no problem with me; "artificial intelligence" as "this thing is thinking" or "this thing is just as good as a human artist/doctor/lawyer" - nah, bullshit.
when it can come up with a solution it hasn't seen before.
that's the threshold.
that's the threshold for creative problem solving, which isn't all there is to intelligence, but i think it's fair to say it's the most crucial part for a machine intelligence.
It can come up with a brand new sentence that hasn't been written before. Does that count?
Maybe you mean a solution to a textbook math/physics problem; it most likely would be able to solve that too with tool use.
Or maybe you mean solving something like the Riemann Hypothesis?
no, none of those are what i mean, that's way too specific to be useful.
a system exhibits intelligence when it can use existing insights to build entirely new insights.
a popular example is that no current "AI" can extrapolate from basic mathematical stipulations to more advanced ones.
(there are tons of examples you could put here, but this is the one i like)
here's the example:
teach an LLM/DNN/etc. basic addition, subtraction, multiplication, and division.
give it some arbitrary, but large, number of problems to solve.
it will eventually encounter a division that isn't possible, but is not a divide-by-zero (which should be covered by the rules it was given).
then it will either:
...but what it will definitely NEVER do, is simply create a placeholder for that operation and give it a name: square root (or whatever it calls it, that part isn't important).
it simply can't, because that would be a new insight, and that's something these systems aren't capable of.
a human (or a lot of them) would encounter these impossible divisions and eventually see a pattern in them and draw the proper conclusion: that this is a new bit of math that was just discovered! with new rules, and new applications!
even if it takes a hundred years and scores of them, humans will always, eventually, figure it out.
...but what we currently call "artificial intelligence" will simply never understand that. the machine won't do that, no matter how many machines you throw at the problem.
because it's not a matter of quantity, but of quality.
and that qualitative difference is intelligence!
(note: solving this particular math problem is a first step. it's unlikely that it will immediately lead to an AGI, but it is an excellent proof-of-concept)
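The thought experiment above can be sketched as code. This is a deliberately crude illustration, not a claim about how any real model works: a solver that only knows the four taught operations, and whose only move when a problem falls outside them is to fail. Inventing a placeholder for the unknown operation and giving it a name is exactly the step that lives outside the system; every name here is made up.

```python
# The four rules the solver was "taught".
KNOWN_OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,  # defined for everything except divide-by-zero
}

def solve(op, a, b):
    """Apply a taught rule, or fail: there is no mechanism here for
    defining a new placeholder operation like 'square root'."""
    if op not in KNOWN_OPS:
        raise ValueError(f"no rule for {op!r}")
    return KNOWN_OPS[op](a, b)

print(solve("*", 3, 4))       # inside the taught rules
try:
    solve("sqrt", 2, None)    # outside the rules: it can only report failure
except ValueError as e:
    print(e)
```

Whether a learned model is fundamentally stuck at this wall, or just hasn't been trained or scaled past it yet, is precisely what the thread is arguing about.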
this is also why LLMs aren't really getting any better; it's a structural problem that can't be solved with bigger data sets.
it's a fundamental design flaw we haven't yet solved.
current "AI"s are probably a part of the solution, but they are, definitely, not THE solution.
we've come closer to an AI, but we're not there.
I mean to friends and family – people who have accepted it as smart.
I don’t know about you, but when I try to explain the concept of LLMs to people not in the tech field, their eyes glaze over. I’ve gotten several family members into VR, though. It’s an easier concept to understand.
if only we had a word for applying math to data to give the appearance of a complex process we don't really understand.
A simulation doesn't have to be the actual thing. It implies it literally isn't the true thing, which is kind of what you're saying.
Simulated Intelligence is certainly more accurate and honest than Artificial Intelligence. If you have a better term, what is it?
Large Language Model.
Doesn't seem to be catching on...
Professor Hotpants' Astounding Rhetorical Thingamajig
It's not just statistics. To produce a somewhat coherent sentence in English you need a model of the English language AND a world model.
If you ask a question like "an apple is on a glass, what happens if I remove the glass", the correct answer ("the apple will fall") is not a statistical property of the English language, but an emergent property of the world model.
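The distinction can be made concrete with a toy "world model" of support and gravity. This is absurdly simplified and purely illustrative (the objects and the one physical rule are invented for the example), but the point stands: the answer falls out of a model of the world, not out of English word frequencies.

```python
# Toy world model: each object rests on its support.
supports = {"apple": "glass", "glass": "table", "table": "floor"}

def remove(obj):
    """Remove an object; anything resting on it loses support and falls."""
    fallen = []
    for thing, base in list(supports.items()):
        if base == obj:
            fallen.append(thing)
            supports[thing] = "floor"  # it lands on the floor
    return fallen

print(remove("glass"))  # prints ['apple'] -- a fact about the world, not about English
```

No amount of counting which words follow which would encode that rule; a model that answers the question correctly has to have picked up something functionally equivalent to it.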
My dog can do calculus but struggles with word association beyond treat, walk, vet and bath. Intelligence is hard to define.