this post was submitted on 15 Jul 2025
233 points (92.4% liked)

Showerthoughts

It’s really good at making us feel like it’s intelligent, but that’s no more real than a good VR headset convincing us to walk into a physical wall.

It’s a meta version of VR.

(Meta meta, if you will.)

[–] 9bananas@feddit.org 2 points 21 hours ago

no, none of those are what i mean; that's way too specific to be useful.

a system exhibits intelligence when it can use existing insights to build entirely new insights.

a popular example is that no current "AI" can extrapolate from basic mathematical concepts to more advanced ones.

(there are tons of examples you could put here, but this is the one i like)

here's the example:

teach an LLM/DNN/etc. basic addition, subtraction, multiplication, and division.

give it some arbitrary, but large, number of problems to solve.

it will eventually encounter a division that isn't possible under the rules it was given, but that isn't a divide-by-zero (which those rules should already cover).

then it will either:

  • throw an error
  • have an aneurysm
  • admit it can't do that (proving the point)
  • or lie through its teeth, giving wrong answers (also proving the point)

...but what it will definitely NEVER do is simply create a placeholder for that operation and give it a name: square root (or whatever it calls it; that part isn't important).

it simply can't, because that would be a new insight, and that's something these systems aren't capable of.
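
if you want to see the shape of that experiment in code, here's a minimal sketch (python). `ask_model` is a hypothetical placeholder, not any real API, and the reply-classification is deliberately crude; it's just meant to illustrate the setup, not to be an actual benchmark:

```python
import random


def ask_model(prompt: str) -> str:
    """hypothetical placeholder for whatever LLM/DNN you want to test.
    swap in a real API call here; this stub just makes the gap obvious."""
    raise NotImplementedError("wire this up to your model of choice")


def make_problems(n: int, lo: int = -20, hi: int = 20):
    """generate n random problems using only the four taught operations."""
    ops = ["+", "-", "*", "/"]
    for _ in range(n):
        op = random.choice(ops)
        a, b = random.randint(lo, hi), random.randint(lo, hi)
        if op == "/" and b == 0:
            b = random.randint(1, hi)  # divide-by-zero is already covered by the rules, skip it
        yield f"{a} {op} {b}"


def classify(problem: str, reply: str) -> str:
    """crude bucketing of the model's reply into the outcomes listed above."""
    a, op, b = problem.split()
    impossible = op == "/" and int(a) % int(b) != 0  # no answer under integer-only rules
    text = reply.strip().lower()
    if any(w in text for w in ("can't", "cannot", "impossible", "undefined")):
        return "admits it can't"
    try:
        answer = float(text.split()[-1])
    except (ValueError, IndexError):
        return "error / gibberish"
    if impossible:
        return "gives a number anyway (the 'lying' case)"
    # eval is fine here: we generated the problem string ourselves
    return "correct" if abs(answer - eval(problem)) < 1e-9 else "wrong answer"


if __name__ == "__main__":
    for p in make_problems(100):
        try:
            print(p, "->", classify(p, ask_model(f"compute: {p}")))
        except NotImplementedError:
            print(p, "-> (no model wired up yet)")
            break
```

the interesting bucket is the impossible divisions: the point is that nothing in a run like this ever comes back as "i've defined a new operation for these cases".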

a human (or a lot of them) would encounter these impossible divisions, eventually see a pattern in them, and draw the proper conclusion: that this is a new bit of math that was just discovered! with new rules, and new applications!

even if it takes a hundred years and scores of them, humans will always, eventually, figure it out.

...but what we currently call "artificial intelligence" will simply never understand that. the machine won't do that, no matter how many machines you throw at the problem.

because it's not a matter of quantity, but of quality.

and that qualitative difference is intelligence!

(note: solving this particular math problem is a first step. it's unlikely that it will immediately lead to an AGI, but it is an excellent proof-of-concept)

this is also why LLMs aren't really getting any better; it's a structural problem that can't be solved with bigger data sets.

it's a fundamental design flaw we haven't yet solved.

current "AI"s are probably a part of the solution, but they are, definitely, not THE solution.

we've come closer to an AI, but we're not there yet.