this post was submitted on 01 Dec 2025
448 points (93.1% liked)

Showerthoughts

Looks so real !

[–] 2xar@lemmy.world 0 points 23 hours ago* (last edited 23 hours ago) (1 children)

Your logic is critically flawed. By that reasoning you could argue there is no "logical way to argue a human has consciousness," because we don't have a precise enough definition of consciousness. What you wrote is "I'm 14 and this is deep" territory, not real logic.

In reality, you CAN very easily decide whether AI is conscious or not, even if the exact boundary of what you would call "consciousness" can be debated. You want to know why? Because if you have a basic understanding of how AI/LLMs work, then you know that in every possible, conceivable respect relevant to consciousness, they sit somewhere between your home PC and a plankton. Nobody would call either of those conscious, by any definition. Therefore, no matter what vague definition you'd use, current AI/LLMs definitely do NOT have it. Not by a long shot. Maybe in a few decades it could get there. But current models are basically over-hyped thermostat control electronics.

[–] nednobbins@lemmy.zip 0 points 23 hours ago* (last edited 23 hours ago) (1 children)

I'm not talking about a precise definition of consciousness, I'm talking about a consistent one. Without a definition, you can't argue that an AI, a human, a dog, or a squid has consciousness. You can proclaim it, but you can't back it up.

The problem is that I have more than a basic understanding of how an LLM works. I've written NNs from scratch and I know that we model perceptrons after neurons.
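To make the modeling relationship concrete (a minimal sketch of my own, not code from any particular library; the AND-gate weights are just illustrative values):

```python
# A single perceptron: the unit loosely modeled on a biological neuron.
# It reduces the neuron to a weighted sum of inputs plus a threshold.

def step(x):
    # Threshold activation: fire (1) if the net input is non-negative.
    return 1 if x >= 0 else 0

def perceptron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through the threshold.
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return step(activation)

# Illustrative example: weights chosen so the perceptron computes logical AND.
weights = [1.0, 1.0]
bias = -1.5
print(perceptron([1, 1], weights, bias))  # 1
print(perceptron([1, 0], weights, bias))  # 0
```

That weighted-sum-and-threshold abstraction is the sense in which perceptrons are "modeled after" neurons; how much biological detail that abstraction throws away is exactly the point in dispute below.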

Researchers know that there are differences between the two. We can generally eliminate any of those differences (and many researchers do exactly that). But no researcher, scientist, or philosopher can tell you what critical property neurons have that enables consciousness. Nobody actually knows, and people who claim to know are just making stuff up.

[–] 2xar@lemmy.world 0 points 19 hours ago

> I'm not talking about a precise definition of consciousness, I'm talking about a consistent one.

It does not matter. Any way you try to spin it, with any imprecise or "inconsistent" definition anybody would want to use, literally EVERYBODY with half a brain will agree that humans DO have consciousness and a rock does not. A squid could be arguable. But LLMs are just a millimeter above rocks, and light-years below squids, on the ladder toward consciousness.

> The problem is that I have more than a basic understanding of how an LLM works. I've written NNs from scratch and I know that we model perceptrons after neurons.

Yeah. The same way Bburago models real cars. They look somewhat similar, if you close one eye, squint the other, and don't know how far away each of them is. But apart from looks, they have NOTHING in common and in NO way offer the same functionality. We don't even know how many different types of neurons there are, let alone come close to replicating each of their functions and operations:

https://alleninstitute.org/news/why-is-the-human-brain-so-difficult-to-understand-we-asked-4-neuroscientists/

So no, AI/LLMs are absolutely and categorically nowhere near the point where we could be debating whether they are conscious. Anyone questioning this is a victim of the Dunning-Kruger effect: they have zero clue how complex brains and neurons are, and how basic, simple, and function-poor current NN technology is in comparison.