this post was submitted on 01 Dec 2025
448 points (93.1% liked)

Showerthoughts

38359 readers
719 users here now

A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.

Here are some examples to inspire your own showerthoughts:

Rules

  1. All posts must be showerthoughts
  2. The entire showerthought must be in the title
  3. No politics
    • If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
    • A good place for politics is c/politicaldiscussion
  4. Posts must be original/unique
  5. Adhere to Lemmy's Code of Conduct and the TOS

If you made it this far, showerthoughts is accepting new mods. This community is generally tame, so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.

What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never have to worry about it.

founded 2 years ago

Looks so real!

[–] Thorry@feddit.org 72 points 2 days ago* (last edited 2 days ago) (1 children)

Ah but have you tried burning a few trillion dollars in front of the painting? That might make a difference!

[–] scytale@piefed.zip 2 points 2 days ago

Can’t burn something that doesn’t exist. /s

[–] HowAbt2day@futurology.today 33 points 2 days ago

I had a poster in ‘86 that I wanted to come alive.

[–] biotin7@sopuli.xyz 30 points 1 day ago (1 children)

Thank you for calling it an LLM.

[–] finitebanjo@piefed.world 24 points 2 days ago (17 children)

And not even a good painting but an inconsistent one, whose eyes follow you around the room and which occasionally tries to harm you.

[–] chicken@lemmy.dbzer0.com 18 points 2 days ago

That kind of painting seems more likely to come alive

[–] forrgott@lemmy.zip 7 points 2 days ago (1 children)

New fear unlocked!

... What the hell, man?!

ಥ_ಥ

[–] Bronzebeard@lemmy.zip 3 points 2 days ago (1 children)

Bro have you never seen a Scooby Doo episode? This can't be a new concept for you...

[–] LiveLM@lemmy.zip 5 points 2 days ago (2 children)
[–] finitebanjo@piefed.world 5 points 2 days ago

I tried to submit an SCP once, but there's a "review process" and it boils down to only getting in by knowing somebody who is in.

[–] peopleproblems@lemmy.world 5 points 2 days ago

Agents have debated whether the new phenomenon constitutes a new designation. While some have reported the painting following them, the same agents will later report that nothing seems to occur. The agents who report a higher frequency of the painting following them also report a higher frequency of unexplained injury. The injuries can be attributed to cases of self-harm, leading scientists to believe these SCP agents were predisposed to mental illness that was not caught during new agent screening.

And that has between seleven and 14 + e^(πi) fingers

[–] Kyrgizion@lemmy.world 23 points 2 days ago (3 children)

As long as we can't even define sapience in biological life, where it resides and how it works, it's pointless to try to apply those terms to AI. We don't know how natural intelligence works, so using what little we know about it to define something completely different is counterintuitive.

[–] billwashere@lemmy.world 3 points 2 days ago

Pointless and maybe a little reckless.

[–] daniskarma@lemmy.dbzer0.com 3 points 2 days ago (1 children)

We don't know what causes gravity or how it works either. But you can measure it, define it, and even formulate a law that very precisely approximates what will happen when gravity is involved.

I don't think LLMs will create intelligence, but I don't think we need to solve everything about human intelligence before having machine intelligence.

[–] Perspectivist@feddit.uk 7 points 2 days ago (2 children)

Though in the case of consciousness - the fact of there being something it's like to be - not only do we not know what causes it or how it works, we have no way of measuring it either. There's zero evidence for it in the entire universe outside of our own subjective experience of it.

[–] thethunderwolf@lemmy.dbzer0.com 20 points 1 day ago* (last edited 1 day ago) (1 children)

Painting?

"LLMs are a blurry JPEG of the web" - unknown (I've heard it as an unattributed quote)

I think it originated in Ted Chiang's piece "ChatGPT Is a Blurry JPEG of the Web" a couple of years ago.

[–] Jhex@lemmy.world 15 points 2 days ago

The example I gave my wife was "expecting General AI from the current LLM models is like teaching a dog to roll over and expecting that, with a year of intense training, the dog will graduate from law school."

[–] MercuryGenisus@lemmy.world 10 points 1 day ago (2 children)

Remember when passing the Turing Test was like a big deal? And then it happened. And now we have things like this:

Stanford researchers reported that ChatGPT passes the test; they found that ChatGPT-4 "passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative"

The best way to differentiate computers from people is that we haven't taught AI to be an asshole all the time. Maybe it's a good thing they aren't like us.

[–] Sconrad122@lemmy.world 14 points 1 day ago (1 children)

Alternative way to phrase it: we don't train humans to be ego-satiating brown-nosers; we train them to be (often poor) judges of character. AI would be just as nice to David Duke as it is to you. Also, "they" anthropomorphizes LLM AI much more than it deserves; it's not even a single identity, let alone a set of multiple identities. It is a bundle of hallucinations, loosely tied together by suggestions and patterns taken from stolen data.

[–] Aeri@lemmy.world 4 points 1 day ago

Sometimes I feel like LLM technology and its relationship with humans is a symptom of how poorly we treat each other.

[–] Kolanaki@pawb.social 8 points 1 day ago* (last edited 1 day ago)

The best way to differentiate computers to people is we haven't taught AI to be an asshole all the time

Elon is trying really hard with Grok, tho.

[–] Tracaine@lemmy.world 9 points 2 days ago (1 children)

I don't expect it. I'm going to talk to the AI and nothing else until my psychosis hallucinates it.

[–] nednobbins@lemmy.zip 9 points 1 day ago (15 children)

I can define "LLM", "a painting", and "alive". Those definitions don't require assumptions or gut feelings. We could easily come up with a set of questions and an answer key that will tell you if a particular thing is an LLM or a painting and whether or not it's alive.

I'm not aware of any such definition of "conscious", nor am I aware of any universal test of consciousness. Without that definition, it's like Ebert claiming that "video games can never be art."

[–] khepri@lemmy.world 3 points 1 day ago* (last edited 1 day ago) (4 children)

Absolutely everything requires assumptions. Even our most objective, "laws of the universe" type observations rely on sets of axioms or first principles that must simply be accepted as true-though-unprovable if we are going to get anywhere at all, even in math and the hard sciences, let alone philosophy or the social sciences.

[–] bampop@lemmy.world 8 points 1 day ago* (last edited 1 day ago) (4 children)

People used to talk about the idea of uploading your consciousness to a computer to achieve immortality. But nowadays I don't think anyone would trust it. You could tell me my consciousness was uploaded and show me a version of me that was indistinguishable from myself in every way, but I still wouldn't believe it experiences or feels anything as I do, even though it claims to do so. Especially if it's based on an LLM, since they are superficial imitations by design.

[–] yermaw@sh.itjust.works 4 points 10 hours ago (3 children)

Also, even if it does experience and feel and has awareness and all that jazz, why do I want that? The "I" that is me is still going to face the Reaper, and escaping that is the only real reason to want immortality.

[–] Lembot_0005@lemy.lol 7 points 2 days ago

Good showering!

[–] qyron@sopuli.xyz 7 points 1 day ago (1 children)

It's achievable if enough alcohol is added to the subject looking at said painting. And with some exotic chemistry, they may even start to taste or hear the colors.

Or boredom and starvation

[–] Luisp@lemmy.dbzer0.com 7 points 2 days ago

The ELIZA effect

[–] j4k3@lemmy.world 5 points 2 days ago* (last edited 11 hours ago)

The first life did not possess a sentient consciousness. Yet here you are reading this now. No one even tried to direct that. Quite the opposite, everything has been trying to kill you from the very start.

[–] Jankatarch@lemmy.world 5 points 2 days ago

Nah, trust me, we just need a better, more realistic-looking ink. $500 billion to ink development oughta do it.

[–] Perspectivist@feddit.uk 4 points 2 days ago

Fair and flawless comparison. I've got nothing to add.

[–] altphoto@lemmy.today 3 points 1 day ago* (last edited 1 day ago)

They have invented a thing that needs someone to want something for it to do it. We have yet to see an artificial EGO. An AEGO.

It reminds me of the public's reaction to the 1896 documentary The Arrival of a Train at La Ciotat Station. https://en.wikipedia.org/wiki/L%27Arriv%C3%A9e_d%27un_train_en_gare_de_La_Ciotat

[–] bss03@infosec.pub 3 points 17 hours ago

Clair Obscur: Expedition 33
Clair Obscur: Expedition to meet the Dessandre Family
