this post was submitted on 01 Jun 2025
261 points (96.1% liked)
Technology
70942 readers
3559 users here now
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
- Check for duplicates before posting, duplicates may be removed
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
I don't want to brigade, so I'll put my thoughts here. The linked comment is making the same mistake about self preservation that people make when they ask an LLM to "show it's work" or explain it's reasoning. The text response of an LLM cannot be taken at it's word or used to confirm that kind of theory. It requires tracing the logic under the hood.
Just like how it's not actually an AI assistant, but trained and prompted to output text that is expected to be what an AI assistant would respond with, if it is expected that it would pursue self preservation, then it will output text that matches that. It's output is always "fake"
That doesn't mean there isn't a real potential element of self preservation, though, but you'd need to dig and trace through the network to show it, not use the text output.