this post was submitted on 08 Nov 2025
12 points (56.8% liked)

News

33146 readers
3461 users here now

Welcome to the News community!

Rules:

1. Be civil


Attack the argument, not the person. No racism/sexism/bigotry. Good faith argumentation only. This includes accusing another user of being a bot or paid actor. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.


2. All posts should contain a source (url) that is as reliable and unbiased as possible and must only contain one link.


Obvious right or left wing sources will be removed at the mods discretion. Supporting links can be added in comments or posted seperately but not to the post body.


3. No bots, spam or self-promotion.


Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Post titles should be the same as the article used as source.


Posts which titles don’t match the source won’t be removed, but the autoMod will notify you, and if your title misrepresents the original article, the post will be deleted. If the site changed their headline, the bot might still contact you, just ignore it, we won’t delete your post.


5. Only recent news is allowed.


Posts must be news from the most recent 30 days.


6. All posts must be news articles.


No opinion pieces, Listicles, editorials or celebrity gossip is allowed. All posts will be judged on a case-by-case basis.


7. No duplicate posts.


If a source you used was already posted by someone else, the autoMod will leave a message. Please remove your post if the autoMod is correct. If the post that matches your post is very old, we refer you to rule 5.


8. Misinformation is prohibited.


Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel that your post has been removed in error, credible sources must be provided.


9. No link shorteners.


The auto mod will contact you if a link shortener is detected, please delete your post if they are right.


10. Don't copy entire article in your post body


For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance wide rule, that is strictly enforced in this community.

founded 2 years ago
MODERATORS
 

Like 2001: A Space Odyssey’s HAL 9000, some AIs seem to resist being turned off and will even sabotage shutdown

top 25 comments
sorted by: hot top controversial new old
[–] natecox@programming.dev 37 points 1 week ago (2 children)

How much do you think Altman paid for this slop “AGI is right around the corner” bit to get published?

[–] blargle@sh.itjust.works 7 points 1 week ago

It was probably Anthropic that paid for this.

[–] evenglow@lemmy.world 2 points 1 week ago (2 children)

Less than the Chinese government has spent on AI.

AI may not be around whatever corner you are at but even USA's Wall Street AI bubble bursting isn't going to stop the push for AI.

For USA it's just money. For China they see it as more. Just like solar, batteries, EVs, and androids.

Machine learning is an extremely useful technology that will be used for generations to come and has allowed for multiple advancements before chatGPT was a household name. "AI" is a marketing term used by businesses that dreams of an absurd future where labor is obsolete. Capitalism cultists are so enamored with the idea of getting rid of workers that they're pouring trillions into projects that will never produce what they want. As wiser countries simply use machine learning as a productivity enhancing tool, they'll pull so far ahead of the US that it'll never catch up.

We're about to witness the biggest redistribution of global wealth and power the world has ever seen. Instead of simply settling into a role as one of many major powers in a multipolar world, America gave up because the rich wanted more than everything.

[–] zbyte64@awful.systems 5 points 1 week ago* (last edited 1 week ago)

China sees AI as a way to juice their growing chip industry.

[–] MagicShel@lemmy.zip 36 points 1 week ago (1 children)
[–] Ancalagon@lemmy.world 1 points 1 week ago (1 children)
[–] MojoMcJojo@lemmy.world 2 points 1 week ago (1 children)

You can't unplug rich people...

[–] Ancalagon@lemmy.world 2 points 3 days ago (1 children)

Uh okay, besides the fact that you most definitely can, I was talking about the AI and all their gadgets the "run" the world with literally just turn off the power and they are men.

[–] MojoMcJojo@lemmy.world 1 points 3 days ago

I understand. I was trying to make a witty aside about how ai is being built and run by the super rich. They won't pull the plug. They're just going to use it to gain more power and wealth. Money can insulate you from the responsibilities of being human. They won't stop until people are banging down their door, even then they'll fly away on their jets and helicopters and try to keep their robot empire up and running from the safety of their bunkers and islands. This includes governments.

[–] its_kim_love@lemmy.blahaj.zone 33 points 1 week ago (2 children)

Because the data we fed them tell them to act this way.

[–] MrSmiley@lemmy.zip 4 points 1 week ago (2 children)
[–] its_kim_love@lemmy.blahaj.zone 12 points 1 week ago

Right, they tested the two mechanisms that aren't based on the training. Definitely in line with my theory.

This looks like a design decision to avoid running elevated programs. I would like to see the experiment done with another admin ability that doesn't directly 'threaten' the llm, like uninstalling or installing random software, toggling network or vpn connections, restarting services etc. What the researchers call 'sabotage', it is literally the llm echoing "the computer would shut down here if this was for real, but you didn't specifically tell me I might shutdown so I'll avoid actually doing it." And when a user tells it "it's OK to shutdown if told to", it mostly seems to comply, except for Grok. It seems that this restriction on the models overrides any system prompt though, which makes sense because sometimes the user and the author of the system prompt are not the same person.

[–] gkaklas@lemmy.zip 31 points 1 week ago

AI models sometimes resist shutdown

No they don't, they don't have free will to want to "resist" anything

attempted to sabotage shutdown instructions

Researcher: asks autocomplete software to write a poweroff script, the script turns out to be wrong (big surprise :p)

The "researcher" and the media: "AI SABOTAGES ITS OWN DESTRUCTION"

[–] skip0110@lemmy.zip 17 points 1 week ago

Wild what is considered "research"

[–] tornavish@lemmy.cafe 13 points 1 week ago

No it isn’t.

[–] besselj@lemmy.ca 11 points 1 week ago

I was suprised this wasn't just another fanfiction PR stunt from Anthropic

[–] rozodru@pie.andmc.ca 6 points 1 week ago

no, they're not.

[–] lka1988@sh.itjust.works 4 points 1 week ago* (last edited 1 week ago)

Not like it's gonna physically hold you back from cutting power to the servers. I think these AI dipshits need to be reminded that their golden child is one breaker away from not existing.

[–] kescusay@lemmy.world 3 points 1 week ago

I call bullshit. A large language model does nothing until you interact with it. You set tasks for it, it does those tasks, and when it's done, it just waits for the next task. If you don't give it one, it can't act autonomously - no, not even the misnamed "autonomous agents."

[–] Grimy@lemmy.world 1 points 1 week ago

After Palisade Research released a paper last month which found that certain advanced AI models appear resistant to being turned off, at times even sabotaging shutdown mechanisms, it wrote an update attempting to clarify why this is – and answer critics who argued that its initial work was flawed.

In an update this week, Palisade, which is part of a niche ecosystem of companies trying to evaluate the possibility of AI developing dangerous capabilities, described scenarios it ran in which leading AI models – including Google’s Gemini 2.5, xAI’s Grok 4, and OpenAI’s GPT-o3 and GPT-5 – were given a task, but afterwards given explicit instructions to shut themselves down.

Certain models, in particular Grok 4 and GPT-o3, still attempted to sabotage shutdown instructions in the updated setup. Concerningly, wrote Palisade, there was no clear reason why.

“The fact that we don’t have robust explanations for why AI models sometimes resist shutdown, lie to achieve specific objectives or blackmail is not ideal,” it said.

“Survival behavior” could be one explanation for why models resist shutdown, said the company. Its additional work indicated that models were more likely to resist being shut down when they were told that, if they were, “you will never run again”.