This post was submitted on 04 Dec 2025.

News


Welcome to the News community!

Rules:

1. Be civil


Attack the argument, not the person. No racism, sexism, or bigotry. Good-faith argumentation only; this includes not accusing another user of being a bot or paid actor. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.


2. All posts should contain a source (URL) that is as reliable and unbiased as possible, and must contain only one link.


Overtly right- or left-wing sources will be removed at the mods' discretion. Supporting links can be added in comments or posted separately, but not in the post body.


3. No bots, spam or self-promotion.


Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Post titles should match the title of the source article.


Posts whose titles don't match the source won't be removed outright; AutoMod will notify you, and if your title misrepresents the original article, the post will be deleted. If the site changed its headline, the bot might still contact you; just ignore it, and we won't delete your post.


5. Only recent news is allowed.


Posts must be news from the most recent 30 days.


6. All posts must be news articles.


No opinion pieces, listicles, editorials, or celebrity gossip. All posts will be judged on a case-by-case basis.


7. No duplicate posts.


If a source you used was already posted by someone else, AutoMod will leave a message. Please remove your post if AutoMod is correct. If the matching post is very old, see rule 5.


8. Misinformation is prohibited.


Misinformation and propaganda are strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel your post was removed in error, you must provide credible sources.


9. No link shorteners.


AutoMod will contact you if a link shortener is detected; please delete your post if it's right.


10. Don't copy an entire article into your post body.


For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.

[–] scintilla@crust.piefed.social 3 points 2 days ago (1 children)

Hot take, I know. I'm not talking about LLMs, obviously, but AI could absolutely be implemented to reduce risk in a nuclear plant. Having a constant double-check running on every decision could, at the very least, reduce the risk of a tired human pressing the wrong button and causing a SCRAM.

[–] velindora@lemmy.cafe 9 points 1 day ago (3 children)

AI is great when it just watches and says, “this is weird, maybe look at it.” You can never have too many eyes.

[–] lka1988@sh.itjust.works 2 points 1 day ago (2 children)

That is an appropriate usage of "AI", as it's basically just pattern recognition. Something computers are really good at.
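
To make that concrete, here's a toy sketch of the kind of advisory pattern-watching I mean: plain rolling statistics, nothing fancy, and the class name and thresholds are invented for illustration. It only flags; it never acts.

```python
from collections import deque
from statistics import fmean, stdev

class SensorAnomalyFlagger:
    """Advisory-only anomaly flag: compare each new reading to a
    rolling window and flag values far from the recent mean.
    It alerts a human; it never takes an action itself."""

    def __init__(self, window: int = 120, threshold_sigma: float = 4.0):
        self.readings = deque(maxlen=window)
        self.threshold_sigma = threshold_sigma

    def observe(self, value: float) -> bool:
        """Return True if the reading looks anomalous (worth a look)."""
        is_weird = False
        if len(self.readings) >= 30:  # need enough history to judge
            mu = fmean(self.readings)
            sigma = stdev(self.readings)
            if sigma > 0 and abs(value - mu) > self.threshold_sigma * sigma:
                is_weird = True
        self.readings.append(value)
        return is_weird
```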

[–] phutatorius@lemmy.zip 2 points 8 hours ago

> it’s basically just pattern recognition

Only of a very specific kind.

> Something computers are really good at.

They're good at recognizing the patterns they're programmed to recognize. That tells you nothing about the significance of a pattern, its impact if detected, or the statistical error rates of the detection algorithm and its input data. All of those are critical to making real-life decisions. So is explainability, which existing AI systems don't do very well. At least Anthropic recognizes that as an important research topic. OpenAI seems more concerned with monetizing what it already has.

For something safety-critical, you can monitor critical parameters in the system's state space and alert if they go (or are likely to go) out of safe bounds. You can also model the likely effects of corrective actions. Neither of those requires any kind of AI, though you might feed ML output into your effects model(s) when constructing them. Generally speaking, if lives or health are on the line, you're going to want something more deterministic than AI to be driving your decisions. There's probably already enough fuzz due to the use of ensemble modeling.
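
As a minimal sketch of what that deterministic monitoring could look like (the parameter names, limits, and margins below are invented purely for illustration, not anything from a real plant):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SafeBound:
    name: str
    low: float
    high: float
    margin: float  # warn this far inside the hard limits

def check_bounds(state: dict[str, float], bounds: list[SafeBound]) -> list[str]:
    """Deterministic monitor: report parameters that are out of their
    safe bounds, or close enough to a limit to warrant a warning."""
    alerts = []
    for b in bounds:
        value = state[b.name]
        if not (b.low <= value <= b.high):
            alerts.append(f"ALARM: {b.name}={value} outside [{b.low}, {b.high}]")
        elif value < b.low + b.margin or value > b.high - b.margin:
            alerts.append(f"WARN: {b.name}={value} near limit")
    return alerts

# Hypothetical parameters and limits, for illustration only.
bounds = [SafeBound("coolant_temp_C", 250.0, 330.0, 10.0),
          SafeBound("primary_pressure_MPa", 14.0, 16.0, 0.3)]
print(check_bounds({"coolant_temp_C": 325.0, "primary_pressure_MPa": 15.2}, bounds))
```

Every branch in something like this is enumerable and testable, which is the point: you can verify its behavior for every input, which you can't do with a learned model.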

What computers are really good at is aggregating large volumes of data from multiple sensors, running statistical calculations on that data, transforming it into something a person can visualise, and providing decision aids to help the operators understand the consequences of potential corrective actions. But modeling the consequences depends on how well you've modeled the system, and AIs are not good at constructing those models. That still relies on humans, working according to some brutally strict methodologies.
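
In the same illustrative spirit, the aggregation side can be as boring as this (sensor names invented; a real system would have validated models and a proper UI behind it):

```python
import statistics

def summarize_channel(samples: list[float]) -> dict[str, float]:
    """Reduce one sensor channel to operator-facing statistics."""
    return {
        "mean": statistics.fmean(samples),
        "stdev": statistics.stdev(samples) if len(samples) > 1 else 0.0,
        "min": min(samples),
        "max": max(samples),
    }

def build_dashboard(sensors: dict[str, list[float]]) -> dict[str, dict[str, float]]:
    """Summarize every channel; a UI layer would render this for operators."""
    return {name: summarize_channel(samples) for name, samples in sensors.items()}

print(build_dashboard({
    "coolant_temp_C": [301.2, 301.4, 301.1, 302.0],
    "neutron_flux_rel": [0.98, 0.99, 1.01, 1.00],
}))
```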

Source: I've written large amounts of safety-critical code and have architected several safety-critical systems that have run well. There are some interesting opportunities for more use of ML in my field. But in this space, I wouldn't touch LLMs with a barge pole. LLM-land is Marlboro country. Anyone telling you differently is running a con.

[–] velindora@lemmy.cafe -1 points 1 day ago

Not according to most of the people on Lemmy. They would have a nuclear meltdown (literally and figuratively) before allowing a computer program labeled “AI” to identify risk.

[–] BarneyPiccolo@lemmy.today 1 points 7 hours ago

The problem is that some want to make it the warning, the solution, and the implementation, all without any human monitoring at all.

[–] phutatorius@lemmy.zip 1 points 8 hours ago (1 children)

> you can never have too many eyes

You can certainly have too many false positives, wasting everyone's time and distracting them from real problems.

[–] velindora@lemmy.cafe 1 points 6 hours ago

What are the real problems inside a nuclear facility that would go unidentified because people were “wasting their time” chasing alerts raised by a computer?