this post was submitted on 01 Dec 2025

Hacker News



[–] stoy@lemmy.zip 5 points 2 weeks ago

Just as a few months ago, when another AI/LLM destroyed a project in a similar way, this happened due to a failure to classify AI/LLM risks properly.

  1. In both instances the person treated the AI/LLM as a person capable of reason. It is not. Remember what Isaac Asimov wrote about the robots in his universe: they are capable of extreme logic but cannot deal with reason at all. The same applies here; the AI/LLM can absolutely be logical, but it cannot use reason.
  2. Failure to treat the AI/LLM as a potentially bad actor: the person looked at the AI/LLM, probably ran a few simple tests, then granted it full access to highly sensitive files. Because the software is quite good at chatting with humans the way we chat with each other, it lulls the developer into trusting it as if it were a person. See the sketch below this list for what least-privilege access could look like instead.
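
To make point 2 concrete, here is a minimal sketch of treating the AI/LLM as a potentially bad actor: instead of handing an agent raw filesystem access, you expose only a narrow, read-only tool rooted in one directory. The paths and function names (`ALLOWED_ROOT`, `read_project_file`) are made up for illustration and are not taken from the incident being discussed.

```python
from pathlib import Path

# The only directory tree the agent is ever allowed to see (hypothetical path).
ALLOWED_ROOT = Path("/home/dev/project/docs").resolve()

def read_project_file(relative_path: str) -> str:
    """Tool handed to the agent: read-only, confined to ALLOWED_ROOT."""
    target = (ALLOWED_ROOT / relative_path).resolve()
    # Refuse anything that escapes the sandbox, e.g. "../../.ssh/id_rsa".
    if ALLOWED_ROOT not in target.parents and target != ALLOWED_ROOT:
        raise PermissionError(f"Access outside sandbox refused: {relative_path}")
    return target.read_text()

# Write, delete, and shell operations are simply never exposed as tools,
# so a confused or confidently wrong model has nothing dangerous to call.
```

The point of the sketch is the design choice, not the specific code: the model never gets trust, only a small allowlisted capability, the same way you would scope credentials for any untrusted third party.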