this post was submitted on 15 Dec 2025
96 points (100.0% liked)

Linux


Stemming from a security researcher and his team proposing a new Linux Security Module (LSM) three years ago that was never accepted into the mainline kernel, he has raised the lack of review/action with Linus Torvalds and the mailing lists. In particular, he is seeking more guidance on how new LSMs should be introduced and has raised the possibility of taking the issue to the Linux Foundation Technical Advisory Board (TAB).

Today's mailing list post lays out that TSEM, an LSM proposed as a framework for generic security modeling, has seen little review activity over the past three years and no specific guidance on getting it accepted into the Linux kernel. The developers are therefore seeking documented guidance on how new Linux Security Module submissions should optimally be introduced; otherwise they are "prepared to pursue this through the [Technical Advisory Board] if necessary."
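For context on what such a module actually involves, here is a minimal, hypothetical sketch of how an LSM registers a hook with the kernel's security framework. The names (demo_lsm, demo_file_open) are made up for illustration, this is not TSEM's code, and registration details (for example the third argument to security_add_hooks) have changed across kernel versions, so treat it as a rough sketch rather than a working module.

/*
 * Illustrative sketch only: a tiny LSM that registers a single hook.
 * Names are hypothetical; exact APIs vary by kernel version.
 */
#include <linux/lsm_hooks.h>
#include <linux/init.h>
#include <linux/fs.h>

/* Example hook: invoked when a file is opened. */
static int demo_file_open(struct file *file)
{
	/* A real module would evaluate its security model here. */
	return 0; /* 0 = allow the operation */
}

static struct security_hook_list demo_hooks[] __ro_after_init = {
	LSM_HOOK_INIT(file_open, demo_file_open),
};

static int __init demo_lsm_init(void)
{
	/* Older kernels take the LSM name as a string here;
	 * newer ones take a struct lsm_id pointer. */
	security_add_hooks(demo_hooks, ARRAY_SIZE(demo_hooks), "demo_lsm");
	return 0;
}

DEFINE_LSM(demo_lsm) = {
	.name = "demo_lsm",
	.init = demo_lsm_init,
};

Even a trivial module like this has to be wired into the kernel build and the boot-time LSM ordering, which is part of why submitters want clearer documentation on how new LSMs get reviewed and accepted.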

[–] bitcrafter@programming.dev 12 points 18 hours ago (1 children)

I barely trust natural intelligence with anything relating to security.

[–] rmt@programming.dev -3 points 17 hours ago

"Trust but verify" ... which just means doing due diligence as a professional, whether the crap^H^H^H^Hquality code and documentation is written by a human or AI.

Humans are incredibly good at saying dumb shit while making it seem like it could be the right thing, but LLMs are arguably better at it.

And you, and I, and everyone here, will fall for it... not always, but too often. We are all lazy thinkers by nature.