This is about contributing code that was co-created with an LLM like Copilot, not about adding "AI" features to Fedora.
skilltheamps
Eh well, "tried" is putting it mildly there. Simply locking out maintainers is neither nice nor helpful for such a project. One could also have made a soft fork that tracks upstream and adopts changes after a review. If you stay on good terms with upstream, you could also go for a dual-license strategy, selling licenses for the CRA-compliant soft fork to other commercial consumers of the project, who in return get to skip the review plus the risk. That in turn could fund the reviews and general project support.
Non-commercial open-source projects are exempt from the CRA. The CRA only starts to apply at the point where something is first sold commercially. If a company builds non-commercial open source into a commercial product, it has to ensure itself that the product, and with it its components, meets the CRA requirements.
Non-commercial open source is not a supply chain. People always talk as if there were a "supplier"; that is not the case. It is just a project floating around, and if I help myself to it to turn it into a commercial product and make money with it, then I also have to make sure myself that it complies with the rules of the market.
This is not how redundancy works on cable cars. These systems are not copies of one another, but different systems with different working principles. On systems with a pulling component (like the cable here) and a suspension component (like a suspension rope or rails), a safety brake on the cabin is held open only by the tension of the pulling cable. Should the pulling force be too low, the brake clamps onto the suspension component.
Most of the time there's sadly no media coverage of the safety systems. So for the accidents I followed, either I don't know why the safety systems didn't work, or they were manipulated. For example, in the 2021 case at Monte Mottarone, the brake was propped open with maintenance tools.
Given the age of the system in Lisbon, I hope it was updated to these safety standards. The most informative thing I could find was this image showing the underside of the wagon. It is still difficult to tell how it works in detail, but the part protruding from the cable mount could, I think, be such a catching brake acting on the inside of the cable guide. And to me it looks like the cable pulled out of the holder due to cracks in the holder.
Be cautious with the answers when asking things like this. In discussion boards like this one, many people are (rightfully) very excited about selfhosting and eager to share what they have learned, but may ("just") have very few years of experience. There's a LOT to learn in this space, and it also takes a very long time to find out what is truly foolproof and easily recoverable.
First off, you want your OS to be disposable. And just as the OS should be decoupled, all the things you run should be decoupled from one another. Don't build complex structures that take effort to rebuild. When you build something, that is state. You want to minimize the amount of state you need to keep track of. Ideally, the only state you should have is your payload data. That is impossible of course, but you get the idea.
Immutable distros are indeed the way to go for long-term reliability. And ideally you want immutability by booting images (like coreOS or Fedora IoT). Distros like microOS are not really immutable, they still use the regular package manager. They only make it a little more reliable by encouraging flatpak/docker/etc (and therefore cutting down on the packages managed by the package manager) and by a slightly more controlled update procedure (making updates transactional). But ultimately, once your system is defective in some way, the package manager will build on top of that defect. So you keep carrying that fault along. In that sense it is not immune to "os drift" (well expressed), it is just that the drift happens more slowly.

"Proper" immutable distros that work with images are self-healing, because once you rebase to another image (could be an update or a completely different distribution, doesn't matter), you have a fresh system that has nothing to do with the previous image. Furthermore, the new image does not get composed on your computer, it gets put together upstream. You only run the final result, which you know is bit for bit what was tested by the distro maintainers.

So microOS is like receiving a puzzle and a manual for how to put it together, and gluing it into a frame is the "immutability". Updates are like loosening the glue of specific pieces and gluing in new ones. On coreOS you receive the glued puzzle and do not have to do anything yourself. Updates are like receiving an entirely new glued puzzle.

This also comes down to the state idea: some mutable system that was set up a long time ago and has even drifted a bit has a ton of state. A truly immutable distro has a very tiny state: it is merely the hash of the image you run, plus your changes in /etc (which should be minimal and well documented!).
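To make that "tiny state" concrete: on an image-based system you can read off the single commit hash you are running. A minimal sketch (assuming an rpm-ostree based system like coreOS or Fedora IoT, and that its JSON output has the deployments/booted/checksum fields I remember):

    import json
    import subprocess

    # Ask rpm-ostree for the machine's deployments as JSON
    # (assumes an rpm-ostree based system such as Fedora CoreOS / IoT).
    status = json.loads(
        subprocess.run(
            ["rpm-ostree", "status", "--json"],
            check=True, capture_output=True, text=True,
        ).stdout
    )

    # The booted deployment is identified by a single commit checksum;
    # apart from local edits under /etc, that hash is essentially the whole OS state.
    for deployment in status["deployments"]:
        if deployment.get("booted"):
            print("booted image commit:", deployment["checksum"])

Everything reproducible hangs off that one hash, so the only OS-side things worth tracking are that hash and your /etc changes.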
Also, you want to steer clear of things like Proxmox and of LXC containers and VMs in general. None of these are immutable (let alone immune to drift), and you only burden yourself with maintaining more mutable things with tons of state.
Docker is a good way to run your stuff. Just make sure to put all the persistent data that belongs together in subfolders of one subvolume, snapshot that, and then back up these snapshots. That way the snapshot is an atomic copy from a single point in time, which is what the data(base)'s ACID properties need in order to recover cleanly. Your "backups" would be corrupted otherwise, since they would be a wild mosaic from different points in time. To be able to roll back cleanly if an update goes wrong, you should also snapshot the image hash together with the persistent data. This way you preserve the complete state of a Docker service before updating. Here you also minimize the state: you only have your payload data, the image hash and your compose.yml.
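As a rough sketch of that idea (the paths, the service name and the digest file are just made-up examples, not a fixed convention):

    import subprocess
    from datetime import datetime, timezone

    # Hypothetical layout: each service lives in its own btrfs subvolume.
    SERVICE_SUBVOL = "/srv/nextcloud"    # contains compose.yml + persistent data
    SNAP_DIR = "/srv/.snapshots"         # assumed snapshot target directory
    IMAGE = "nextcloud:29"               # tag as written in compose.yml

    def run(cmd):
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

    # Record the exact image digest next to the data, so the snapshot
    # freezes payload data *and* the image version as one unit.
    digest = run(["docker", "inspect", "--format", "{{index .RepoDigests 0}}", IMAGE])
    with open(f"{SERVICE_SUBVOL}/image-digest.txt", "w") as f:
        f.write(digest + "\n")

    # A read-only snapshot of the whole subvolume is an atomic copy of the
    # database files, config and the recorded digest at one point in time.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    run(["btrfs", "subvolume", "snapshot", "-r", SERVICE_SUBVOL, f"{SNAP_DIR}/nextcloud-{stamp}"])

Rolling back then means restoring that snapshot and starting the service against the digest recorded inside it.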

You need to ask yourself what properties you want in your storage, then you can judge which solution fits. For me it is:
The amount of data I'm handling fits on larger hard drives (so I don't need pools), but I don't want to waste storage space. And my home server is not my learn-and-break-stuff environment anymore; it just needs to work.
I went with btrfs raid 1; every service lives in its own subvolume. The containers are precisely referenced by their digest hashes, which get snapshotted together with all persistent data. So every snapshot holds exactly the amount of data that is required to do a seamless rollback. Snapper maintains a timeline of snapshots for every service. Updating is semi-automated: snapshot -> update digest hashes from container tags -> pull new images -> restart service. Nightly offsite backups happen with btrbk, which mirrors the snapshots incrementally to another offsite server running btrfs.
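Roughly, one update cycle of that semi-automation could look like the following sketch (the snapper config name, the paths and the use of skopeo to resolve the digest are just illustrative choices):

    import json
    import subprocess

    # Hypothetical example values; the real loop would iterate over all services.
    SNAPPER_CONFIG = "nextcloud"              # one snapper config per service subvolume
    COMPOSE_DIR = "/srv/nextcloud"
    TAG = "docker.io/library/nextcloud:29"    # human-friendly tag to track

    def run(cmd, **kw):
        return subprocess.run(cmd, check=True, capture_output=True, text=True, **kw).stdout

    # 1. snapshot: freeze current data + currently pinned digest in the service's timeline
    run(["snapper", "-c", SNAPPER_CONFIG, "create", "--description", "pre-update"])

    # 2. resolve the tag to an immutable digest (skopeo reads the registry manifest)
    digest = json.loads(run(["skopeo", "inspect", f"docker://{TAG}"]))["Digest"]
    print("pinning", TAG, "->", digest)
    # ...here compose.yml would be rewritten to reference image@<digest>...

    # 3. pull the new image and 4. restart the service on it
    run(["docker", "compose", "pull"], cwd=COMPOSE_DIR)
    run(["docker", "compose", "up", "-d"], cwd=COMPOSE_DIR)

If the update goes wrong, restoring the pre-update snapshot brings back both the data and the digest it was running against.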