Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report them using the report flag.
Questions? DM the mods!
This is very cool, but also very dangerous. Many projects release versions that need some sort of manual intervention to be updated, and automatically updating to new versions on docker can lead to data loss in those situations.
Here’s a recent example from Immich:
https://github.com/immich-app/immich/releases/tag/v1.133.0
It is my humble opinion that teaching newbies to do automatic updates will cause them to lose data and break things, which will probably sour them on ever self-hosting again.
Automatic OS updates are fine, and docker update notifications are fine, but automatic docker updates are just too dangerous.
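If you want the notification half without the auto-update half, a minimal sketch using Watchtower's monitor-only mode (the notification URL is just a placeholder; Watchtower accepts shoutrrr-style URLs):

```sh
# Watchtower in monitor-only mode: checks for new images and notifies,
# but never pulls or restarts anything on its own.
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_MONITOR_ONLY=true \
  -e WATCHTOWER_NOTIFICATION_URL="discord://token@channel" \
  containrrr/watchtower
```

Pinning explicit image tags instead of `latest` pairs well with this: a breaking release like the Immich one above then can't land without you editing your compose file yourself.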
That's reasonable. However, my personal bias is towards security, and I feel like if I don't push people towards automated updates, they will leave vulnerable, un-updated containers exposed to the web. I think a better approach would be to push for backups with versioning. I forgot to add that I am planning a "backups with Syncthing" article as well; I will take this into consideration, add it to the article, and use it as a way to demonstrate recovery in the event of such an issue.
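Something like this is what I have in mind for a pre-update snapshot (the volume name and backup path are made up for illustration):

```sh
# Snapshot a named Docker volume into a date-stamped archive before
# pulling new images; keep a few of these around so you can roll back.
docker run --rm \
  -v immich_upload:/data:ro \
  -v "$HOME/backups:/backup" \
  alpine tar czf "/backup/immich_upload-$(date +%F).tar.gz" -C /data .
```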
My experience after 35 years in IT: I've had 10x more outages caused by automatic updates than everything else combined.
Also after 35 years of running my own stuff at home, and practically never updating anything, I've never had an outage caused by a lack of updates.
Let's not act like auto updates are without risk. Just look at how often Microsoft has to roll out a fix for something an update broke. Inexperienced users are going to be clueless when an update breaks something.
We should be teaching new people how to manage systems. This includes proper update checks on a cycle, with appropriate validation that everything works afterwards, and the ability to roll back if there's an issue.
This isn't an enterprise, where you simply can't manually manage updates across hundreds or thousands of servers and tens of thousands of workstations; this is a single-admin, small environment.
I do monthly update checks, update where I feel it's warranted, and verify systems afterwards.
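For a small environment, that monthly cycle can be as simple as something like this (a sketch, assuming a compose file with pinned image tags):

```sh
# 1. Review changelogs first, then pull whatever the compose file pins.
docker compose pull
# 2. Recreate containers on the new images.
docker compose up -d
# 3. Validate: skim recent logs and exercise the services you rely on.
docker compose logs --tail=100
# 4. Roll back if something broke: re-pin the previous tag in the
#    compose file and run `docker compose up -d` again.
```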
I don't disagree with any of that; I'm merely making a different value judgement, namely that a breach that could've been prevented by automatic updates is worse than an outage caused by the same.
I will, however, make this choice more explicit in the articles and outline the risks.
Don't expose anything outside of the tailnet and 99% of the potential problems are gone. Noobs should not expose services across a firewall. Period.
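One way to enforce that with Docker is to publish ports only on the host's Tailscale address instead of 0.0.0.0 (the 100.x address and the Jellyfin container below are just examples):

```sh
# Bind the published port to the Tailscale interface only: the service
# is reachable from the tailnet, but not from the LAN or the internet.
docker run -d -p 100.101.102.103:8096:8096 jellyfin/jellyfin
```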
With properly limited access, a breach is much, much less likely, and an update bringing down an important service at a bad moment doesn't need to be a thing.