this post was submitted on 16 Dec 2025

Selfhosted


I am working on setting up a home server, but I want it to be reproducible if I need to make large changes, switch out hardware, or restore from a failure. What do you use to handle this?

all 41 comments
[–] emerald@lemmy.blahaj.zone 21 points 1 day ago

How do you manage your home server configuration

Poorly, which is to say that I just let borgmatic back up all my compose files and hope for the best
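
For anyone curious what "hope for the best" looks like in practice, here's a minimal borgmatic sketch; the paths, repo location, and stack name are all made up:

    # borgmatic reads /etc/borgmatic/config.yaml; at minimum it lists what
    # to back up and where, e.g.:
    #
    #   source_directories:
    #     - /opt/stacks                        # where the compose files live
    #   repositories:
    #     - path: ssh://backup-host/./borg-repo

    # run all configured backups
    sudo borgmatic create --verbosity 1 --stats

    # fish a single compose file back out of the latest archive
    sudo borgmatic extract --archive latest --path opt/stacks/nextcloud/compose.yaml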

[–] RheumatoidArthritis@mander.xyz 19 points 1 day ago (1 children)

Git-controlled docker-compose files and backed-up Docker data volumes. Pretty easy to go back to a point in time.

That's actually a really good idea. From now on I will do the same. Thanks!
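
If you want to try the same thing, a rough sketch of both halves; the volume and directory names are examples:

    # keep the compose files in git
    cd /opt/stacks && git init && git add . && git commit -m "baseline"

    # snapshot a named volume to a tarball via a throwaway container
    docker run --rm \
      -v nextcloud_data:/data:ro \
      -v "$PWD/backups":/backup \
      alpine tar czf /backup/nextcloud_data.tar.gz -C /data .

    # restore later by unpacking into a fresh volume
    docker run --rm \
      -v nextcloud_data:/data \
      -v "$PWD/backups":/backup \
      alpine tar xzf /backup/nextcloud_data.tar.gz -C /data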

[–] Object@sh.itjust.works 16 points 1 day ago* (last edited 1 day ago)

reproducible

You've tried writing bash scripts that set things up for you, haven't you? It's NixOS for you.

[–] adf@lemmy.world 12 points 1 day ago (1 children)
[–] xyx@sh.itjust.works 4 points 1 day ago (1 children)

Out of curiosity: are you running nix-ops with nix-secrets, or how did you cover orchestration & credentials?

[–] adf@lemmy.world 4 points 1 day ago

I use flakes, and all hosts are configured from a single flake, where each host has its own configuration. I have some custom modules and even custom packages in the same flake. I also use home-manager. I have 4 hosts managed in total: home server, laptop, gaming PC, and a cloud server. All hosts were provisioned using nixos-anywhere + disko, except for the first one, which was installed manually. For secrets I use sops-nix; encrypted secrets are stored in the same flake/repo.
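
For reference, the commands for a setup like this look roughly as follows; the flake attribute names and addresses are placeholders:

    # first-time provisioning of a fresh machine (partitions it via disko)
    nix run github:nix-community/nixos-anywhere -- \
      --flake .#homeserver root@192.0.2.10

    # day-to-day: rebuild a host from the same flake
    nixos-rebuild switch --flake .#homeserver --target-host root@192.0.2.10

    # edit an encrypted secret in place (keys are declared in .sops.yaml)
    sops secrets/homeserver.yaml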

[–] yah@lemmy.powerforme.fun 10 points 1 day ago

With NixOS, you get a reproducible environment. When you need to change your hardware, you simply back up your data, apply your existing NixOS configuration on the new machine, and you've reproduced your previous environment.

I use it to manage all my services.

[–] freeearth@discuss.tchncs.de 8 points 1 day ago

NixOS for configuration and restic for data
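
A bare-bones restic workflow, for anyone new to it; the repo path and data directory are examples:

    export RESTIC_REPOSITORY=/mnt/nas/restic-repo
    export RESTIC_PASSWORD_FILE=/etc/restic/password

    restic init                                  # once, to create the repo
    restic backup /srv/appdata                   # the actual data backups
    restic snapshots                             # list available restore points
    restic restore latest --target /srv/restore  # test a restore somewhere safe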

[–] non_burglar@lemmy.world 7 points 1 day ago

Incus and ansible
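
In case that's too terse, a sketch of how the two combine; instance and inventory names are made up:

    # incus handles the instances (containers or VMs)
    incus launch images:debian/12 web
    incus launch images:debian/12 db --vm

    # ansible configures what runs inside them, over SSH or the
    # community.general incus/lxd connection plugins
    ansible-playbook -i inventory.ini site.yml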

[–] lka1988@lemmy.dbzer0.com 6 points 20 hours ago
[–] giacomo@lemmy.dbzer0.com 6 points 1 day ago

Systemd unit files, because it's all podman containers.
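
One common way to wire that up on current podman (4.4+) is Quadlet, where a small unit-like file generates the systemd service; a sketch with an example image and port:

    mkdir -p ~/.config/containers/systemd
    cat > ~/.config/containers/systemd/web.container <<'EOF'
    [Container]
    Image=docker.io/library/nginx:alpine
    PublishPort=8080:80

    [Install]
    WantedBy=default.target
    EOF

    systemctl --user daemon-reload     # generates web.service from the file
    systemctl --user start web.service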

[–] atzanteol@sh.itjust.works 5 points 1 day ago (1 children)

Terraform and ansible. Script service configuration and use source control. Containerize services where possible to make them system agnostic.

[–] Anonymouse@lemmy.world 2 points 1 day ago (1 children)

How do you decide what's for Terraform and what's for Ansible?

[–] atzanteol@sh.itjust.works 1 points 2 hours ago

They're good at different things.

Terraform is better at "here is a configuration file - make my infrastructure look like it" and Ansible is better at "do these things on these servers".

In my case I use Terraform to create proxmox VMs and then Ansible provisions and configures software on those VMs.
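
Concretely, the split tends to look like this; the plan file, inventory, and playbook names are illustrative:

    # declarative half: make the infrastructure match the .tf files
    terraform init
    terraform plan -out=tfplan
    terraform apply tfplan          # e.g. creates the Proxmox VMs

    # imperative half: do these things on those servers
    ansible-playbook -i inventory/homelab.ini site.yml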

[–] dontsayaword@piefed.social 4 points 1 day ago* (last edited 1 day ago)

I used to have a file with every CLI command and notes on how each thing was set up. When I had to reinstall from scratch, it took all day going through lots of manual steps and remembering how it should all go.

Recently I converted the whole thing to Ansible. Now I can rebuild my entire system on a brand-new OS installation with one command that completes in minutes. It's all modular, and I can add new services easily, whether they're docker containers or scripts or whatever. If I ever break anything, a run will reset everything to its intended state and leave the rest alone. And it's free and pretty easy to learn and start using.

Plus I use git along with it for version control, so I can always revert to any previous configuration instantly.
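
The "one command" plus git-rollback workflow, sketched with hypothetical file and role names:

    # converge the whole box; a second run should report zero changes
    ansible-playbook -i inventory.ini site.yml

    # preview what would change without touching anything
    ansible-playbook -i inventory.ini site.yml --check --diff

    # revert one service's config to an earlier commit, then re-run
    git checkout <commit> -- roles/jellyfin/
    ansible-playbook -i inventory.ini site.yml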

[–] thirdBreakfast@lemmy.world 4 points 1 day ago (2 children)

Proxmox on the metal, then every service as a docker container inside an LXC or VM. Proxmox does nice snapshots (to my NAS), making it a breeze to move them from machine to machine or blow away the Proxmox install and reimport them. All the docker compose files are in git, and the things I apply to every LXC/VM (my monitoring endpoint, apt cache setup, etc.) are applied with ansible playbooks, also in git. All the LXCs are cloned from a golden image that has my keys, tailscale setup, etc.
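
The Proxmox side of that, as shell commands; the VMIDs and storage names are invented:

    vzdump 105 --storage nas-backups --mode snapshot   # back up LXC/VM 105 to the NAS
    pct clone 9000 120 --hostname newservice           # new LXC cloned from golden image 9000
    pct snapshot 120 pre-upgrade                       # cheap local snapshot before changes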

[–] eli@lemmy.world 2 points 1 day ago (1 children)

This is pretty much my setup as well. Proxmox on bare metal, then everything I run is in Ubuntu LXC containers, each with Docker installed running whatever stack.

I just installed Portainer and got the standalone agents installed on each LXC container; it's helped massively with managing each Docker setup.

Of course you can do whatever base image you want for the LXC container, I just prefer Ubuntu for my homelab.

I do need to set up a golden image to make stand-ups easier... one thing at a time though!

[–] radiogen@lemmy.zip 1 points 1 day ago

So you run the Docker containers inside Proxmox containers (LXC)?

Snapshots, largely; almost everything is VMs and Docker containers. I have one VM set aside for dev work to test configs before updating the prod boxes as well.

[–] Seefoo@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

I use git and commit configs/setup/scripts/etc. to it. That way I at least have a roadmap for how to get everything back. Testing this can be difficult, but it depends on what you care about.

  • Testing my kopia backups of important data? That I manually test every once in a while (see the sketch after this list).
  • Testing whether my ZFS setup script is 100% identical to my setup? That's less important; as long as I have a general idea, I can figure out the gaps and improve the script for the next time around. Obviously, you can spend a lot more time ensuring scripts and whatnot stay consistent, but it depends on what you care about!
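
The manual kopia test can be as simple as restoring to a scratch directory and diffing; the snapshot ID and paths below are placeholders:

    kopia repository connect filesystem --path /mnt/nas/kopia
    kopia snapshot list                             # find a restore point
    kopia restore k8a1b2c3 /tmp/restore-test        # restore it somewhere harmless
    diff -r /tmp/restore-test /srv/important-data   # spot-check the contents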

For a lot of my service config, git has always worked well for me, and I can go back to older configs if needed. You can get super specific here and save versions in git, then have something update the versions (e.g. WUD).

[–] _cryptagion@anarchist.nexus 3 points 1 day ago (1 children)

Well I use Unraid, so I just back up my whole config folder along with the OS itself in case I need to flash it to a new USB. In other words, I just clone the whole thing. It means I can be up and running in a few minutes if everything was corrupted.

A data drive loss is pretty simple too; the array just emulates the lost data until I can get a new HDD in. That takes a little longer to fix, though.

[–] turmacar@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

I think it gets some flak but I've been super happy with Unraid.

Migrated hardware by moving the USB drive over to the new system, and it didn't blink that everything but the HDDs was different. Just booted up and started the array and dockers. The JBOD functionality is great. Drive loss is just an excuse to add a bigger drive.

[–] BCsven@lemmy.ca 3 points 1 day ago

MicroOS is a decent choice, because it can cold boot off a configuration that uses ignition and combustion files. https://microos.opensuse.org/

And they have this file configurator so you don't have to manually type all the syntax for your configs.

https://opensuse.github.io/fuel-ignition/edit
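
A minimal Combustion example, assuming a spare USB stick; the device name and script contents are illustrative:

    # combustion looks for combustion/script on a filesystem labeled "ignition"
    mkfs.ext4 -L ignition /dev/sdX1
    mount /dev/sdX1 /mnt && mkdir -p /mnt/combustion
    cat > /mnt/combustion/script <<'EOF'
    #!/bin/bash
    # combustion: network
    echo 'root:<password-hash>' | chpasswd -e   # substitute a real hash
    systemctl enable sshd
    EOF
    chmod +x /mnt/combustion/script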

[–] i_stole_ur_taco@lemmy.ca 2 points 1 day ago

I’m just using Unraid for the server, after many iterations (PhotonOS, VMware, baremetal Windows Server, …). After many OSes, partial and complete hardware replacements, and general problems, I gave up trying to manage the base server too much. Backups are generally good enough if hardware fails or I break something.

The other side of this is that I’ve moved to having very, very little config on the server itself. Virtually everything of value is in a docker container with a single (admittedly way too large) docker compose file that describes all the services.

I think this is the ideal way for how I use a home server. Your mileage might vary, but I’ve learned the hard way that it’s really hard to maintain a server over the very long term and not also marry yourself to the specific hardware and OS configuration.

[–] eager_eagle@lemmy.world 2 points 1 day ago

I'm the only user of my setup, but I configure docker compose stacks, use configs as bind mounts, and track everything in a git repo synchronized every now and then.

[–] relaymoth@sh.itjust.works 2 points 1 day ago (1 children)

I went the nuclear option and am using Talos with Flux to manage my homelab.

My source of truth is the git repo with all my cluster and application configs. With this setup, I can tear everything down and within 30 min have a working cluster with everything installed automatically.
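
The bootstrap for a setup like that is only a handful of commands; the addresses, repo URL, and paths are examples:

    # talos: generate machine configs, push them, bootstrap etcd
    talosctl gen config homelab https://192.0.2.20:6443
    talosctl apply-config -n 192.0.2.20 -f controlplane.yaml --insecure
    talosctl bootstrap -n 192.0.2.20

    # flux: wire the cluster to the git repo as source of truth
    flux bootstrap git \
      --url=ssh://git@git.example.com/home/cluster.git \
      --branch=main --path=clusters/home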

[–] radiogen@lemmy.zip 1 points 1 day ago (2 children)

Are you using selfhosted git? Which one?

[–] moonpiedumplings@programming.dev 2 points 13 hours ago

I have a similar setup, and even though I am hosting git (forgejo), I use ssh as a git server for the source of truth that k8s reads.

This prevents an ouroboros dependency where flux is using the git repo from forgejo which is deployed by flux...
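
A plain-ssh git server really is just a bare repo; a sketch with example host and paths:

    # on the server: create the bare repo flux will read
    ssh git@host 'git init --bare /srv/git/cluster.git'

    # locally: push the cluster config to it
    git remote add deploy ssh://git@host/srv/git/cluster.git
    git push deploy main

    # tell flux to track it (instead of the forgejo it deploys)
    flux create source git cluster \
      --url=ssh://git@host/srv/git/cluster.git --branch=main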

[–] relaymoth@sh.itjust.works 1 points 18 hours ago

I've got a forgejo instance set up but I haven't migrated everything to it yet.

[–] corsicanguppy@lemmy.ca 2 points 1 day ago* (last edited 19 hours ago)

Packer builds the terraformable/openTofuable templates to launch into the hypervisor, where chef (eventually mgmtConfig) manages them until they die.

All that is launched by git. Fire and forget. Updates are cronned.
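
The first two stages of that pipeline, sketched; the template and var-file names are invented:

    packer build -var-file=prod.pkrvars.hcl debian.pkr.hcl   # bake the template
    tofu init && tofu apply                                  # launch instances from it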

There are no containers. Don't got time to fuck about. If Systemd wasn't an absolute embarrassment I'd not worry about updates even as much as I do, which isn't much aside from the aforementioned cancer.

[–] xcjs@programming.dev 2 points 19 hours ago
[–] fruitycoder@sh.itjust.works 2 points 9 hours ago* (last edited 9 hours ago) (1 children)

Fleet from Rancher to deploy everything to k8s. Bare-metal management with Tinkerbell and Metal3, to manage my OS deployments to bare metal from k8s. Harvester is the OS/k8s platform, and all of its configs can be delivered at install or as cloud-init k8s objects. Ansible for the switches (as KubeOVN gets better in Harvester, the default separate hardware might be removed); I'm not brave enough for cross-planning that yet. For backups I use velero and shoot that into the cloud encrypted, plus some nodes that I leave offline most of the time except to run backups and update them. I use hauler manifests and a kube cronjob to grab images, helm charts, RPMs, and ISOs into local storage. I use SOPS to store the secrets I need to bootstrap in git. OpenTofu for application configs that are painful in helm. Ansible for everything else.
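
For the backup and secrets pieces specifically, a short sketch; the backup name, namespaces, and file paths are placeholders:

    # velero: cluster backups shipped off to object storage
    velero backup create nightly --include-namespaces media,home
    velero backup get                     # confirm it completed

    # sops: encrypt bootstrap secrets in place before committing to git
    sops -e -i secrets/bootstrap.yaml
    sops -d secrets/bootstrap.yaml | kubectl apply -f -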

For total rebuilds I take all of that config and load it into a cloud-init script that I stick on a Rocky or SLES ISO, which, assuming the network is up enough to configure, rebuilds everything from scratch; then I have a manual step to restore lost data.

That covers all the infra except physical layout in a git repo. Just got a PiKVM v4 on order along with a PiKVM switch, so hopefully I can get more of the junk onto Metal3 for proper power control too, and fewer iPXE shenanigans.

Next steps for me are CI/CD pipelines for deploying a mock version of the lab into Harvester as VMs, running integration tests, and, if they pass, merging the staged branch into prod. I do some of that manually already but would really like to automate it. Once I do, I'll start running Renovate to grab the latest stable versions for me.

[–] fruitycoder@sh.itjust.works 1 points 9 hours ago

Definitely overkill lol. But I like it. Haven't found a more complete solution that doesn't feel like a comp-sci dissertation yet.

The goal is pretty simple: make as much as possible into Helm values, k8s manifests, Tofu, Ansible, or cloud-init, in that order of preference, because as you go up the stack you get more state management for "free". Stick that in git, and test and deploy from that source as much as possible. Everything else is just about getting there as fast as possible, and keeping the 3-2-1 rule alive and well for it all (3 copies, 2 different media, 1 off-site).

[–] realitaetsverlust@piefed.zip 1 points 1 day ago (1 children)

Terraform and Puppet. Not very simple to get into, but extremely powerful and reliable.

[–] 4am@lemmy.zip 1 points 1 day ago

I was getting into a similar flow with Terraform (well, OpenTofu now) and Ansible before I had to pack up my homelab about a year ago. New place needs electrical work before I can fire it back up.

How is Puppet to work with?

[–] paris@lemmy.blahaj.zone 1 points 1 day ago

Recently switched to ucore. While I cannot for the life of me get SELinux to let my containers run without Permissive mode (my server was previously Endeavour OS and either didn't have it or I disabled it long ago), I've otherwise had great success.

The config is a single yaml file that gets converted into a json file for Ignition, which sets everything up on first boot. It's an OCI-based immutable distro with automatic updating, so I can mostly just leave it to its own devices and everything has been smooth for the first week I've been using it.

My Docker root directory is on a separate drive with plenty of space, so setting up involves directing Docker to that new root directory and basically being done (which my Ignition config handles for me).
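
Pointing Docker at a different root directory is one small config file, which an Ignition config can write out on first boot; the mount point here is an example:

    sudo tee /etc/docker/daemon.json <<'EOF'
    { "data-root": "/mnt/bigdisk/docker" }
    EOF
    sudo systemctl restart docker
    docker info --format '{{ .DockerRootDir }}'   # confirm the new root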

[–] irmadlad@lemmy.world 1 points 1 day ago (1 children)

I use snapshots: once a month an image is made of the entire drive, and I have Duplicati backing up to the cloud. Whatever choice you make though, remember 3-2-1, and backups are useless unless tested on a regular basis. The test portion always gives me anxiety.

[–] MonkeMischief@lemmy.today 10 points 1 day ago (1 children)

I'd really like to know if there's any practical guide on testing backups without requiring like, a crapton of backup-testing-only drives or something to keep from overwriting your current data.

Like I totally understand it in principle just not how it's done. Especially on humble "I just wanna back up my stuff not replicate enterprise infrastructure" setups.

[–] irmadlad@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

You can use the qemu utilities to convert your Linux disk image to VDI, which you can then import into VMware Workstation or VirtualBox:

    qemu-img convert -f qcow2 -O vdi your-image.qcow2 your-image.vdi

One thing you might run into is that Ubuntu server images often use VirtIO drivers, so you may have to make adjustments for that, or you may need other drivers that VMware Workstation or VirtualBox don't provide.

https://documentation.ubuntu.com/server/how-to/virtualisation/qemu/#qemu

https://systemadministration.net/converting-virtual-disk-images-qemu-img/

ETA: there is also StarWind V2V Converter.