
Hey! I have been using Ansible to deploy Docker containers for a few services on my Raspberry Pi for a while now and it's working great, but I want to learn MOAR and I need help...

Recently, I've been considering migrating to bare metal K3S for a few reasons:

  • To learn and actually practice K8S.
  • To have redundancy and to try HA.
  • My RPis are all already running MicroOS, so it kind of makes sense to me to try other SUSE stuff (?)
  • Maybe eventually being able to manage my two separate server locations with a neat k3s + Tailscale setup!

Here is my problem: I don't understand how things are supposed to be done. All the examples I find feel wrong. More specifically:

  • Am I really supposed to have a collection of small YAML files for everything that I use with kubectl apply -f?? It feels wrong and way too "by hand"! Is there a more scripted way to do it? Should I stay with everything in Ansible??
  • I see few to no examples of how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples, which can be found everywhere. Am I looking for the wrong thing?
  • Even the official docs seem broken. Am I really supposed to run a bunch of helm commands (some of which just fail) and fiddle with SSL certs just to get Rancher and its dashboard?!

I feel that getting K3s + Traefik + Longhorn + Rancher running on MicroOS should be straightforward, but it's really not.

It's very much a noob question, but I really want to understand what I'm doing wrong. I'm really looking for advice, and especially for configuration examples that I could copy, use, and modify!

Thanks in advance,

Cheers!

[–] moonpiedumplings@programming.dev 3 points 11 hours ago

First, I want to say that I started with podman (an alternative to docker) and Ansible, but I quickly ran into issues. The last issue I encountered, and the final straw, was with creating a container: Ansible would not actually apply changes to an existing container unless I used Ansible to destroy and recreate it.

Without quadlets, podman manages its own state, which has issues, and that was the entire reason I was looking into alternatives to podman for managing state.

More research turned up https://github.com/linux-system-roles/podman: an Ansible role to generate podman quadlets. But I don't really want to include other people's Ansible roles in my existing Ansible roles, and it takes Kubernetes YAML as input, which is very complex for what I'm trying to do. At that point, why not just use a single-node Kubernetes cluster and let Kubernetes manage state?

So I switched to Kubernetes.
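
For reference, a single-node k3s cluster is easy to bootstrap with the upstream install script. This is the generic installer; on MicroOS the distro's packaged/transactional route may fit better, so treat this as a sketch:

```sh
# Install k3s as a single-node cluster (server and agent on one machine).
curl -sfL https://get.k3s.io | sh -

# k3s bundles kubectl; verify the node came up.
sudo k3s kubectl get nodes
```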

To answer some of your questions:

Am I really supposed to have a collection of small YAML files for everything that I use with kubectl apply -f?? It feels wrong and way too "by hand"! Is there a more scripted way to do it? Should I stay with everything in Ansible??

So what I (and the industry) use is called "GitOps". Essentially, you have a git repo, and the software automatically pulls the repo and applies the configs.

Here is my gitops repo: https://github.com/moonpiedumplings/flux-config. I use FluxCD for GitOps, but there are other options, like Rancher's Fleet or ArgoCD, the most popular one.
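
For illustration, the core of a FluxCD setup is just two objects: a GitRepository that points at your repo, and a Kustomization that tells Flux which path inside it to apply. The names, URL, and path below are placeholders, not taken from my repo:

```yaml
# Where the config lives and how often Flux polls it.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-gitops-repo              # placeholder
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/my-gitops-repo   # placeholder
  ref:
    branch: main
---
# Which directory of that repo to reconcile into the cluster.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: my-gitops-repo
  path: ./apps                      # placeholder
  prune: true                       # remove cluster objects deleted from git
```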

As a tip, you can search GitHub for pieces of code to reuse. I usually search path:*.y*ml plus a few keywords to find appropriate pieces of YAML.

I see few to no examples of how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples, which can be found everywhere. Am I looking for the wrong thing?

So the first issue is that Kubernetes doesn't really have "containers" as its unit. Instead, the smallest controllable unit in Kubernetes is a "pod": a collection of containers that share a network namespace. Of course, pods for selfhosted services like the type this community is interested in will rarely have more than one container in them.
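
For a concrete picture, here is a minimal single-container pod. The navidrome image name and port are my assumptions from the upstream project, so double-check them:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: navidrome
spec:
  containers:
    - name: navidrome
      image: deluan/navidrome:latest   # assumed upstream image
      ports:
        - containerPort: 4533          # navidrome's default web port (assumed)
```

In practice you'd wrap this in a Deployment so Kubernetes restarts and reschedules it for you, but the pod is the unit everything else builds on.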

There are ways to convert a docker-compose file into Kubernetes manifests.
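
One commonly used tool for this is kompose, which reads a compose file and emits Kubernetes YAML; a minimal invocation looks like:

```sh
# Translate a compose file into Kubernetes manifests in the current directory.
kompose convert -f docker-compose.yml

# Then apply the generated files.
kubectl apply -f .
```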

But in general, Kubernetes doesn't use compose files for premade services; it uses helm charts instead. If you are having issues installing specific helm charts, you should ask for help here so we can iron them out. Helm charts are pretty reliable in my experience, but they do seem to be more involved to set up than docker-compose.
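
The workflow is the same for every chart. The pihole repo URL below is the community chart's published location as I remember it, so verify it before use:

```sh
# Register the chart repository and refresh the local index.
helm repo add pihole https://mojo2600.github.io/pihole-kubernetes/
helm repo update

# Install the chart as a release named "pihole" into its own namespace.
helm install pihole pihole/pihole --namespace pihole --create-namespace

# Chart settings live in values.yaml; inspect them, then override with -f or --set.
helm show values pihole/pihole
```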

Even the official docs seem broken. Am I really supposed to run a bunch of helm commands (some of which just fail) and fiddle with SSL certs just to get Rancher and its dashboard?!

So what you're supposed to do is deploy an "ingress" controller (k3s comes with traefik by default), and then use cert-manager to automatically get Let's Encrypt certs for your Ingress objects.
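
Here's a sketch of that wiring, assuming cert-manager is already installed; the email, hostname, and service name are placeholders:

```yaml
# An ACME issuer that solves HTTP-01 challenges through the traefik ingress.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # placeholder
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: traefik            # k3s's default ingress controller
---
# An Ingress annotated so cert-manager issues and renews its certificate.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: traefik
  rules:
    - host: myservice.example.com     # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myservice       # placeholder
                port:
                  number: 80
  tls:
    - hosts:
        - myservice.example.com
      secretName: myservice-tls       # cert-manager stores the issued cert here
```

cert-manager sees the annotation, solves the challenge through the ingress, and keeps the certificate in the named secret renewed.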

Actually, traefik comes with its own way to get SSL certs (in addition to ingresses and cert-manager), so you can look into that as well, but I decided to use the standardized ingress + cert-manager method because it is also compatible with other ingress software.

Although it seems complex, I've come to really, really love Kubernetes because of the features mentioned here, especially the declarative part, where all my services are code in a git repo.