this post was submitted on 09 Jul 2025
29 points (100.0% liked)

Selfhosted


Hey! I have been using Ansible to deploy Docker containers for a few services on my Raspberry Pis for a while now, and it's working great, but I want to learn MOAR and I need help...

Recently, I've been considering migrating to bare-metal K3s for a few reasons:

  • To learn and actually practice K8s.
  • To have redundancy and to try HA.
  • My RPis are all already running MicroOS, so it kind of makes sense to me to try other SUSE stuff (?)
  • Maybe eventually being able to manage my two separate server locations with a neat K3s + Tailscale setup!

Here is my problem: I don't understand how things are supposed to be done. All the examples I find feel wrong. More specifically:

  • Am I really supposed to have a collection of small YAML files for everything, which I apply with kubectl apply -f?? It feels wrong and way too "by hand"! Is there a more scripted way to do it? Should I stay with everything in Ansible?? (See the sketch after this list.)
  • I see little to no examples of how to deploy the service containers I want (Pi-hole, Navidrome, etc.) to a cluster, unlike docker-compose examples, which can be found everywhere. Am I looking for the wrong thing?
  • Even the official docs seem broken. Am I really supposed to run many helm commands (some of which just fail) and fiddle with SSL certs just to get Rancher and its dashboard?!
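
For example, everything I find looks like this (a hypothetical manifest; the names are made up by me for illustration), applied one file at a time:

    # navidrome-deployment.yaml -- hypothetical example manifest
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: navidrome
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: navidrome
      template:
        metadata:
          labels:
            app: navidrome
        spec:
          containers:
            - name: navidrome
              image: deluan/navidrome:latest
              ports:
                - containerPort: 4533

    # applied with:
    # kubectl apply -f navidrome-deployment.yaml

...times N files for every service, which is what feels so manual to me.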

I feel that getting a K3s + Traefik + Longhorn + Rancher stack running on MicroOS should be straightforward, but it really isn't.

It's very much a noob question, but I really want to understand what I am doing wrong. I'm really looking for advice and especially configuration examples that I could try to copy, use and modify!

Thanks in advance,

Cheers!

[–] towerful@programming.dev 0 points 5 hours ago* (last edited 5 hours ago) (2 children)

Everyone talks about helm charts.
I tried them and hate writing them.
I found garden.io, and it provides a really nice way to consume repos (of Helm charts, manifests, etc.) and apply them to a k8s cluster in a sensible way.
Only thing is, it seems to be very tailored to a team of developers. I kinda muddled through with it, and it made everything so much easier.
I do appreciate why Helm charts are used for most projects: they make sense for something you are going to share.
But for a solo project, or for consuming other people's projects, I don't think they really solve a problem.

Which is why I used garden.io: it's designed for deploying Kubernetes manifests, and I found it had just enough tooling to make things easier.
Though, if you are used to ansible, it might make more sense to use ansible.
Pretty sure ansible will be able to do it all in a way you are familiar with.

As for writing the manifests themselves, I find it rare that I need to (unless it's something I've made myself). Most software has a k8s Helm chart, so I just reference that in a garden file, set any variables I need to, and all good (see the sketch below).
If there aren't Helm charts or kustomize files, then it's adapting a docker-compose file into manifests. Which is manual.
Occasionally I have to write some custom resources, config maps or secrets (CMs and secrets are easily made in garden).
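
As a sketch of what that looks like (assuming Garden 0.13-style action configs; the chart name, repo and values are illustrative, not from my actual setup):

    # garden.yml -- a Deploy action that consumes an upstream Helm chart
    kind: Deploy
    type: helm
    name: pihole
    spec:
      chart:
        name: pihole
        repo: https://mojo2600.github.io/pihole-kubernetes/
      values:
        serviceWeb:
          type: LoadBalancer   # expose the web UI outside the cluster

A garden deploy then installs/upgrades the chart with those values.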

I also prefer to install operators instead of the raw service. For example, I use CloudNativePG to set up Postgres databases.
I create a custom resource that defines the database, and CNPG automatically provisions all the storage, pods, services, config maps and secrets.
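
As a sketch, a minimal CloudNativePG Cluster resource looks something like this (the name and sizes are just examples):

    # postgres-cluster.yaml -- applying this single custom resource makes
    # the CNPG operator create the pods, PVCs, services and secrets
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: my-postgres
    spec:
      instances: 3   # one primary, two replicas; failover handled by the operator
      storage:
        size: 5Gi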

The way I use kubernetes for the projects I do is:
  • Apply all the infrastructure stuff (gateways, metallb, storage provisioners etc) from Helm charts (or similar).
  • Then apply all my pods, services, certificates etc from hand-written manifests.
Using garden, I can make sure things are deployed in the correct order: operators are installed before their custom resources are applied, secrets/CMs are created before being referenced, etc (see the sketch below).
If I ever have to wipe and reinstall a cluster, it takes me 30 minutes or so from a clean TalosOS install to the project up and running, with just 3 or 4 commands.
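
A sketch of that ordering, again assuming Garden 0.13-style actions (names and paths are illustrative):

    # infrastructure first: the operator from its upstream chart
    kind: Deploy
    type: helm
    name: cnpg-operator
    spec:
      chart:
        name: cloudnative-pg
        repo: https://cloudnative-pg.github.io/charts

    ---

    # then my own manifests, only once the operator's CRDs exist
    kind: Deploy
    type: kubernetes
    name: databases
    dependencies:
      - deploy.cnpg-operator
    spec:
      files:
        - manifests/postgres-cluster.yaml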

Any on-the-fly changes I make, I make sure to back-port to the project configs, so when I wipe, reset and reinstall I still get what I expect.

However, I have recently found https://cdk8s.io/ and I'm meaning to investigate that for creating the manifests themselves.
Write code using a typed language, and have cdk8s create the raw yaml manifests. Seems like a dream!
I hate writing YAML. Autocomplete is useless (the editor has no idea what schema the YAML doc should follow), and auto-formatting is useless (mostly because YAML is whitespace-sensitive, and the editor has no idea whether things are a child or a new parent). It just feels ugly and clunky.
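
As a sketch of the idea (TypeScript, using cdk8s's raw ApiObject; cdk8s can also generate fully typed classes for the k8s API via cdk8s import, which is where the autocomplete win comes from):

    // main.ts -- `cdk8s synth` (or just running this) writes plain YAML
    // into dist/ for inspection; the names here are made up
    import { App, Chart, ApiObject } from 'cdk8s';

    class Media extends Chart {
      constructor(scope: App, id: string) {
        super(scope, id);
        new ApiObject(this, 'deployment', {
          apiVersion: 'apps/v1',
          kind: 'Deployment',
          metadata: { name: 'navidrome' },
          spec: {
            replicas: 1,
            selector: { matchLabels: { app: 'navidrome' } },
            template: {
              metadata: { labels: { app: 'navidrome' } },
              spec: {
                containers: [{ name: 'navidrome', image: 'deluan/navidrome:latest' }],
              },
            },
          },
        });
      }
    }

    const app = new App();
    new Media(app, 'media');
    app.synth(); // emits dist/media.k8s.yaml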

[–] moonpiedumplings@programming.dev 2 points 4 hours ago (1 children)

garden seems similar to GitOps solutions like ArgoCD or FluxCD for deploying helm charts.

Here is an example of authentik deployed using helm and fluxcd.
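
The pattern, as a sketch (values and names are illustrative, not my repo's actual config): a HelmRepository source plus a HelmRelease that Flux reconciles into the cluster.

    apiVersion: source.toolkit.fluxcd.io/v1
    kind: HelmRepository
    metadata:
      name: authentik
      namespace: flux-system
    spec:
      interval: 1h
      url: https://charts.goauthentik.io

    ---

    apiVersion: helm.toolkit.fluxcd.io/v2
    kind: HelmRelease
    metadata:
      name: authentik
      namespace: authentik
    spec:
      interval: 30m
      chart:
        spec:
          chart: authentik
          sourceRef:
            kind: HelmRepository
            name: authentik
            namespace: flux-system
      values:
        authentik:
          secret_key: "change-me"   # in practice sourced from a Secret, not plain values

Commit those to the git repo Flux watches and it installs/upgrades the chart; no helm commands run by hand.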

[–] towerful@programming.dev 1 points 3 hours ago (1 children)

Interesting, I might check them out.
I liked garden because it was "for kubernetes". Horses for courses, and it was the horse for this course.
I had the wrong assumption that all those CD tools were specifically tailored to run as workers in a deployment pipeline.

I'm willing to re-evaluate my deployment stack, tbh.
I'll definitely dig more into flux and ansible.
Thanks!

> that all those CD tools were specifically tailored to run as workers in a deployment pipeline

That's CI 🙃

Confusing terms, but yeah. ArgoCD and FluxCD just read from a git repo and apply it to the cluster. In my linked git repo, Flux is used to install "HelmReleases", but Argo has something similar.

[–] jlh@lemmy.jlh.name 2 points 5 hours ago (1 children)

Helm charts are awful; I didn't really like cdk8s either, tbh. I think the future "package format" might be operators or Crossplane Composite Resources.

[–] towerful@programming.dev 1 points 3 hours ago

Oh, operators are absolutely the way for "released" things.

But on bigger projects with lots of different pods etc, it's a lot of work to make all the CRD definitions, hook all the events, and write all the code to deploy the pods etc.
Similar to helm charts, I don't see the point for personal projects. I'm not sharing it with anyone, I don't need helm/operator abstraction for it.
And something like cdk8s will generate the yaml for you to inspect. So you can easily validate that you are "doing the right thing" before slinging it into k8s.