testgoofy

joined 2 years ago
[–] testgoofy@infosec.pub 2 points 6 hours ago (2 children)

Yes, just running a random script from the internet is a very bad idea. You shouldn't copy and paste the command from above either, since I'm only a random Lemmy user. Nevertheless, if you trust k3s and they promote this command on their official website (make sure it really is the official one), you can use it. Since you want to install k3s, I'm going to assume you trust them.

If you want to review the script first, go for it; I agree that you should. I reviewed it myself (or at least looked it over) before running it.

For the uninstall: just follow the instructions on the official website and run /usr/local/bin/k3s-uninstall.sh

[–] testgoofy@infosec.pub 3 points 9 hours ago (1 children)

I agree. k8s and Helm have a steep learning curve. I have an engineering background and understand k8s inside and out, so for me, Helm is the cleanest solution. I would recommend getting to know k8s and its resources before using (or creating) Helm charts.
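To make "getting to know the resources" concrete: the first resource you'll typically meet is a Deployment. A minimal sketch (the name and image here are placeholders, not from any real setup) looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service          # hypothetical service name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: nginx:1.27  # placeholder image
          ports:
            - containerPort: 80
```

A Helm chart is essentially a bundle of templates that render into manifests like this one, which is why understanding the raw resources first pays off.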

[–] testgoofy@infosec.pub 2 points 9 hours ago* (last edited 6 hours ago) (5 children)

Hey there,

I made a similar journey a few years ago, but I only have one home server and do not run my services in high availability (HA). As @non_burglar@lemmy.world mentioned, running a service in HA takes more than "just scaling up": you need to know exactly what talks to whom, and when. For example, database entries or file writes become difficult when you scale up a service that isn't ready for HA.

Here are my solutions for your challenges:

  • No, you are not supposed to run kubectl apply -f for each file. I would strongly recommend Helm; then you just have to run helm install per service. If you write each service yourself, you will end up with multiple .yaml files. I do it this way: normally you create one repository per service, which holds all of its YAML files. Alternatively, you can use a predefined Helm chart and just customize its settings, which is comparable to pulling an image from DockerHub.
  • If you deploy to a cluster, you only have to deploy to one server. If your .yaml configuration defines multiple replicas, k8s will automatically balance those replicas across the servers in the cluster and split the load between them. If you are just looking for configuration examples, look into Helm charts; services often provide examples only for Docker (and Docker Compose), not for k8s.
  • As I see it, you only have to run a single install script on your first server and then join the cluster from the second server. That gives you k3s, with Traefik installed alongside it. If you also want to access the Traefik dashboard and install Rancher and Longhorn, then yes, you will have to run multiple installations. Since you already have experience with Ansible, I suggest putting everything for the "base installation" into one playbook and then executing that playbook once.
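For the first two points, customizing a predefined chart usually comes down to a small values file that you pass to helm install. A hedged sketch (the chart, release name, and keys are made up; real keys depend on the chart you pick) might look like:

```yaml
# values.yaml -- hypothetical overrides for some predefined chart;
# you'd install it with something like:
#   helm install my-release some-repo/some-chart -f values.yaml
replicaCount: 3        # k8s spreads these replicas across the cluster's nodes
ingress:
  enabled: true        # let Traefik route traffic to the service
```

Every chart documents its own values, so check the chart's README for the actual keys before copying anything like this.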

Changelog:

  • Removing the k3s install command. If you want to use it, look it up on the official website. Do not copy-paste the command from a random user on Lemmy ;) Thanks to @atzanteol@sh.itjust.works for bringing up this topic.