koala

joined 4 months ago
[–] koala@programming.dev 11 points 1 day ago (1 children)

Reminder that you can go for a hybrid approach: receive email and host IMAP/webmail yourself, and send email through someone like AWS. I am not saying you can't do SMTP yourself, but if you just want to dip your toes in, it's an option.

You get many of the advantages: you control your email addresses, you store all of the email and control backups, etc.
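
For the sending side, the Postfix bit is small. Something like this (a sketch, assuming AWS SES SMTP credentials; the endpoint, region, port, and credentials are placeholders you'd swap in):

    # relay all outgoing mail through SES
    postconf -e 'relayhost = [email-smtp.eu-west-1.amazonaws.com]:587'
    postconf -e 'smtp_sasl_auth_enable = yes'
    postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
    postconf -e 'smtp_sasl_security_options = noanonymous'
    postconf -e 'smtp_tls_security_level = encrypt'

    # SES SMTP credentials (placeholders)
    echo '[email-smtp.eu-west-1.amazonaws.com]:587 SMTP_USER:SMTP_PASSWORD' > /etc/postfix/sasl_passwd
    postmap /etc/postfix/sasl_passwd
    systemctl reload postfix

(You also need to verify your domain and set up SPF/DKIM, but that's on the AWS side.)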

...

And another thing: you could also play with https://chatmail.at/relays , which is pretty cool. I had read about Delta Chat, but only decided to play with it recently and... it's blown my mind.

[–] koala@programming.dev 1 points 2 days ago (1 children)

If you are going to run Jellyfin or some other media sharing, the key question is whether you need to transcode media (recompress it because the playback device can't handle the original format). Likely not, nowadays, but research that. If you do need transcoding, do some research: you might get by with an old CPU, or with hardware transcoding support, but it's the hard part.
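
(If you want to check whether a box can do hardware transcoding via VA-API, something like this is a quick first look on Debian/Ubuntu; driver package names vary by GPU, so treat it as a sketch:)

    sudo apt install vainfo      # plus a VA-API driver, e.g. intel-media-va-driver on recent Intel
    vainfo                       # lists the codecs the GPU can decode/encode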

Outside transcoding, for file sharing/streaming, every simultaneous client requires additional CPU horsepower and disk bandwidth. If you are the sole client, you can likely get by with an old CPU. But if you and three more people in your household are going to be using the system at the same time, it might be a tighter fit.

One of my home servers has 4 GB of RAM and an "Intel(R) Celeron(R) CPU G1610T @ 2.30GHz". It's very old and low end, but it works quite well for file sharing; then again, it rarely has more than a single simultaneous user.

[–] koala@programming.dev 1 points 1 week ago

Yep, I do that on Debian hosts; EL (RHEL/Rocky/etc.) has a similar feature.

However, you need to keep an eye out for updates that require a reboot. I use my own Nagios agent that (among other things) sends me warnings when hosts require a reboot (both apt and dnf make this easy to check).
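
(The checks themselves are one-liners, roughly:)

    # Debian/Ubuntu: this file exists when a reboot is needed
    [ -f /var/run/reboot-required ] && echo "reboot needed"

    # EL: needs-restarting comes from dnf-utils; -r exits non-zero when a reboot is needed
    dnf needs-restarting -r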

I wouldn't care about last online/reboots; I just do some basic monitoring to get an alert if a host is down. Spontaneous reboots would be a sign of an underlying issue.

[–] koala@programming.dev 1 points 3 weeks ago

Remember that Google News has RSS feeds! They are very well hidden, but they are there.

However, they are also a bit bad.

I started https://github.com/las-noticias/news-rss to postprocess Google News RSS feeds a bit and to play with categorization. I found spaCy worked well for finding "topics", but unfortunately I lost steam.
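
(For reference, the feeds look roughly like this; hl/gl/ceid pick language and region, and the second form is a search feed:)

    # top stories
    curl -s 'https://news.google.com/rss?hl=en-US&gl=US&ceid=US:en'

    # search-based feed
    curl -s 'https://news.google.com/rss/search?q=self-hosting&hl=en-US&gl=US&ceid=US:en'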

[–] koala@programming.dev 3 points 3 weeks ago

I think Cloudflare Tunnels will require a different setup on k8s than on regular Linux hosts, but it's such a popular service among self-hosters that I have little doubt that you'll find a workable process.

(And likely you could cheat, and set up a small Linux VM to "bridge" k8s and Cloudflare Tunnels.)
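
(A sketch of that bridge idea, assuming cloudflared is installed on the VM and a tunnel already exists; the hostname, tunnel ID, and the k8s ingress IP are placeholders:)

    cat > /etc/cloudflared/config.yml <<'EOF'
    tunnel: TUNNEL-ID
    credentials-file: /etc/cloudflared/TUNNEL-ID.json
    ingress:
      - hostname: app.example.com
        service: http://10.0.0.50:80   # your k8s ingress / NodePort
      - service: http_status:404
    EOF

    cloudflared tunnel run TUNNEL-ID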

Kubernetes is different, but it's learnable. In my opinion, K8S only comes into its own in a few scenarios:

  • Really elastic workloads. If you have stuff that scales horizontally (uncommon), you really can tell Amazon to give you more Kubernetes nodes when load grows, and destroy the nodes when load goes down. But this is not really applicable for self hosting, IMHO.

  • Really clustered software. Setting up, say, a PostgreSQL cluster is a ton of work. But people create K8S operators that you feed a declarative configuration (I want this many replicas, I want backups at this rate, etc.) and that work out everything for you... in a way that works on any K8S implementation! (See the sketch after this list.) This is also very cool, but I suspect there's not a lot of this in self-hosting.

  • Building SaaS platforms, etc. This is something that might be more reasonable to do in a self-hosting situation.
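
(To make the second point concrete, the declarative side looks roughly like this with an operator such as CloudNativePG; field names are from memory, so treat it as a sketch:)

    # assuming the CloudNativePG operator is already installed in the cluster
    cat <<'EOF' | kubectl apply -f -
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: pg-example
    spec:
      instances: 3        # the operator builds and maintains the 3-node cluster for you
      storage:
        size: 10Gi
    EOF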

Like the person you're replying to, I also run Talos (as a VM in Proxmox). It's pretty cool. But in the end, I only run 4 apps there that I've written myself, so I'm using K8S as a kind of SaaS... plus one more application, https://github.com/avaraline/incarnator, which is basically distributed as container images and which I was too lazy to deploy in a more conventional way.

I also do this for learning. Although I'm not a fan of how Docker Compose is becoming dominant in the self-hosting space, I have to admit it makes more sense than K8S for self-hosting. But K8S is cool and might get you a cool job, so by all means play with it; maybe you'll have fun!

[–] koala@programming.dev 1 points 3 weeks ago

I haven't tested this, but I would expect there to be ways to do it, especially for VMs (as opposed to LXC containers).

(I try to automate provisioning as much as possible, so I don't do this kind of stuff often.)

The Incus forum is not huge, but it's friendly, and the authors are quite active.

[–] koala@programming.dev 6 points 3 weeks ago (2 children)

Came in here to mention Incus if no one had.

I love it. I have three "home production" servers running Proxmox, but mostly because Proxmox is one of the very few LTS/commercially-supported ways to run Linux with root (and everything else) on ZFS. And while its web UI is still a bit clunky in places, it comes in handy sometimes.

However, Incus automation is just... superior. incus launch --vm images:debian/13 foo, wait a few seconds, then incus exec foo -- bash, and I'm root on the console of a ready-to-go Debian VM. Without --vm, it's a lightweight LXC container. And Ansible supports running commands through incus exec, so you can provision stuff WITHOUT BOTHERING TO SET UP ANYTHING.

AND it works remotely without fuss, so I can set up an Incus remote on a beefy server and spawn VMs there nearly transparently. Plus incus file pull|push to transfer files.
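
(The remote workflow is something like this; names are made up, and you'll also need to approve the client on the server with a trust token:)

    # on the beefy server, once: expose the API
    incus config set core.https_address :8443

    # on my laptop
    incus remote add beefy https://beefy.example.com:8443
    incus launch --vm images:debian/13 beefy:foo
    incus exec beefy:foo -- bash
    incus file push ./some-file beefy:foo/root/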

I'm kinda pondering scripting removal of the Proxmox bits from a Proxmox install, so that I just keep their ZFS support and run Incus on top.

[–] koala@programming.dev 2 points 1 month ago (1 children)

If you speak Spanish: a month or so ago I was pointed at https://foro.autoalojado.es/, which might be an interesting place to discuss the in-person stuff, although it doesn't seem to be reaching a critical mass of activity :(

[–] koala@programming.dev 3 points 2 months ago

Incus has a great selection of images that are ready to go, plus gives scripted access to VMs (and LXC containers) very easily; after incus launch to create a VM, incus exec can immediately run commands as root for provisioning.

[–] koala@programming.dev 4 points 3 months ago

Nextcloud is in EPEL 10. You'll get updates along with the rest of the OS.

I have been using EPEL 9 Nextcloud for a good while and it's been a smooth experience.
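
(Roughly, on AlmaLinux/Rocky, and assuming the package keeps its current name:)

    dnf install -y epel-release
    dnf install -y nextcloud
    # from then on, updates come in with a regular 'dnf update'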

If you specifically want Docker, I would not choose an EL10 distro, really. I have been test driving AlmaLinux 10 and it's pretty nice, but I would look elsewhere.

[–] koala@programming.dev 3 points 3 months ago

IMHO, it really depends on the specific services you want to run. I guess you are most familiar with Docker and that everything you want to run has a first-class-citizen Docker image. It also depends on whether the services you want to run are suitable for Internet exposure or not (and how comfortable you are with the convenience tradeoff).

LXC is very different. Although you can run Docker nested within LXC, you gotta be careful because, IIRC, some setups used to not work so well (maybe it works better now, but Docker nested within LXC on a ZFS file system used to be a problem).

I like that Proxmox + LXC + ZFS means it's all ZFS file systems, which gives you a ton of flexibility: with VMs and volumes, you need to assign sizes to them, resize them if needed, etc.; with ZFS file systems you can set quotas, and changing them is much less fuss. But that would likely require much more effort on your part. This is what I use, but I think it's not for everyone.
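
(That's the difference between something like the following, where the dataset name is just an example, and growing a virtual disk plus the partition and file system inside the guest:)

    # LXC container storage on ZFS: change the quota and you're done
    zfs set quota=100G rpool/data/subvol-101-disk-0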

[–] koala@programming.dev 1 points 3 months ago

I don't use Nextcloud calendars or address books. But I assume they are included in regular backups.

I pay about 50€ for an absolutely overkill Hetzner dedicated server (128 GB of RAM).

I live in two different flats in different cities because of personal circumstances.
