this post was submitted on 30 Aug 2025
200 points (88.2% liked)

Selfhosted

51206 readers
438 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago

My rack is finished for now (because I'm out of money).

Last time I posted I had some jank cables going through the rack, and now we're using patch panels with color-coordinated cables!

But as is tradition, I'm thinking about upgrades, and I'm looking at that 1U filler panel. A mini PC with a 5060 Ti 16GB or maybe a 5070 12GB would be pretty sick for moving my AI slop generation into my tiny rack.

I'm also thinking about the Pi cluster at the top. Currently that's running a Kubernetes cluster that I'm trying to learn on. They're all Pi 4 4GB, so I was going to start replacing them with Pi 5 8/16GB. Would those be better price/performance for mostly coding tasks? Or maybe a Discord bot for shitposting.

Thoughts? MiniPC recs? Wanna bully me for using AI? Please do!

[–] GirthBrooks@lemmy.world 23 points 1 week ago (1 children)

Looking good! Funny I happen across this post when I'm working on mine as well. As I type this I'm playing with a little 1.5" transparent OLED that will poke out of the rack beside each Pi, scrolling various info (CPU load/temp, IP, LAN traffic, node role, etc.).

[–] ripcord@lemmy.world 4 points 6 days ago (1 children)

What OLED specifically and what will you be using to drive it?

[–] GirthBrooks@lemmy.world 2 points 6 days ago

Waveshare 1.51" transparent OLED. Comes with a driver board, ribbon & jumpers. If you search it on Amazon it's the only one that pops up; just make sure it says transparent. Plugs into the GPIO of my Pi 5s. The Amazon listing has a user guide you can download, so make sure to do that. I was having trouble figuring it out until I saw that thing. Runs off a Python script, but once I get it behaving like I want I'll add it to systemd so it starts on boot.
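
For the systemd part, a bare-bones unit along these lines should do it (the script path and user are placeholders, not my actual setup):

```
# /etc/systemd/system/oled-stats.service
[Unit]
Description=Transparent OLED stats display
After=network-online.target

[Service]
# Placeholder path: point this at wherever your display script actually lives
ExecStart=/usr/bin/python3 /home/pi/oled/stats_display.py
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl enable --now oled-stats` and it comes back up on every boot.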

Imma dummy so I used ChatGPT for most of it, full ..ahem.. transparency. 🤷🏻‍♂️

I’m modeling a little bracket in spaceclaim today & will probably print it in transparent PETG. I’ll post a pic when I’m done!

[–] lepinkainen@lemmy.world 11 points 5 days ago

This will get downvoted to oblivion because this is Lemmy:

Get a Mac Mini. Any M-series model with 32GB of memory will run local models at decent speeds and will be cheaper than just a 5xxx series GPU

And it’ll fit your cool rack 😀

[–] Diplomjodler3@lemmy.world 9 points 1 week ago (1 children)

I'm afraid I'm going to have to deduct one style point for the misalignment of the labels on the mini PCs.

[–] nagaram@startrek.website 10 points 1 week ago (1 children)

That's fair and justified. I have the label maker right now in my hands. I can fix this at any moment and yet I choose not to.

I'm the man feeding orphans into the orphan-crushing machine. I can stop this at any moment.

[–] Diplomjodler3@lemmy.world 7 points 1 week ago

The machine must keep running!

[–] 6nk06@sh.itjust.works 8 points 1 week ago (2 children)
[–] nagaram@startrek.website 12 points 1 week ago

These are M715q ThinkCentres with a Ryzen 5 Pro 2400GE.

[–] nagaram@startrek.website 7 points 1 week ago

Oh, and my home office setup uses Tiny-in-One monitors, so I configured these by plugging them into my monitor, which was sick.

I'm a huge fan of this all-in-one idea that's still upgradable.

[–] hendrik@palaver.p3x.de 8 points 6 days ago (2 children)

Well, I always advocate for using the stuff you have. I don't think a Discord bot needs four new RasPi 5s; that's likely to run on a single RasPi 3. And as long as they're sitting idle, it doesn't really matter which model number they have... So go ahead and put something on your hardware, and buy new gear once you've maxed out your current setup.

I'm not educated on Bazzite. Maybe tools like Distrobox or other container solutions can help run AI workloads on the gaming rig. It's likely easier to run a dedicated AI server, but I started learning about quantization and tested some models on my main computer with the help of Ollama, KoboldCpp, and some random Docker/Podman containers. I'm not saying this is the preferable solution, but it's definitely enough to get started with AI. And you can always connect the computers within your local network, write some server applications, and have them hook into Ollama's API; it doesn't really matter whether that runs on your gaming PC or a server (as long as the computer in question is turned on...).
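
To make that last bit concrete, here's a rough sketch of hitting Ollama's API from another machine on the LAN (the hostname and model name are just examples, not anyone's actual setup):

```
import requests

# "gaming-pc.lan" and the model name are placeholder examples
resp = requests.post(
    "http://gaming-pc.lan:11434/api/generate",
    json={"model": "llama3.2:3b", "prompt": "Summarize my notes: ...", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```

Anything that can make an HTTP request can use it, which is why it doesn't matter where Ollama actually lives.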

[–] nagaram@startrek.website 5 points 6 days ago (2 children)

Ollama and all that runs on it; it's just the firewall rules and opening it up to my network that are the issue.

I cannot get ufw, iptables, or anything like that working on it. So I usually just SSH into the PC and do a CLI-only interaction, which is mostly fine.

I want to use Open WebUI so I can feed it notes and books as context, but I need the API, which isn't open on my network.
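
For reference, what I think I need (untested on my box, and assuming Bazzite uses firewalld like the other Fedora Atomic images) is roughly:

```
# Make Ollama listen on all interfaces instead of just localhost:
# sudo systemctl edit ollama, then add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama

# Open the default Ollama port in firewalld
sudo firewall-cmd --permanent --add-port=11434/tcp
sudo firewall-cmd --reload
```

No promises that's the whole story on an immutable distro.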

[–] Flax_vert@feddit.uk 3 points 5 days ago

You could probably run several discord bots on a Raspberry Pi 3, provided they aren't public and popular
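
A bare-bones bot with discord.py is only a few lines, which is why it sips resources (the token and trigger are placeholders):

```
import discord

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    # Ignore our own messages, answer a dummy trigger
    if message.author == client.user:
        return
    if message.content.startswith("!shitpost"):
        await message.channel.send("low effort content, as requested")

client.run("YOUR_BOT_TOKEN")  # placeholder
```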

[–] ZeDoTelhado@lemmy.world 8 points 1 week ago* (last edited 1 week ago) (4 children)

I have a question about the AI usage here: how do you do it? Every time I see AI usage, some sort of 4090 or 5090 is mentioned, so I am curious what kind of AI usage you can do on this.

[–] teslasdisciple@lemmy.ca 14 points 1 week ago (2 children)

I'm running AI on an old 1080 Ti. You can run AI on almost anything, but the less memory you have, the smaller (i.e. dumber) your models will have to be.

As for the "how", I use Ollama and Open WebUI. It's pretty easy to set up.
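
If it helps, a compose file along these lines covers most of the setup (the GPU block assumes an NVIDIA card with the container toolkit installed; adjust ports and versions to taste):

```
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```

Then `docker compose up -d`, pull a model from the web UI, and you're off.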

[–] kata1yst@sh.itjust.works 5 points 1 week ago* (last edited 6 days ago)

Similar setup here with a 7900 XTX; works great, and the 20-30B models are honestly pretty good these days. Magistral, Qwen3 Coder, and GPT-OSS are most of what I use.

[–] chaospatterns@lemmy.world 6 points 1 week ago

Your options are to run smaller models or wait. llama3.2:3b fits in my 1080 Ti's VRAM and is sufficiently fast. Bigger models will get split between VRAM and RAM and run slower, but it'll work.

Not all models are gen-AI-style LLMs. I also run GPU-based speech-to-text models for my smart home.
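
Whisper-family models are the usual route for that kind of thing; for example, with the openai-whisper package (model size and filename are just examples, not necessarily what I run):

```
import whisper

# "small" is an example; pick whatever fits your VRAM
model = whisper.load_model("small")
result = model.transcribe("kitchen_intercom.wav")
print(result["text"])
```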

[–] nagaram@startrek.website 5 points 1 week ago (1 children)

With an RTX 3060 12GB, I have been perfectly happy with the quality and speed of the responses. It's much slower than my 5060 Ti, which I think is the sweet spot for text-based LLM tasks. A larger context window, whether from more VRAM or a web-based AI, is cool and useful, but I haven't found the need for it yet in my use case.

As you may have guessed, I can't fit a 3060 in this rack. That's in a different server that houses my NAS. I have done AI on my 2018 Epyc server CPU and it's just not usable. Even with 109GB of RAM, not usable. Even clustered, I wouldn't try running anything on these machines. They are for Docker containers and Minecraft servers. Jeff Geerling probably has a video on trying to run AI on a bunch of Raspberry Pis. I just saw his video using Ryzen AI Strix boards, and that was ass compared to my 3060.

As for my use case, I am just asking AI to generate simple scripts based on manuals I feed it, or some sort of writing task. I either get it to take my notes on a topic and make an outline that makes sense and then fill it in myself, or I feed it finished writing and ask for grammatical or tone fixes. That's fucking it, and it boggles my mind that anyone is doing anything more intensive than that. I am not training anything, and 12GB of VRAM is plenty if I wanna feed it like 10-100 pages of context. Would it be better with a 4090? Probably, but for my uses I haven't noticed a difference in quality between my local LLM and the web-based stuff.

[–] ZeDoTelhado@lemmy.world 2 points 1 week ago (1 children)

So it's not on this rack. OK, because for a second I was thinking you were somehow able to run AI tasks with some sort of small cluster.

Nowadays I have a 9070 XT in my system. I've just dabbled with this, but so far I haven't been that successful. Maybe I'll read more into it to understand it better.

[–] nagaram@startrek.website 3 points 1 week ago* (last edited 1 week ago)

Ollama + Gemma/DeepSeek is a great start. I have only run AI on my AMD 6600 XT, and that wasn't great; everything I know says AMD is fine for gaming AI tasks these days, but not really for LLM or gen-AI tasks.

An RTX 3060 12GB is the easiest and best self-hosted option in my opinion. New for <$300, and used for even less. However, I was running with a GeForce 1660 Ti for a while, and that's <$100.

[–] InternetCitizen2@lemmy.world 6 points 1 week ago
[–] Korhaka@sopuli.xyz 6 points 6 days ago

Ohh nice, I want it. Don't really know what I would use all of it for, but I want it (but don't want to pay for it).

Currently been thinking of getting an N150 mini PC: set up Proxmox and a few VMs, at the very least Pi-hole, a place to dump some backups, and a web server for a few projects.

[–] thejml@sh.itjust.works 5 points 1 week ago* (last edited 1 week ago)

Honestly, if you are delving into Kubernetes, just add some more of those 1L PCs in there. I tend to find them on eBay cheaper than Pis. Last year I snagged 4x 1L Dells with 16GB RAM for $250 shipped. I swapped some RAM around, added some new SSDs, and now have 3x Kube masters, 3x Kube worker nodes, and a few VMs running on a Proxmox cluster across three of the 1Ls with 32GB and a 512GB SSD each, and it's been great. The other one became my wife's new desktop.

Big plus: there are so many more x86_64 containers out there compared to Pi-compatible ARM ones.

[–] tofu@lemmy.nocturnal.garden 4 points 1 week ago (1 children)

Since you seem to be looking for problems to solve with new hardware, do you have a NAS already? Could be tight in 1U but maybe you can figure something out.

[–] nagaram@startrek.website 2 points 1 week ago

I do already have a NAS. It's in another box in my office.

I was considering replacing the Pis with a BOD and passing that through to one of my boxes via USB and virtualizing something. I compromised by putting 2TB SATA SSDs in each box to use for database stuff and then backing that up to the spinning rust in the other room.

How do I do that? Good question. I take suggestions.
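
The lazy answer I keep circling back to is a nightly cron'd rsync from each node's SSD to the NAS, something like (hostname and paths are made up for the sketch, and it assumes SSH keys to the NAS are already set up):

```
# crontab -e on each node: push database dumps to the NAS at 3am
0 3 * * * rsync -a --delete /mnt/ssd/db-backups/ nas.lan:/tank/backups/$(hostname)/
```

But if someone has a smarter setup, I'm listening.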

[–] possiblylinux127@lemmy.zip 4 points 1 week ago (1 children)

You could also pick up a powerful CPU with lots of memory bandwidth, like a Threadripper.

[–] nagaram@startrek.website 4 points 1 week ago (1 children)

I think I'm going to have a harder time fitting a threadripper in my 10 inch rack than I am getting any GPU in there.

[–] Cocodapuf@lemmy.world 2 points 6 days ago* (last edited 6 days ago)

I think I'm going to have a harder time fitting a threadripper in my 10 inch rack than I am getting any GPU in there.

Well, you could always use a closed-loop CPU cooler. (Not necessarily that one.)

With the radiator hanging out in back, this shouldn't need much height.

[–] Colloidal@programming.dev 4 points 1 week ago (1 children)

You could combine both 1U fillers and install a 2U PC, which would be easier to find.

[–] nagaram@startrek.website 2 points 1 week ago

I was thinking about that now that I have Mac Minis on the mind. I might even just set a Mac Mini on top next to the modem.

[–] TexasDrunk@lemmy.world 4 points 1 week ago

I didn't even know these sorts of mini racks existed. Now I'm going to have to get one for all my half-sized preamps, if they'll fit. That would solve like half the problems with my studio room and may help bring back some of my spark for making music.

I have no recs. Just want to say I'm so excited to see this. I can probably build an audio patch panel.

[–] brucethemoose@lemmy.world 3 points 1 week ago* (last edited 1 week ago) (1 children)

If you can swing $2K, get one of the new mini PCs with an AMD 395 and 64GB+ RAM (ideally 128GB).

They're tiny, low power, and the absolute best way to run the new MoEs like Qwen3 or GLM Air for coding. TBH they would blow a 5060 Ti out of the water, as having a ~100GB VRAM pool is a total game changer.

I would kill for one on an ITX mobo with an x8 slot.

[–] princessnorah@lemmy.blahaj.zone 4 points 1 week ago (2 children)
[–] MalReynolds@piefed.social 3 points 1 week ago (1 children)

Pretty sure that's an x4 PCIe slot (admittedly PCIe 5.0 x4, but not many video cards speak PCIe 5.0). I'd totally trade a USB4 port for an x8, but these laptop chips are pretty constrained lane-wise.

[–] brucethemoose@lemmy.world 4 points 6 days ago* (last edited 6 days ago) (1 children)

It's PCIe 4.0 :(

but these laptop chips are pretty constrained lanes wise

Indeed. I read Strix Halo only has 16 PCIe 4.0 lanes in addition to its USB4, which is reasonable given it isn't supposed to be paired with discrete graphics. But I'd happily trade an NVMe slot (still leaving one) for an x8.

One of the links to a CCD could theoretically be wired to a GPU, right? Kinda like how EPYC can switch its IO between infinity fabric for 2P servers, and extra PCIe in 1P configurations. But I doubt we'll ever see such a product.

[–] MalReynolds@piefed.social 2 points 6 days ago (1 children)

It's PCIe 4.0 :(

Boo! Silly me thinking DDR5 implied PCIe 5.0, what a shame.

Feels like they're testing the waters with Halo. Hopefully a loud 'water's great, dive in' signal gets through and we get something a bit more fit for desktop use, maybe with more memory (and bandwidth) next gen. Still, gotta love the power usage; makes for one hell of a NAS / AI inference server (and inference isn't that fussy about PCIe bandwidth, hell, eGPU works fine as long as the model / expert fits in VRAM).

[–] brucethemoose@lemmy.world 2 points 6 days ago* (last edited 6 days ago) (8 children)

Rumor is its successor is 384-bit, and after that their designs are even more modular:

https://www.techpowerup.com/340372/amds-next-gen-udna-four-die-sizes-one-potential-96-cu-flagship

Hybrid inference prompt processing actually is pretty sensitive to PCIe bandwidth, unfortunately, but again I don’t think many people intend on hanging an AMD GPU off these Strix Halo boards, lol.

[–] brucethemoose@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

Nah, unfortunately it is only PCIe 4.0 x4. That's a bit slim for a dGPU, especially in the future :(

[–] Flax_vert@feddit.uk 2 points 5 days ago (1 children)
[–] nagaram@startrek.website 2 points 5 days ago

The Lenovo ThinkCentre M715q units were $400 total after upgrades. I fortunately had three 32GB kits of RAM from my work's e-waste bin, but if I had to add those it would probably be $550 ish.
The rack was $120 from 52Pi.
I bought 2 extra 10in shelves for $25 each.
The Pi cluster rack was also $50 (shit, I thought it was $20. Not worth).
The patch panel was $20.
There's a UPS that was $80.
And the switch was $80.

So in total I spent $800 on this setup.

To fully replicate it from scratch you would also need to spend $160 on Raspberry Pis and probably $20 on cables.

So $1,000 theoretically.
