nagaram

joined 2 years ago
[–] nagaram@startrek.website 2 points 1 week ago

I was thinking about that now that I have Mac Minis on my mind. I might even just set a Mac Mini on top, next to the modem.

[–] nagaram@startrek.website 3 points 1 week ago* (last edited 1 week ago)

Ollama + Gemma/Deepseek is a great start. I've only run AI on my AMD 6600XT, and that wasn't great; everything I know says AMD is fine for gaming these days but not really for LLM or gen-AI tasks.

An RTX 3060 12GB is the easiest and best self-hosted option, in my opinion. New for <$300, and used for even less. However, I ran a GeForce 1660 Ti for a while, and that's <$100.
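
If anyone wants a sense of what the Ollama side looks like once it's installed, here's a minimal sketch. It assumes the default server on localhost:11434 and a model that's already been pulled; the model name is just an example:

```python
# Minimal sketch: ask a local Ollama server a question.
# Assumes `ollama serve` is running and a model has been pulled,
# e.g. `ollama pull gemma2` (model name is just an example).
import json
import urllib.request

def ask(prompt: str, model: str = "gemma2") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask("Write a bash script that renames *.JPG to *.jpg"))
```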

[–] nagaram@startrek.website 3 points 1 week ago

A Mac is a very funny and objectively correct option.

[–] nagaram@startrek.website 4 points 1 week ago (1 children)

I think I'm going to have a harder time fitting a Threadripper in my 10-inch rack than I am getting any GPU in there.

[–] nagaram@startrek.website 2 points 1 week ago

I do already have a NAS. It's in another box in my office.

I was considering replacing the Pis with a JBOD enclosure, passing that through to one of my boxes via USB, and virtualizing something. I compromised by putting 2TB SATA SSDs in each box to use for database stuff and then backing that up to the spinning rust in the other room.

How do I do that? Good question. I take suggestions.
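
One option I've been kicking around is just a scheduled rsync from the SSDs to the spinning rust. Only a sketch; the host and paths are hypothetical, and it assumes rsync over SSH is already set up between the boxes:

```python
# Sketch: push the SSD database backups to the NAS's spinning rust.
# Host and paths are made up; assumes rsync and SSH keys are in place.
import subprocess

SRC = "/mnt/ssd/db-backups/"          # local 2TB SATA SSD
DEST = "nas.local:/tank/backups/db/"  # NAS in the other room

subprocess.run(
    ["rsync", "-a", "--delete", "--partial", SRC, DEST],
    check=True,
)
```

Run it from cron or a systemd timer on each box and the SSDs stay disposable.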

[–] nagaram@startrek.website 5 points 1 week ago (2 children)

With an RTX 3060 12GB, I have been perfectly happy with the quality and speed of the responses. It's much slower than my 5060 Ti, which I think is the sweet spot for text-based LLM tasks. A larger context window, provided by more VRAM or a web-based AI, is cool and useful, but I haven't found the need for it yet in my use case.

As you may have guessed, I can't fit a 3060 in this rack; it's in a different server that houses my NAS. I have done AI on my 2018 Epyc server CPU, and it's just not usable. Even with 109GB of RAM, not usable. Even clustered, I wouldn't try running anything on these machines; they're for Docker containers and Minecraft servers. Jeff Geerling probably has a video on trying to run an AI on a bunch of Raspberry Pis. I just saw his video using Ryzen AI Strix boards, and that was ass compared to my 3060.

But for my use case, I am just asking AI to generate simple scripts based on manuals I feed it, or some sort of writing task: I either have it take my notes on a topic and make an outline that makes sense, which I then fill in, or I feed it finished writing and ask for grammar or tone fixes. That's fucking it, and it boggles my mind that anyone is doing anything more intensive than that. I am not training anything, and 12GB of VRAM is plenty if I wanna feed it like 10-100 pages of context. Would it be better with a 4090? Probably, but for my uses I haven't noticed a difference in quality between my local LLM and the web-based stuff.
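
For a sense of scale, here's the back-of-the-envelope math on what "10-100 pages of context" means in tokens. The words-per-page and words-per-token figures are rough assumptions, not measurements:

```python
# Back-of-envelope: how many tokens is "10-100 pages of context"?
# Rough assumptions: ~500 words per page, ~0.75 words per token.
WORDS_PER_PAGE = 500
WORDS_PER_TOKEN = 0.75

for pages in (10, 100):
    tokens = pages * WORDS_PER_PAGE / WORDS_PER_TOKEN
    print(f"{pages:>3} pages = ~{tokens:,.0f} tokens")
# ->  10 pages = ~6,667 tokens; 100 pages = ~66,667 tokens.
# The low end fits a default-size context window easily; the high
# end needs a much bigger window (and the KV-cache VRAM to match).
```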

[–] nagaram@startrek.website 10 points 1 week ago (1 children)

That's fair and justified. I have the label maker right now in my hands. I can fix this at any moment and yet I choose not to.

I'm the man feeding orphans to the orphan-crushing machine. I can stop this at any moment.

[–] nagaram@startrek.website 7 points 1 week ago

Oh, and my home office setup uses Tiny-in-One monitors, so I configured these by plugging them into my monitor, which was sick.

I'm a huge fan of this all-in-one idea that's still upgradable.

[–] nagaram@startrek.website 12 points 1 week ago

These are M715q ThinkCentres with a Ryzen 5 Pro 2400GE.

[–] nagaram@startrek.website 11 points 1 week ago (2 children)

Not much. As much as I like LLMs, I don't trust them for more than rubber duck duty.

Eventually I want to have a Copilot-at-home setup where I can feed it a notes database and whatever manuals and books I've read, so it can draw from them when I ask it questions.

The problem is that my best GPU, a 5060 Ti, is my gaming GPU, and it's in a Bazzite gaming PC, so it's hard to get AI work out of it because of Bazzite's "no, I won't let you break your computer" philosophy, which is exactly why I chose it. And my second-best GPU is a 3060 12GB, which is really good, but if I made a dedicated AI server, I'd want it to be better than my current one.
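
The rough shape of that Copilot-at-home idea, as I understand it, is embed-then-retrieve: index the notes once, pull the closest ones at question time, and stuff them into the prompt. A minimal sketch, assuming a local Ollama server with an embedding model pulled; the model name and the notes are just examples:

```python
# Sketch of the "feed it my notes" idea: embed notes once, then
# retrieve the one closest to a question. Assumes a local Ollama
# server with an embedding model pulled, e.g.
# `ollama pull nomic-embed-text` (model name is an example).
import json
import urllib.request

def embed(text: str) -> list[float]:
    req = urllib.request.Request(
        "http://localhost:11434/api/embeddings",
        data=json.dumps({"model": "nomic-embed-text", "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

# Example notes; in the real setup these would come from the database.
notes = ["Proxmox backups run nightly at 02:00", "The 3060 lives in the NAS box"]
index = [(n, embed(n)) for n in notes]

q = embed("where is the 3060?")
best = max(index, key=lambda pair: cosine(q, pair[1]))
print("most relevant note:", best[0])
```

From there it's the same prompt-stuffing trick as with the manuals, just with the retrieved notes instead of a whole document.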

[–] nagaram@startrek.website 13 points 2 weeks ago

Alternative:

*looks inside*

The absolute least educated read you've ever seen

[–] nagaram@startrek.website 3 points 2 weeks ago

Fuck, we were talking to an expert.
