corroded

joined 2 years ago
[–] corroded@lemmy.world 2 points 21 hours ago (1 children)

Personally, I'd really like the option of running LLMs locally, but the hardware requirements make that difficult. Small models run okay on a CPU or a low-end GPU, but anything approaching the capability and usefulness of GPT-4 or DeepSeek requires a hefty GPU setup. Considering how much even old hardware like the P40 has gone up in price, it's hard to justify the cost.

[–] corroded@lemmy.world 5 points 22 hours ago (5 children)

What's the deal with OpenAI and xAI? Apparently Musk is no longer on OpenAI's board but is still a financial backer, yet he's also starting a company to compete directly with them. Why sabotage his own interests?

[–] corroded@lemmy.world 1 points 1 week ago

I believe you're correct. I didn't realize that I had my containers set to privileged. That would explain why I've never had issues with mounting shares.

[–] corroded@lemmy.world 1 points 1 week ago (1 children)

I'm sorry, I think I gave you bad information. I have my containers set to unprivileged=no. I forgot about the "double negative" in how that flag was described.

So apparently my containers are privileged, so I don't think I've ever tried to do what you are doing.

[–] corroded@lemmy.world 1 points 1 week ago* (last edited 1 week ago) (5 children)

I'm leaving this here for continuity, but don't follow the advice below. My containers are actually set as privileged, so I was wrong.

I have a server that runs Proxmox and a server that runs TrueNAS, so a very similar setup to yours. As long as your LXC is tied to a network adapter that has access to your file server (it almost certainly is unless you're using multiple NICs and/or VLANs), you should be able to mount shares inside your LXC just like you do on any other Linux machine.

Can you ping your file server from inside the container? If so, the issue is with the configuration in the container itself. Privileged or unprivileged shouldn't matter here. How are you trying to mount the CIFS share?

Edit: I see that you're mounting the share in Proxmox and mapping it to your container. You don't need to do that; just mount it in the container itself, along the lines of the sketch below.
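
A minimal sketch of what that looks like from inside the container, assuming a privileged LXC with cifs-utils installed. The host name (truenas.local), share name, credentials, and mount point are placeholder assumptions, not values from this thread:

```python
import os
import subprocess

# Hypothetical placeholders: substitute your TrueNAS host, share name,
# credentials, and mount point.
SHARE = "//truenas.local/media"
MOUNT_POINT = "/mnt/media"
OPTIONS = "username=myuser,password=mypass,uid=1000,gid=1000"

# Create the mount point inside the container, then mount the CIFS share.
# Requires root and the cifs-utils package installed in the container.
os.makedirs(MOUNT_POINT, exist_ok=True)
subprocess.run(
    ["mount", "-t", "cifs", SHARE, MOUNT_POINT, "-o", OPTIONS],
    check=True,
)
```

For a persistent mount, the equivalent entry would normally go in the container's /etc/fstab instead.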

[–] corroded@lemmy.world 8 points 1 week ago (1 children)

I feel like the vast majority of people just want to log onto ChatGPT and ask their questions, not host an open-source LLM themselves. I suppose other organizations could host DeepSeek, though.

Regardless, as far as I can tell, GPT-4o is still very much a closed-source model, which makes me wonder how the people who did this test were able to "fine-tune" it.

[–] corroded@lemmy.world 13 points 1 week ago (4 children)

They say they did this by "fine-tuning GPT-4o." How is that even possible? Despite the name, I thought OpenAI refused to release its models to the public.