This post was submitted on 15 Sep 2025 to Technology (78 points, 89.0% liked)
I run local LLMs and they cost me $0 per query. I don't plan to charge myself more than that at any point, even if the AI bubble bursts.
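For context, here's roughly what that setup looks like. A minimal sketch, assuming you have llama-cpp-python installed and a GGUF model file downloaded; the model path, prompt, and parameters below are just placeholders:

```python
# Minimal local-LLM query via llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder; any GGUF model you've downloaded works.
from llama_cpp import Llama

llm = Llama(model_path="./models/example-7b.gguf", n_ctx=2048)

out = llm("Q: Why do local models cost $0 per query? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```

No API key, no per-token billing; the only marginal cost is the electricity the machine draws while it generates.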
Really? I get what you're trying to say, but at least the power consumption of the machine you need to run the model on will always be your cost. Depending on your energy price, it is not $0 per query.
It's so close to zero that it makes no difference. It's not a noticeable factor in my decision about whether to use it for any given task.
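To put rough numbers on it (these are illustrative assumptions, not measurements from my setup): suppose the GPU draws 350 W under load, a query takes 20 seconds of generation, and electricity costs $0.15/kWh:

```python
# Back-of-the-envelope per-query electricity cost for a local LLM.
# All three inputs are illustrative assumptions, not measured values.
POWER_WATTS = 350     # GPU draw under load
QUERY_SECONDS = 20    # generation time for one query
PRICE_PER_KWH = 0.15  # electricity price in USD

kwh_per_query = POWER_WATTS * QUERY_SECONDS / 3_600_000  # watt-seconds -> kWh
cost_per_query = kwh_per_query * PRICE_PER_KWH

print(f"{kwh_per_query:.5f} kWh -> ${cost_per_query:.5f} per query")
# 0.00194 kWh -> $0.00029 per query
```

Even if you triple every one of those numbers, it's still under a cent per query.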
Training a brand-new model is expensive, but once the model has been created it's cheap to run. If OpenAI went bankrupt tomorrow and shut down, the models it had trained would simply be sold off to other companies, who would run them free of the debt burden OpenAI accrued from the research and training costs that produced them. That's actually a fairly common pattern for first movers: they spend a lot of money blazing the trail, and then other companies follow along afterwards and eat their lunch.
That's great if they actually work. But my experience with the big, corporate-funded models has been pretty freaking abysmal after more than a year of trying to adopt them into my daily workflow. I can't imagine local models perform better when they're trained on much, much smaller datasets and run with much, much less computing power.
I'm happy to be proven wrong, of course, but I just don't see how it's possible for local models to compete with the Big Boys in terms of quality... and the quality of the largest models is only middling at best.
You're free not to use them. An awful lot of people seem to be using them, though, myself included. They must be getting something out of them, or they'd stop.