I'll try arguing in the opposite direction for the sake of it:
An "AI", if not specifically tweaked, is just a bullshit machine approximating reality same way human-produced bullshit does.
A human is a bullshit machine with an agenda.
Depending on the cost of decisions made, an "AI", if it's trained on properly vetted data and not tweaked for an agenda, may be better than a human.
If that cost is high enough, and so is the conflict of interest, a dice set might be better than a human.
There are positions where any decision except a few is acceptable, yet malicious humans regularly pick one of those few.
Your argument becomes idiotic once you understand the actual technology. The AI bullshit machine's agenda is "give a nice answer" ("factual" is not an idea that has a neural center in the AI brain) and "make the reader happy". The human "bullshit" machine has many agendas, but it would not have got so far if it were spouting just happy bullshit (though I guess America is becoming a very special case).
It doesn't. I understand the actual technology. There are applications of human decision-making where an AI is possibly better.
An LLM does no decision-making. At all. It spouts (as you say) bullshit. If there is enough training data saying "Trump is divine", the LLM will predict that Trump is divine, with no second thought (and no first thought either); the toy sketch below illustrates the mechanism. It's not even great to use as a language-based database.
Please don't even consider LLMs as "AI".
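For what it's worth, here's a minimal toy sketch of the parroting mechanism being described. It is emphatically not how a real LLM works internally (those are neural networks, not bigram count tables), but the training objective is the same in spirit: predict the likeliest continuation seen in the data, with truth never consulted. The corpus and everything else here are made up for illustration.

```python
# Toy bigram "language model": a vastly simplified, hypothetical sketch.
# Real LLMs are neural nets, but the objective is the same in spirit:
# predict the most likely next token given the data; truth is never checked.
from collections import Counter, defaultdict

corpus = "trump is divine . trump is divine . trump is mortal .".split()

# Count how often each token follows each other token in the training data.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent continuation seen in training."""
    return bigrams[token].most_common(1)[0][0]

print(predict_next("is"))  # 'divine' -- because the majority of the data says so
```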
Even an RNG does decision-making.
I know what LLMs are, thank you very much!
If you wanted to even understand my initial point, you already would have.
Things have become really grim if people who can't read a short message are trying to lecture me on the fundamentals of LLMs.
I wouldn't define flipping coins as decision making. Especially when it comes to blanket governmental policy that has the potential to kill (or severely disable) millions of people.
You seem not to want anyone to teach you anything, and you come across as completely dejected by such perceived attempts.
No, I don't seem that. I don't like being ascribed opinions I haven't expressed.
When your goal is to avoid a certain most harmful subset of such decisions, and living humans are constantly pressured by power and corrupt profit to pick exactly that subset, flipping coins is preferable, if those are the two options we're choosing between.
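To make that concrete, here's a hedged toy simulation of the dice-versus-corrupt-human point. The decision space, the harmful subset, and the "corrupt human" behavior are all invented assumptions, not data about any real decision-maker.

```python
# Toy model of the argument above (all names and numbers are made up):
# among N possible decisions, a few are catastrophic. A corrupt decider
# is pressured toward exactly those; a fair die picks them only at base rate.
import random

options = ["A", "B", "C", "D", "E"]   # hypothetical decision space
harmful = {"E"}                        # the "few unacceptable" decisions

def corrupt_human() -> str:
    # Conflict of interest: reliably drawn to the harmful choice.
    return "E"

def dice_roll() -> str:
    # No agenda: every option is equally likely.
    return random.choice(options)

trials = 10_000
for decider in (corrupt_human, dice_roll):
    bad = sum(decider() in harmful for _ in range(trials))
    print(f"{decider.__name__}: harmful picks {bad / trials:.0%}")
# corrupt_human lands in the harmful subset 100% of the time;
# dice_roll only ~20% -- hence "a dice set might be better than a human".
```

Under these (invented) assumptions, the coin or die never does worse than chance on the harmful subset, while the pressured human does worse than chance by construction; that's the whole claim.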
It kinda seems like you don’t understand the actual technology.