this post was submitted on 02 Jun 2025
667 points (98.8% liked)
Programmer Humor
Funny thing is, correct JSON is easy to "force" with grammar-based sampling (aka it literally can't output invalid JSON) + completion prompting (aka start with the correct answer and let it fill in what's left, a feature now deprecated by OpenAI), but LLM UIs/corporate APIs are kinda shit, so no one does that...
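To make the grammar-based sampling point concrete, here's a toy sketch of the idea (my own illustration, not any real inference API): at every decoding step, mask out every token whose addition could not still be extended into valid JSON, so malformed output is impossible by construction. Real implementations (llama.cpp's GBNF grammars, libraries like Outlines) compile the grammar into an automaton and mask logits; this sketch brute-forces a tiny vocabulary instead, and the uniform `rng.choice` stands in for sampling from the model's (masked) distribution.

```python
import itertools
import json
import random

# Toy vocabulary: each "token" is a small JSON fragment.
VOCAB = ['{', '}', '"a"', '"b"', ':', ',', '1', '2']

def sentences(max_tokens=5):
    """Enumerate every token sequence up to max_tokens that parses as JSON."""
    for n in range(1, max_tokens + 1):
        for seq in itertools.product(VOCAB, repeat=n):
            try:
                json.loads(''.join(seq))
                yield seq
            except ValueError:
                continue

# Precompute every prefix of every valid sentence. A real grammar engine
# answers "is this a valid prefix?" with an automaton instead of a table.
PREFIXES = set()
for seq in sentences():
    for i in range(1, len(seq) + 1):
        PREFIXES.add(seq[:i])

def constrained_sample(rng, max_tokens=5):
    """Sample tokens, but only from those that keep the output a prefix
    of some valid JSON sentence. Invalid continuations are masked out,
    so the result is always parseable."""
    out = ()
    while len(out) < max_tokens:
        allowed = [t for t in VOCAB if out + (t,) in PREFIXES]
        if not allowed:
            break
        out = out + (rng.choice(allowed),)
        try:
            # Stop once we have a complete parse (mimics emitting EOS).
            json.loads(''.join(out))
            return ''.join(out)
        except ValueError:
            continue
    return ''.join(out)

print(constrained_sample(random.Random(0)))
```

Because every intermediate string stays a prefix of some valid sentence, the sampler can never paint itself into a corner: any seed yields output that `json.loads` accepts, which is the whole trick.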
A conspiratorial part of me thinks that's on purpose. It encourages burning (read: buying) more tokens to get the right answer, encourages using big models (where smaller, dumber, (gasp) prompt-cached open-weights ones could get the job done), and keeps the users dumb. And it fits the Altman narrative of "we're almost at AGI, I just need another trillion to scale up with no other improvements!"
There's nothing conspiratorial about it. Goosing queries by ruining the reply is the bread and butter of Prabhakar Raghavan's playbook. Other companies saw that.
Edit: wrong comment