this post was submitted on 25 Nov 2025
769 points (98.9% liked)
Programmer Humor
Gemini once told me to "please wait" while it did "further research". I responded with, "that's not how this works; you don't follow up like that unless I give you another prompt first", and it was basically like, "you're right but just give me a minute bro". 🤦
Out of all the LLMs I've tried, Gemini has got to be the most broken. And sadly it's the one LLM the average person is exposed to most, because it's in nearly every Google search.
I'd argue that Gemini is actually really good at summarizing a Google search, filtering the trash from it, and convincing people not to click the actual links, which is how Google makes money.
Yeah but when it's a total crapshoot as to whether or not its summary is accurate, you can't trust it. I adblocked those summaries cause they're useless.
At least some of the competing AIs show their work. Perplexity cites its sources, and even ChatGPT recently added that ability as well. I won't use an LLM unless it does, cause you can easily check the sources it used and see if the slop it spit out has even a grain of truth to it. With Gemini, there's no easy way to verify anything it said beyond just doing the googling yourself, and that defeats the point.
Gemini gets constantly glazed by the AI enthusiast community because it often does very well on benchmarks, when it is literally one of the worst ones to actually use.