this post was submitted on 18 Jul 2025
278 points (90.4% liked)
you are viewing a single comment's thread
view the rest of the comments
I would think that, since it's been recognised that these messages are costing a lot of energy (== money) to process, companies would at some point add a simple <if input == "thanks"> type filter to catch a solid portion of them. Why haven't they?
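A minimal sketch of what that kind of filter could look like (all names here — `TRIVIAL_CLOSERS`, `handle_message`, `call_llm` — are made up for illustration, not from any real product):

```python
# Minimal sketch of the "thanks" filter described above: catch trivial
# closing messages with plain string matching before any model is invoked.
# Every name in this snippet is hypothetical.

TRIVIAL_CLOSERS = {"thanks", "thank you", "thx", "ty", "ok", "okay", "bye"}

def handle_message(text: str) -> str:
    # Normalise and compare against a tiny allow-list; this costs essentially nothing.
    if text.strip().lower().rstrip("!.") in TRIVIAL_CLOSERS:
        return "You're welcome!"   # canned reply, no inference needed
    return call_llm(text)          # only real messages reach the expensive model

def call_llm(text: str) -> str:
    # Placeholder for the actual generative-model call.
    raise NotImplementedError
```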
Because, it seems, the only progress we know how to make with computers is backwards.
Generative AI is supposed to be destructive. It's a feature.
It won't be as simple as that. The engineers who work on these systems can only think in terms of LLMs and text classification, so they'd run your message through a classifier and end the conversation if it returns a "goodbye or thanks" score above 0.8, saving exactly zero compute.
I mean, even if we resort to using a neural network to check "is the conversation finished?", that hyper-specialised NN would likely be orders of magnitude cheaper to run than a standard LLM, so you could still save quite a bit of power/money by using it as a filter in front of the actual LLM, no?
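Roughly how that gating could look, assuming you'd fine-tuned a small classifier for the job — the model name, the `CLOSER` label, and the `respond`/`call_llm` helpers below are all placeholders for illustration, not a real setup:

```python
# Sketch of the gating idea: a tiny text-classification model sits in front of
# the LLM and decides whether the big model needs to run at all. The model name
# below is a placeholder -- you'd fine-tune a small classifier on
# "closing message vs. real request" yourself; the point is the structure.
from transformers import pipeline

# A small distilled classifier is orders of magnitude cheaper per call than a
# multi-billion-parameter generative model.
closer_clf = pipeline("text-classification", model="your-tiny-closer-classifier")

def respond(text: str) -> str:
    result = closer_clf(text)[0]   # e.g. {"label": "CLOSER", "score": 0.97} (hypothetical labels)
    if result["label"] == "CLOSER" and result["score"] > 0.8:
        return "You're welcome!"   # canned reply, no LLM inference
    return call_llm(text)          # only substantive requests pay full inference cost

def call_llm(text: str) -> str:
    raise NotImplementedError      # placeholder for the expensive LLM call
```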