this post was submitted on 15 Apr 2025
1445 points (95.9% liked)
you are viewing a single comment's thread
If that email needs to go to a client or stakeholder, then our culture won't accept just the prompt.
Where it really shines is translation, transcription and coding.
Programmers can easily double their productivity and increase the quality of their code, tests and documentation while reducing bugs.
Translation is basically perfect. Human translators aren't needed; at most they can review, but the output is nearly errorless, so they rarely change anything.
Transcribing meetings also works very well. No typos or grammar errors — just occasional issues with acronyms and technical terms, and those are easy to spot and correct.
As a programmer, there are so very few situations where I've seen LLMs suggest reasonable code. There are some that are good at it in some very limited situations but for the most part they're just as bad at writing code as they are at everything else.
I think the main gain is in automation scripts for people with little coding experience. They don't need perfect or efficient code; they just need something barely functioning, which is something LLMs can generate. It doesn't always work, but most of the time it works well enough.
Not really. I'm a programmer who doesn't deal with math at all — just overly complicated CRUDs — and even for me the AI is still completely wrong and/or a waste of time 9 times out of 10. And I can usually spot when my colleagues are trying to use LLMs, because they submit overly descriptive yet completely fucking pointless refactors in their PRs.