But, will it work, huh? HUH?
I can also type a bunch of random sentences. Doesn't make them any more understandable.
but can YOU do it before I finish my coffee?
Some models are getting so good that they can patch user-reported software defects following test-driven development, with minimal or no changes required in review. Specifically Claude Sonnet and Gemini.
So the claims are at least legit in some cases
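To make that concrete, this is roughly what "patching a defect following TDD" means in practice (the pagination bug, function, and test below are made up purely for illustration): a failing regression test is written from the report first, the model is asked for a patch that makes it pass, and a human only reviews the diff and the test.

```python
# Hypothetical defect: a customer reports that the final, partially filled
# page of results is silently dropped.

def paginate(items, page_size):
    """Split items into pages of at most page_size elements."""
    # The buggy version computed num_pages = len(items) // page_size, which
    # floor-divides away the partial last page. Slicing by step avoids that.
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]


# Step 1: turn the report into a failing regression test (the TDD part).
def test_partial_last_page_is_kept():
    pages = paginate(list(range(7)), 3)
    assert len(pages) == 3        # failed before the fix: only 2 pages came back
    assert pages[-1] == [6]

# Step 2: ask the model for the smallest patch that makes the test pass.
# Step 3: a human reviews the diff and the test before merging.
```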
Oh good. They can show us how it's done by patching open-source projects for example. Right? That way we will see that they are not full of shit.
Where are the patches? They have trained on millions of open-source projects after all. It should be easy. Show us.
That's an interesting point, and leads to a reasonable argument that if an AI is trained on a given open source codebase, developers should have free access to use that AI to improve said codebase. I wonder whether future license models might include such clauses.
Free access to the model is one thing, but free access to the compute to run it is another.
Are you going to spend your tokens on open source projects? Show us how generous you are.
I'm not the one trying to prove anything, and I think it's all bullshit. I'm waiting for your proof though. Even with a free open-source black box.
What's a free open source black box?
Some kind of bad joke that went way over your head. Where are your merge requests?
At work, the software is not open source.
I would use it for contributions to open source projects, but I don't pay for any AI subscriptions, and I can't use my work account for Copilot Enterprise on non-work projects.
Every week for the last year or so I have been testing various Copilot models against customer-reported software defects, and it's seriously at the point now where, with a single prompt, Gemini 2.5 Pro solves the entire defect, unit tests included. Some need no changes in review and are good to go.
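The harness for that kind of weekly test doesn't have to be fancy; a rough sketch of the shape of it is below (ask_model, the report format, and the pytest/git invocations are stand-ins here, not the actual setup):

```python
# Sketch of a defect-fixing loop: hand one customer report plus a failing
# regression test to a model, apply its patch, and let the test suite decide.
# ask_model() is a placeholder for whichever model/API is being evaluated.
import subprocess

def ask_model(prompt: str) -> str:
    """Placeholder: send the prompt to the model under test, return its diff."""
    raise NotImplementedError

def suite_passes() -> bool:
    # Run the project's tests, including the newly added regression test.
    return subprocess.run(["pytest", "-q"]).returncode == 0

def try_fix(defect_report: str, regression_test: str) -> bool:
    prompt = (
        "Customer-reported defect:\n" + defect_report
        + "\n\nThis regression test currently fails:\n" + regression_test
        + "\n\nReply with a unified diff that makes the whole suite pass."
    )
    patch = ask_model(prompt)
    # Apply the model's diff from stdin; a failure here counts as a miss too.
    subprocess.run(["git", "apply", "-"], input=patch, text=True, check=True)
    return suite_passes()  # green suite => queue for human review, never auto-merge
```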
As an open source maintainer of a large project, I have noticed a huge uptick in PRs, which has created a larger review workload; I'm almost certain these are due to LLMs. The quality of a typical PR has not decreased since LLMs became available, and thus far I am very glad.
If I were to speculate, I'd guess the huge increase in context windows is what made the tools viable; models like GPT-5 are garbage on any sizable codebase.
Will it execute... probably.
Will it execute what you want it to... probably not.
You really have no idea. I'm working with these tools in industry, and with test-driven development and human review they are performing better than human developers.
YMMV.