this post was submitted on 21 Nov 2025
358 points (96.6% liked)

Technology

top 50 comments
[–] pelespirit@sh.itjust.works 156 points 1 day ago (2 children)

It says it will finish the code, it doesn't say the code will work.

[–] Thorry@feddit.org 67 points 1 day ago (5 children)

Also just because the code works, doesn't mean it's good code.

I had to review some code the other day that was clearly created by an LLM. Two classes needed to talk to each other in a somewhat complex way. So I would expect one class to create some kind of request data object, submit it to the other class, and get back some kind of response data object.

What the LLM actually did was pretty shocking: it used reflection to reach from one class into the other class's private properties and pull out the data it needed. It then just straight up stole the data and did the work itself (wrongly as well, I might add). I just about fell off my chair when I saw this.
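For anyone who hasn't seen this anti-pattern: the original was presumably C# or Java, but the same thing can be sketched in Python, where name mangling plays the role of "private" fields and `getattr` plays the role of reflection. All class and attribute names here are made up for illustration.

```python
class Billing:
    """Owns some private state and exposes a proper API for it."""

    def __init__(self):
        # Double underscore triggers name mangling: this becomes
        # _Billing__rates, Python's version of a "private" field.
        self.__rates = {"standard": 100}

    def quote(self, plan: str) -> int:
        # The intended interface: ask Billing, get an answer back.
        return self.__rates[plan]


billing = Billing()

# What the LLM-style code did: bypass the public API and reach
# straight into the other class's private state via the mangled name.
stolen = getattr(billing, "_Billing__rates")["standard"]

# What the reviewer expected: go through the public interface.
proper = billing.quote("standard")

assert stolen == proper == 100
```

Both paths return the same value today, which is exactly why this slips through casual testing: the coupling to `Billing`'s internals only blows up later, when the private representation changes.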

So I asked the dev, he said he didn't fully understand what the LLM did, he wasn't familiar with reflection. But since it seemed to work in the few tests he did and the unit tests the LLM generated passed, he thought it would be fine.

Also, the unit tests were wrong. I explained to the dev that even with humans it's usually a bad idea to have the person who wrote the code also (exclusively) write the unit tests. Whenever possible, have somebody else write them, so they don't share the same assumptions and blind spots. With LLMs this is doubly true: they will straight up lie in the unit tests, if the tests aren't complete nonsense to begin with.
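The "same assumptions and blind spots" problem has a recognizable shape in generated tests: the test restates the implementation instead of asserting an independently computed expectation. A minimal sketch, with a hypothetical function and hand-picked numbers:

```python
import unittest


def total_price(quantities, unit_price):
    # Hypothetical function under test.
    return sum(quantities) * unit_price


class TotalPriceTest(unittest.TestCase):
    def test_tautology(self):
        # The kind of "passing" test an LLM often emits: the expected
        # value is computed with the same formula as the implementation,
        # so this test can never catch a wrong formula.
        self.assertEqual(total_price([2, 3], 10), sum([2, 3]) * 10)

    def test_independent_expectation(self):
        # What an independent test author writes: a hand-computed value.
        # 2 + 3 = 5 items at 10 each, so the total should be 50.
        self.assertEqual(total_price([2, 3], 10), 50)
```

If `total_price` were subtly wrong, the first test would still pass while the second one failed, which is the whole point of having someone (or something) else write the expectations.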

I swear to the gods, LLMs don't save time or money, they just give the illusion they do. Some task of a few hours will take 20 min and everyone claps. But then another task takes twice as long and we just don't look at that. And the quality suffers a lot, without anyone really noticing.

[–] airgapped@piefed.social 14 points 22 hours ago

Great description of a problem I noticed with most LLM generated code of any decent complexity. It will look fantastic at first but you will be truly up shit creek by the time you realise it didn't generate a paddle.

[–] Kissaki@feddit.org 4 points 16 hours ago

So I asked the dev, he said he didn’t fully understand what the LLM did, he wasn’t familiar with reflection.

Big baffling facepalm moment.

If they'd at least note in the changeset description that it was LLM-generated, it would be easier to interpret and assess.

load more comments (3 replies)
[–] TORFdot0@lemmy.world 10 points 1 day ago (1 children)

I was going to say. The code won’t compile, but it will be “finished”.

[–] WaitThisIsntReddit@lemmy.world 7 points 1 day ago (1 children)

A couple of agent iterations and it will compile. It definitely won't do what you wanted, though, and if it does, it'll be in the dumbest way possible.

[–] TORFdot0@lemmy.world 7 points 1 day ago* (last edited 1 day ago) (1 children)

Yeah, you can definitely bully AI into giving you something that will run if you yell at it long enough. I don’t have that kind of patience.

Edit: typically I see it just silently dump errors to /dev/null if you complain about it not working lol

load more comments (1 replies)
[–] floofloof@lemmy.ca 58 points 1 day ago* (last edited 1 day ago) (1 children)

Ooh, unemployment! How exciting! I love Microsoft now.

[–] BedSharkPal@lemmy.ca 46 points 1 day ago (4 children)

Seriously who the hell are they trying to sell this to?

Are they just that desperate to keep the hype train going?

[–] TachyonTele@piefed.social 51 points 1 day ago (2 children)

Business owners. People that don't want to spend money on annoying stuff like wages.

load more comments (2 replies)

The circlejerk of tech bros and busidiots who haven't built a damn thing in their lives.

load more comments (2 replies)
[–] WhatGodIsMadeOf@feddit.org 52 points 1 day ago (2 children)

Copilot, turn on the gas stove without the pilot. Copilot, in 3 hours light the pilot.

[–] errer@lemmy.world 24 points 1 day ago (1 children)

My Windows automatically read these instructions from my screen and I died!

load more comments (1 replies)
[–] apfelwoiSchoppen@lemmy.world 4 points 1 day ago

Username checks out

[–] thejml@sh.itjust.works 48 points 1 day ago (3 children)

Copilot keeps finishing my code for me in near real time... it completely disrupts my train of thought and my productivity dropped tremendously. I finally disabled it.

I LIKE writing code, stop trying to take the stuff away that I WANT to do and instead take away the stuff I HATE doing.

[–] lauha@lemmy.world 22 points 1 day ago (1 children)

What I don't want AI to do:

  • write code for me
  • write fixes for me

What I want it to do:

  • find bugs and tell me about them (but still don't fix them)
load more comments (1 replies)
[–] Serinus@lemmy.world 6 points 1 day ago (3 children)

Yeah, I just wrote a blog post comment about how I enjoy using Copilot. But that's when I explicitly ask it a question or give it a task. The auto complete is wrong more often than it's right.

Probably doesn't help that if it were tedious boilerplate code, I would have already asked it explicitly.

load more comments (3 replies)
load more comments (1 replies)
[–] kreskin@lemmy.world 40 points 1 day ago (1 children)

Yes, but all the code will be wrong, and you will spend your entire day chasing stupid mistakes and hallucinations in it. I'd rather just write the code myself, thanks.

[–] slampisko@lemmy.world 14 points 1 day ago

Yeah! I can make my own stupid mistakes and hallucinations, thank you very much!

[–] Treczoks@lemmy.world 37 points 20 hours ago

What they forget to mention is that you then spend the rest of the week fixing the bugs it introduced and explaining why your code deleted the production database...

[–] garretble@lemmy.world 32 points 1 day ago (15 children)

I had a bit of a breakthrough with some personal growth with my code today.

I learned a bit more about entity framework that my company is using for a project, and was able to create a database table, query it, add/delete/update, normal CRUD stuff.

I normally work mostly on front end code, so it was rewarding to learn a new skill and see the data all the way from the database to the UI and back - all my code. I felt great after doing a code review this afternoon to make sure I wasn’t missing anything, and we talked about some refactoring to make it better.

AI will never give you that.

load more comments (15 replies)
[–] CosmoNova@lemmy.world 28 points 1 day ago

I was finished with Windows before Microshit finished Copilot.

[–] llama@lemmy.zip 16 points 5 hours ago

Actually, it won't be finishing anything, because code is disposable now and nobody cares what trivial app somebody can churn out.

[–] FlashMobOfOne@lemmy.world 16 points 19 hours ago

Love how they're pretending that an LLM is useful for any task that needs precision.

[–] MadMadBunny@lemmy.ca 14 points 1 day ago* (last edited 1 day ago) (13 children)

But, will it work, huh? HUH?

I can also type a bunch of random sentences. Doesn’t make them any more understandable.

[–] andallthat@lemmy.world 4 points 22 hours ago (1 children)

but can YOU do it before I finish my coffee?

load more comments (1 replies)
load more comments (12 replies)
[–] kyonshi@piefed.social 13 points 15 hours ago

Because you won't have time to drink that coffee if you put this code into production

[–] YesButActuallyMaybe@lemmy.ca 13 points 17 hours ago (2 children)

Ah get outta here! Next time they’ll say that co pilot also chooses my furry porn and controls my buttplug while it codes for me.

[–] Prior_Industry@lemmy.world 12 points 14 hours ago (1 children)

I mean, it gets there in the end, but it's often three or four prompts before it provides working code for a relatively simple PowerShell script. Can't imagine it scales that well to complex code at the moment, but then again I'm not a coder.

load more comments (1 replies)
[–] lightnegative@lemmy.world 11 points 8 hours ago (1 children)

Writing code is the reward for doing the thinking. If the LLM does it then software engineering is no fun.

It's like painting - once you've finally finished the prep, which is 90% of the effort, actually getting to paint is the reward

load more comments (1 replies)
[–] dreadbeef@lemmy.dbzer0.com 9 points 22 hours ago

My problem is that the dev and stage environments are giving me 502 gateway errors when hitting only certain api endpoints from the app gateway. My real problem is devops aren't answering my support tickets and telling me which terraform var file I gotta muck with and tell me what to fix on it. I'm sure you'll be fixed soon though right copilot?

[–] ABetterTomorrow@sh.itjust.works 9 points 21 hours ago (2 children)

Going by the headline statement, it should be complete and work 100%. Big doubt.

[–] jj4211@lemmy.world 5 points 21 hours ago* (last edited 15 hours ago)

No, just complete. Whatever the dude does may have nothing to do with what you needed it to do, but it will be "done"

load more comments (1 replies)
[–] popekingjoe@lemmy.world 9 points 1 day ago

But... But I don't want it to. 😮‍💨

[–] melsaskca@lemmy.ca 8 points 21 hours ago

I would rather paint a portrait myself, spending the time to do it, than ask some computer prompt to spit out a picture. The same logic applies to coding for me.

[–] Siegfried@lemmy.world 7 points 20 hours ago

If that's what they are aiming at, I feel like their AI is actually supposed to be the pilot and the user the copilot.

[–] jjlinux@lemmy.zip 7 points 6 hours ago (2 children)

These fuckers at MicroShit have lost all the ability needed to read a room.

load more comments (2 replies)
[–] WhatGodIsMadeOf@feddit.org 7 points 5 hours ago (1 children)

Does its ai learn from people using vscode?

load more comments (1 replies)
[–] melfie@lemy.lol 7 points 16 hours ago

A more appropriate line would be that Copilot can shit out code faster than you can pinch off your own loaf.

[–] DupaCycki@lemmy.world 4 points 19 hours ago

Technically true, but nobody said the code will be at all functional. I'm pretty sure I can finish about 800000 coffees before Copilot generates anything usable that is longer than 3 lines.

[–] _stranger_@lemmy.world 4 points 16 hours ago* (last edited 16 hours ago)

I can drink coffee pretty slow, but I don't think I can drink it that slow

load more comments
view more: next ›