this post was submitted on 14 Aug 2025
216 points (100.0% liked)

Technology

[–] kescusay@lemmy.world 21 points 3 hours ago (4 children)

I have to test it with Copilot for work. So far, in my experience, its "enhanced capabilities" mostly involve doing things I didn't ask it to do, extremely quickly. For example, it massively fucked up the CSS in an experimental project when I instructed it to extract a React element into its own file.

That's literally all I wanted it to do, yet it took it upon itself to make all sorts of changes to styling for the entire application. I ended up reverting all of its changes and extracting the element myself.

Suffice it to say, I will not be recommending GPT-5 going forward.

[–] GenChadT@programming.dev 10 points 3 hours ago (1 children)

That's my problem with "AI" in general. It seems impossible to "engineer" a complete piece of software when using LLMs in any capacity beyond editing a line or two inside individual functions. Too many times I've asked GPT/Gemini to make a small change to a file and had to revert the request because it took it upon itself to re-engineer the architecture of my entire application.

[–] hisao@ani.social 3 points 1 hour ago (1 children)

I make it write entire functions for me: one prompt = one small feature, or sometimes one or two functions that are part of a feature, or one refactoring. I make manual edits quickly and prompt the next step. It easily does things for me like parsing obscure binary formats, threading a new piece of state through the whole application to the levels where it's needed, or doing massive refactorings. Idk why it works so well for me and so badly for other people; maybe it loves me. I've only ever used 4.1 and possibly 4o in free mode in Copilot.

[–] FauxLiving@lemmy.world 2 points 59 minutes ago

It's a lot of people not understanding the kinds of things it can do vs the things it can't do.

It was like when people tried to search early Google by typing plain language queries ("What is the best restaurant in town?") and getting bad results. The search engine had limited capabilities and understanding language wasn't one of them.

If you ask an LLM to write a function to print the sum of two numbers, it can do that with a high success rate. If you ask it to create a new operating system, it will produce hilariously bad results.
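To make the asymmetry concrete: the first request is a closed, few-line task. A minimal sketch of the sort of output you'd typically get back (names are illustrative, not from any specific model):

```python
def print_sum(a: float, b: float) -> float:
    """Compute and print the sum of two numbers."""
    total = a + b
    print(total)
    return total

print_sum(2, 3)  # prints 5
```

A task like this has a single obvious specification and fits entirely in the model's context, which is exactly why it succeeds; "build an operating system" has neither property.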

[–] Sanguine@lemmy.dbzer0.com 10 points 2 hours ago (1 children)

Sounds like you forgot to instruct it to do a good job.

[–] Dindonmasker@sh.itjust.works 2 points 33 minutes ago

"If you do anything else then what i asked your mother dies"

[–] Squizzy@lemmy.world 8 points 2 hours ago

We moved to M365 and were encouraged to try the new features. I gave Copilot an Excel sheet and told it to add 5% to each percentage in column B, without going over 100%. It spat out jumbled-up data, all reading 6000%.
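The requested transformation is trivial to state precisely. A hedged sketch in Python, assuming "add 5%" means adding five percentage points and clamping at 100 (the names here are mine, not from the original spreadsheet):

```python
def bump_percent(values, step=5.0, cap=100.0):
    """Add `step` percentage points to each value, clamping at `cap`."""
    return [min(v + step, cap) for v in values]

print(bump_percent([10.0, 97.5, 100.0]))  # [15.0, 100.0, 100.0]
```

One comprehension with a clamp; there is no reading of the instruction under which 6000% is a plausible result.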

[–] Vanilla_PuddinFudge@infosec.pub 2 points 45 minutes ago

AI assumes too fucking much. I used it to set up a new 3D printer with Klipper to save some searching.

Half the shit it pulled down was Marlin-oriented, and then it had the gall to blame the config it gave me, as if I'd written it.

"motherfucker, listen here..."