this post was submitted on 23 Dec 2025
609 points (97.7% liked)

Technology

[–] UnderpantsWeevil@lemmy.world 55 points 21 hours ago (1 children)

A computer is a machine that makes human errors at the speed of electricity.

[–] MountingSuspicion@reddthat.com 27 points 20 hours ago (2 children)

I think one of the big issues is that it often makes nonhuman errors. Sometimes I forget a semicolon or make a typo, but I'm well equipped to handle that; in fact, most tooling (compilers, linters, IDEs) already catches that kind of issue automatically. AI is more likely to generate code that's hard to follow and therefore harder to check, which makes debugging more difficult.
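That kind of mechanical slip really is caught automatically. A minimal illustration using Python's stdlib `ast` parser (the buggy snippet here is invented for the example):

```python
import ast

# A classic human slip: a missing colon after the condition.
buggy = "if x == 1\n    print(x)"

try:
    ast.parse(buggy)
    caught = False
except SyntaxError:
    # The parser rejects the code before it ever runs,
    # so this class of error never reaches debugging.
    caught = True

print("caught:", caught)  # caught: True
```

A subtly wrong but syntactically valid AI-generated function sails straight past this kind of check, which is the asymmetry being described.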

[–] UnderpantsWeevil@lemmy.world 2 points 19 hours ago

> AI is more likely to generate code that's hard to follow and therefore harder to check.

Sure. It's making the errors faster and at a far higher volume than any team of humans could in twice the time. The technology behind inference is literally an iterative process of turning gibberish into something that resembles human text, so it's sort of a speed run from baby babble to college-level software design by trial, evaluation, and correction, over and over again.
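The "trial, evaluation, correction, over and over" loop can be caricatured with a toy hill-climber (this is not how transformer inference actually works; every name here is invented for illustration):

```python
import random

random.seed(0)
TARGET = "hello world"  # stand-in for "text the evaluator scores as likely"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(s):
    # Toy "evaluation": count positions that already look right.
    return sum(a == b for a, b in zip(s, TARGET))

# Trial zero: pure gibberish.
text = "".join(random.choice(ALPHABET) for _ in TARGET)

while score(text) < len(TARGET):
    # Propose a random one-character change...
    i = random.randrange(len(TARGET))
    candidate = text[:i] + random.choice(ALPHABET) + text[i + 1:]
    # ...and keep it only if it doesn't make things worse.
    if score(candidate) >= score(text):
        text = candidate

print(text)  # "hello world"
```

The catch, as the next paragraph notes, is that the real evaluator was trained on human code, errors included, so converging on "what looks right" also converges on human-style mistakes.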

But because the baseline human code it learned from is itself full of errors, the estimate you get at the end of the process will scatter errant semicolons (and far more esoteric coding errors) through the body of the program, at roughly the rate humans would produce similar errors over a much longer timeline.

[–] 5too@lemmy.world 2 points 13 hours ago

Also, it seems like it'd be a lot harder to modify or extend later.