pyre@lemmy.world 18 points 20 hours ago

From what I've seen so far, I think I can safely say the only thing AI can truly replace is CEOs.

r0ertel@lemmy.world 3 points 16 hours ago

I was thinking about this the other day, and I don't think it will happen any time soon. The people who put the CEO in charge (usually the board members) want someone who will make decisions (decisions the board has a say in), but also someone to hold accountable when those decisions don't produce profits.

AI is unaccountable in any real sense of the word.

pajam@lemmy.world 3 points 1 hour ago

> AI is unaccountable in any real sense of the word.

Doesn't stop companies from trying to deflect accountability onto AI. Citations Needed recently did an episode all about this: https://citationsneeded.medium.com/episode-217-a-i-mysticism-as-responsibility-evasion-pr-tactic-7bd7f56eeaaa

r0ertel@lemmy.world 1 point 1 hour ago

I suppose that makes perfect sense. A corporation is an accountability sink for owners, board members, and executives, so why not make AI the accountability sink too?

I was thinking more along the lines of the "human in the loop" model for AI, where one human is held responsible for everything the AI gets wrong, despite it being physically impossible to review every line of code an AI produces.