Programmer Humor

Apparently a page from an internal IBM training manual. Some further attempts at sourcing it

[–] onnekas@sopuli.xyz 4 points 12 hours ago* (last edited 12 hours ago) (2 children)

I generally agree.

Imagine, however, that a machine objectively makes better decisions than any person. Should we then still trust the human's decision just to have someone who is accountable?

What is the worth of having someone who is accountable anyway? Isn't accountability just an incentive for humans not to fuck things up? It's also nice for pointing fingers when things go bad, but is there actually any value in that?

Additionally: there is always a person who either made the machine or deployed it. IMO the people who deploy a machine and decide that it will now be making decisions should be accountable for those actions.

[–] Maroon@lemmy.world 6 points 12 hours ago (2 children)

Imagine, however, that a machine

That's hypothetical. In the real world, in human society, the humans who are part of corporations and profit from making/selling these computers must also bear the responsibility.

[–] onnekas@sopuli.xyz 2 points 11 hours ago

I believe those who deploy the machines should bear primary responsibility. The corporations that make/sell those machines should be accountable if they deceptively and intentionally program them to act maliciously or in somebody else's interest.

[–] calcopiritus@lemmy.world 2 points 10 hours ago (1 children)

Tbf that leads to the problem of:

A company or individual makes a program that is in no way meant for making management decisions.

Someone else comes along and deploys that program to make management decisions.

The ones who made the program couldn't stop the ones who deployed it from doing so.

Even if the maker set out to build a decision-making program and marketed it as such, whoever deployed it is ultimately responsible for it. As long as the maker doesn't fake tests or certifications, of course; I'm sure that would violate plenty of laws.

[–] ZombiFrancis@sh.itjust.works 2 points 9 hours ago

The premise is that a computer must never make a management decision. Making a program capable of management decisions is already a failure of that premise. The deployment and use of that program to that end is built upon that failure.

Imagine, however, that a machine objectively makes better decisions than any person.

You can't know whether a decision is good or bad without a person to evaluate it. The situation you're describing isn't possible.

the people who deploy a machine [...] should be accountable for those actions.

How is this meaningfully different from just having them make the decisions in the first place? Are they too stupid?