this post was submitted on 16 Sep 2025
128 points (100.0% liked)

Ask Lemmy


If AI ends up running companies better than people, won’t shareholders demand the switch? A board isn’t paying a CEO $20 million a year for tradition; they’re paying for results. If an AI can do the job cheaper and get better returns, investors will force it.

And since corporations are already treated as “people” under the law, replacing a human CEO with an AI isn’t just swapping a worker for a machine, it’s one “person” handing control to another.

That means CEOs would eventually have to replace themselves, not because they want to, but because the system leaves them no choice. And AI would be considered a "person" under the law.

top 50 comments
[–] CMDR_Horn@lemmy.world 30 points 11 hours ago (2 children)

Several years ago I read an article that went into great detail on how LLMs are perfectly poised to replace C-levels in corporations. It went on to talk about how, by the nature of their design, they essentially do that exact thing right off the bat: take large amounts of data and make strategic decisions based on that data.

I wish I could find it to back this up, but regardless, ever since then I've been waiting for this watershed moment to hit across the board...

[–] Soleos@lemmy.world 16 points 11 hours ago (6 children)

They... don't make strategic decisions... That's part of why we hate them, no? And we lambast AI proponents because they pretend they do.

[–] turdas@suppo.fi 32 points 10 hours ago (4 children)

The funny part is that I can't tell whether you're talking about LLMs or the C-suite.

[–] turkalino@lemmy.yachts 4 points 10 hours ago (1 children)

They do indeed make strategic decisions, just only in favor of the short term profits of shareholders. It’s “strategy” that a 6 yr old could execute, but strategy nonetheless

[–] Soleos@lemmy.world 1 points 6 hours ago

This is closer to what I mean by strategy and decisions: https://matthewdwhite.medium.com/i-think-therefore-i-am-no-llms-cannot-reason-a89e9b00754f

LLMs can be helpful for informing strategy, and for simulating strings of words that may be perceived as a strategic choice, but they don't have their own goal-oriented vision.

[–] OboTheHobo@ttrpg.network 3 points 8 hours ago

I'd argue they do make strategic decisions; it's just that the strategy is always increasing quarterly earnings and their own assets.

[–] Yezzey@lemmy.ca 2 points 10 hours ago
[–] MagicShel@lemmy.zip 2 points 10 hours ago

You're right. But then look at Musk. If anyone was ripe for replacement with AI, it's him.

[–] Yezzey@lemmy.ca 2 points 11 hours ago

It's inevitable.

[–] fadingembers@lemmy.blahaj.zone 22 points 8 hours ago (2 children)

Y'all are all missing the real answer. CEOs have class solidarity with shareholders. Think about how they all reacted to the death of the UnitedHealthcare CEO. They'll never get rid of them because they're one of them. Rich people all have a keen awareness of class consciousness and great loyalty to one another.

Us? We're expendable. They want to replace us with machines that can't ask for anything and don't have rights. But they'll never get rid of one of their own. Think about how few CEOs get fired no matter how poor of a job they do.

P.S. The idea that their high pay is compensation for risk is a myth. Ever heard of a thing called the golden parachute? CEOs never pay for their failures. In fact, when they run a company into the ground, they're usually the ones who receive the biggest payouts. Not the employees.

[–] Yezzey@lemmy.ca 4 points 8 hours ago (1 children)

Loyalty lasts right up until the math says otherwise.

[–] fadingembers@lemmy.blahaj.zone 1 points 6 hours ago

The math has never made sense for CEOs

[–] blarghly@lemmy.world 1 points 4 hours ago

Wouldn't they just remove the CEO from their role and they would just become another rich shareholder?

[–] jordanlund@lemmy.world 8 points 10 hours ago

Should be way easier to replace a CEO. No need for a golden parachute: if the AI fails, you just turn it off.

But I'd imagine right now you have CEOs being paid millions and using an AI themselves. Worst of both worlds.

[–] Bongles@lemmy.zip 7 points 9 hours ago (1 children)

AI? Yes, probably. Current AI? No. I do think we'll see it happen with an LLM, and that company will probably flop. Shit, how do you even prompt for that?

[–] Yezzey@lemmy.ca 1 points 9 hours ago (1 children)

It'll take a few years, but it progresses exponentially; it will get there.

[–] NoneOfUrBusiness@fedia.io 2 points 9 hours ago (2 children)

It progresses logistically; eventually it'll plateau, and there's no reason to believe that plateau will come after "can do everything a human can". See: https://www.promptlayer.com/research-papers/have-llms-hit-their-limit
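
For concreteness, a quick sketch of that point (my illustration, not taken from the linked paper): a logistic curve with ceiling $L$, growth rate $k$, and midpoint $t_0$ looks exponential early on and flat near the end:

$$f(t) = \frac{L}{1 + e^{-k(t - t_0)}}, \qquad f(t) \approx L\, e^{k(t - t_0)} \text{ for } t \ll t_0, \qquad f(t) \to L \text{ as } t \to \infty$$

So an early exponential-looking trend by itself tells you nothing about where the ceiling $L$ sits.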

[–] FaceDeer@fedia.io 1 points 8 hours ago (1 children)

Sure, but we don't know where that plateau will come, and until we get close to it, progress looks approximately exponential.

We do know that it's possible for AI to reach at least human levels of capability, because we have an existence proof (humans themselves). Whether stuff based off of LLMs will get there without some sort of additional new revolutionary components, we can't tell yet. We won't know until we actually hit that plateau.

[–] magiccupcake@lemmy.world 3 points 7 hours ago (1 children)

Current AI has no shot of being as smart as humans; it's simply not sophisticated enough.

And that's not to say that current LLMs aren't impressive, they are, but the human brain is just on a whole different level.

And just to think about it on a basic level: LLM inference can run off a few GPUs, on the order of 100 billion transistors. That's roughly on par with the number of neurons, but each neuron has an average of 10,000 connections, and those connections are capable of rewiring themselves to new neurons.

And there are so many distinct types of neurons, with over 10,000 unique proteins.

On top of that, there are over a hundred neurotransmitters, and we're not even sure we've identified them all.

And all of that is still connected to a system that integrates all of our senses, while current AI is pure text, with separate parts bolted onto it for other things.
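
Rough back-of-the-envelope with those figures (my arithmetic, using the commonly cited ~86 billion neurons):

$$(8.6 \times 10^{10} \text{ neurons}) \times (10^{4} \text{ synapses/neuron}) \approx 8.6 \times 10^{14} \text{ connections} \gg 10^{11} \text{ transistors}$$

That's a gap of roughly four orders of magnitude, before even counting the rewiring, protein, and neurotransmitter diversity mentioned above.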

[–] FaceDeer@fedia.io 1 points 7 hours ago

The human brain is doing a lot of stuff that's completely unrelated to "being intelligent." It's running a big messy body, it's supporting its own biological activity, it's running immune system operations for itself, and so forth. You can't directly compare their complexity like this.

It turns out that some of the thinky things that humans did with their brains that we assumed were hugely complicated could be replicated on a commodity GPU with just a couple of gigabytes of memory. I don't think it's safe to assume that everything else we do is as complicated as we thought either.

[–] Yezzey@lemmy.ca 1 points 9 hours ago

I've had too many beers to read that.

[–] flandish@lemmy.world 6 points 11 hours ago (2 children)

In all dialectical seriousness, if it appeases the capitalists, it will happen. "First they came with AI for the help desk…" kind of logic here. Some sort of confluence of Idiocracy and The Matrix will be the outcome.

[–] Ging@anarchist.nexus 2 points 10 hours ago

You mean dialectical whimsiness

[–] Yezzey@lemmy.ca 2 points 11 hours ago* (last edited 10 hours ago)

Love that term, "dialectical seriousness"; have to admit I had to look it up :)

[–] ArgumentativeMonotheist@lemmy.world 5 points 9 hours ago (1 children)

No, because someone has to be the company's scapegoat... but if the ridiculous post-truth tendencies of some societies increase, then maybe "AI" will indeed gain "personhood", and in that case, maybe?

[–] Yezzey@lemmy.ca 3 points 9 hours ago

I don't see any other future.

[–] AmidFuror@fedia.io 4 points 1 hour ago

Companies never outsourced the CEO position to countries which traditionally have lower CEO salaries but plenty of competency (e.g. Japan), so they won't do this either. It's because CEOs are controlled by boards, and the boards are made up of CEOs from other companies. They have a vested interest in human CEOs with inflated salaries.

[–] Dave@lemmy.nz 3 points 11 hours ago (2 children)

From what people on Lemmy say, a CEO (and board) isn't there to do a good job; they are there to be a fall guy if something goes wrong, protecting shareholders from prosecution. Can AI do that?

[–] Witchfire@lemmy.world 5 points 11 hours ago (1 children)

It can do so even better than a human. They would just announce a patch for it

[–] Dave@lemmy.nz 1 points 9 hours ago (1 children)

That's brilliant! So long as the AI company has a board to take the fall for any big AI mistakes.

[–] Yezzey@lemmy.ca 2 points 8 hours ago

AI will assess all risks and make a bet; if it fails, it will have a fund available to compensate the losses.

[–] Yezzey@lemmy.ca 4 points 11 hours ago (1 children)

I guess in theory there would be no need for a fall guy as AI would cover all angles.

[–] Dave@lemmy.nz 1 points 9 hours ago (1 children)

But the fall guy is for things they know they shouldn't do. They aren't trying to only do the things they should.

[–] Yezzey@lemmy.ca 2 points 9 hours ago

Evil companies will have evil AI

[–] LadyMeow@lemmy.blahaj.zone 3 points 11 hours ago (1 children)

Isn't this sorta paradoxical? Like, either CEOs are actually worth the insane money they make, or a Palm Pilot could replace them, but somehow they are paid ridiculous amounts for… what?

[–] Soleos@lemmy.world 3 points 10 hours ago (1 children)

No, it's not paradoxical. You are conflating time points.

I won't debate the "value" of CEOs, but in this system, their value is subject to market conditions like any other. Human computers were valued much more before electronic computers were created. Aluminum was worth more than gold before a fast and cheap extraction process was invented.

You could not replace a CEO with a Palm Pilot 10 years ago.

[–] LadyMeow@lemmy.blahaj.zone 1 points 10 hours ago (1 children)

I guess I was being a bit over the top; the CEOs are the capitalists. I guess it's possible they are doing their job with LLMs now, but just behind the scenes. Like, either they are worth what they are paid, or the system is broken AF and it doesn't matter.

I just don’t see them being replaced in any meaningful way.

[–] flandish@lemmy.world 3 points 10 hours ago (1 children)

CEOs may not be the capitalists at the top of a particular food chain. The shareholding board is, for instance. They can be both but there are plenty of CEO level folks who could, with a properly convinced board, be replaced all nimbly bimbly and such.

[–] LadyMeow@lemmy.blahaj.zone 1 points 9 hours ago (1 children)

I guess, but they sure shovel plenty of money at say… Musk. So what? Is he worth a trillion? It seems the boards could trim a ton of money if ceos did nothing. Or they do lots and it’s all worth it. Who’s to say.

I just don’t see LLMs as the vehicle to unseat CEOs, or maybe I’m small minded idk.

Musk is a shareholder. He owns large parts of the companies he's the CEO of.

[–] blarghly@lemmy.world 3 points 4 hours ago

If AI ends up running companies better than people, won’t shareholders demand the switch?

Yes. It might be unorthodox at first, but they could just take a vote, and poof, done.

And since corporations are already treated as “people” under the law, replacing a human CEO with an AI isn’t just swapping a worker for a machine, it’s one “person” handing control to another.

Wat?

No. What?

So you just used circular logic to make the AI a "person"... maybe you're saying once it is running the corporation, it is the corporation? But no.

Anyway, corporations are "considered people" in the US under the logic that corporations are, at the end of the day, just collections of people. So you can, say, go to a town hall to voice your opinion as an individual. And you can gather up all your friends to come with you, and form a bloc which advocates for change. You might gain a few more friends, and give your group a name, like "The Otter Defence League." In all these scenarios, you and others are using your right to free speech as a collective unit. Citizens United just says that this logic also applies to corporations.

That means CEOs would eventually have to replace themselves

CEOs wouldn't have to "replace themselves" any more than you have to find a replacement if your manager fires you from Dairy Queen.

[–] ArbitraryValue@sh.itjust.works 2 points 11 hours ago* (last edited 11 hours ago) (1 children)

You're mixing up corporate personhood and the CEO's own personhood. He isn't the corporation. Ultimately, he's just an employee. There's no good reason for the board of directors to pay him if a machine can do a better job while costing less. I'm not sure why you might think that wouldn't happen.

[–] Yezzey@lemmy.ca 2 points 11 hours ago (1 children)

No, but the corporation is the person; the CEO hands it over to an AI, which then becomes a person.

[–] ArbitraryValue@sh.itjust.works 7 points 10 hours ago* (last edited 10 hours ago) (1 children)

You might want to read more about corporate personhood. It doesn't mean that the corporation is considered by the law to be a person, or that whoever or whatever performs the duties of the CEO is by definition a person. It means that a corporation, despite not being a person, has certain rights usually associated with people. For example, a person can own property or be sued. A cat cannot own property or be sued. A corporation is like a person rather than a cat in that it can also own property or be sued. There's debate about exactly which rights should be granted to corporations, but the idea that a corporation has at least some minimal set of rights is centuries old and an essential part of the very definition of what a corporation is.

[–] Yezzey@lemmy.ca 3 points 10 hours ago

True but corporate personhood already gives the legal shell. If an AI is actually running the company’s decisions, wouldn’t that be the first time in practice that courts are forced to treat an AI’s choices as the will of a legal person? In effect, wouldn’t that be the first step toward AI being judged as a ‘person’ under law?

[–] motor_spirit@lemmy.world 2 points 11 hours ago

It's what republicans need, yes

[–] normalexit@lemmy.world 2 points 39 minutes ago

I could imagine a world where whole virtual organizations are spun up and just run in the background, creating products, marketing them, doing customer support, etc.

Right now the technology doesn't seem there yet, but it has been rapidly improving, so we'll see.

I could definitely see rich CEOs funding the creation of a "celebrity" bot that answers questions the way they do. Maybe with their likeness and voice, so they can keep running companies from beyond the grave. Throw it in one of those humanoid robots and they can keep preaching the company mission until the sun burns out.

What a nightmare.
