this post was submitted on 26 Oct 2025
358 points (90.5% liked)

Technology


I came across this article in another Lemmy community that dislikes AI. I'm reposting instead of cross-posting so that we can have a conversation about how "work" might be changing with advancements in technology.

The headline is clickbaity: Altman was referring to how farmers who lived decades ago might look at the work "you and I do today" (Altman included) and decide that it doesn't look like work.

The fact is that most of us now work many levels of abstraction removed from human survival. Very few of us are farming, building shelters, protecting our families from wildlife, or doing the back-breaking labor that humans were forced to do generations ago.

In my first job, IT support, it was not lost on me that all day long I pushed buttons to make computers beep in friendlier ways. There was no physical result to see: no produce to harvest, no pile of wood transitioned from its natural state to a chopped one, nothing tangible to step back and enjoy at the end of the day.

Bankers, fashion designers, artists, video game testers, software developers, and countless other professions experience something quite similar. Yet all of these jobs add value to the human experience in some way.

As humanity's core needs have been met by technology requiring ever fewer human inputs, our focus has been able to shift to creating value in less tangible, but not necessarily less meaningful, ways. This has created a more dynamic and rich life experience than any of those farming generations could have imagined. So while it may not look like the work those farmers were accustomed to, humanity has been able to shift its attention to other types of work for the benefit of many.

I postulate that AI, as we know it now, is merely another technological tool that will allow new layers of abstraction. At one time bookkeepers had to write entries in physical books; now software automatically encodes accounting transactions as they're made. At one time software developers might spend days setting up the framework of a new project, and now an LLM can do the bulk of the work in minutes.
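To make that bookkeeping example concrete, here is a minimal sketch of the kind of double-entry encoding that point-of-sale software now performs automatically; the account names and tax handling are hypothetical placeholders, not taken from any real accounting package:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class JournalEntry:
    """One double-entry record: total debits must equal total credits."""
    entry_date: date
    description: str
    debits: dict = field(default_factory=dict)   # account -> amount
    credits: dict = field(default_factory=dict)  # account -> amount

    def balanced(self) -> bool:
        return round(sum(self.debits.values()), 2) == round(sum(self.credits.values()), 2)


def record_sale(amount: float, tax_rate: float) -> JournalEntry:
    """Encode a cash sale automatically, the way a clerk once did by hand."""
    tax = round(amount * tax_rate, 2)
    entry = JournalEntry(
        entry_date=date.today(),
        description=f"Cash sale of {amount:.2f}",
        debits={"Cash": amount + tax},
        credits={"Sales Revenue": amount, "Sales Tax Payable": tax},
    )
    assert entry.balanced(), "debits and credits must match"
    return entry


print(record_sale(100.00, 0.07))
```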

These days we have fewer bookkeepers; most companies don't need armies of clerks anymore. But now we have more data analysts who work to understand the information and make important decisions. In the future we may need fewer software coders, and in turn there will be many more software projects seeking to solve new problems in new ways.

How do I know this? History shows that innovations in technology always bring new problems to be solved. There is an endless reservoir of challenges that previous generations didn't have time to think about. We are going to free minds from tasks that can be automated, and many of those minds will move on to the next level of abstraction.

At the end of the day, I suspect we humans are biologically wired with a deep desire to produce rewarding and meaningful work, and much of the result of our abstracted work is hard to see and touch. Perhaps this is why I enjoy mowing my lawn so much, no matter how advanced robotic lawn mowers become.

top 50 comments
[–] supersquirrel@sopuli.xyz 119 points 5 days ago (1 children)

Starting this conversation with Sam Altman is like showing up at a funeral in a clown car

[–] Darkcoffee@sh.itjust.works 34 points 5 days ago (1 children)

Or showing up at a strip club with communion wafers

[–] supersquirrel@sopuli.xyz 8 points 5 days ago* (last edited 5 days ago) (1 children)

Or both, not a singularity, but a duality

[–] Darkcoffee@sh.itjust.works 8 points 5 days ago

So long as we're not engaging with someone quoting Altman, I'm good with anything.

[–] 6nk06@sh.itjust.works 98 points 5 days ago (9 children)

At one time software developers might spend days setting up the framework of a new project, and now an LLM can do the bulk of the work in minutes.

No and no. Have you ever coded anything?

[–] kescusay@lemmy.world 31 points 5 days ago (3 children)

Yeah, I have never spent "days" setting anything up. Anyone who can't do it without spending "days" struggling with it is not reading the documentation.

[–] HarkMahlberg@kbin.earth 52 points 5 days ago (1 children)

Ever work in an enterprise environment? Sometimes a single talented developer cannot overcome the calcification of hundreds of people over several decades who care more about the optics of work than actual work. Documentation can't help if it's non-existent or 20 years old. Documentation cannot make teams that don't believe in automation adopt Docker.

Not that I expect Sam Altman to understand what it's like working in a dumpster-fire company; the only job he's ever held is pouring gasoline.

[–] killeronthecorner@lemmy.world 7 points 5 days ago* (last edited 5 days ago)

Dumpster-fire companies are the ones he's targeting, because they're the most likely to look for quick and cheap ways to fix the symptoms of their problems, and the most likely to want to replace their employees with automation.

[–] galaxy_nova@lemmy.world 12 points 5 days ago (3 children)

You guys are getting documentation?

[–] vacuumflower@lemmy.sdf.org 8 points 5 days ago

Sometimes documentation is inconsistent.

[–] Telorand@reddthat.com 58 points 5 days ago

Cool, know what job could easily be wiped out? Management. Sam Altman is a manager.

Therefore, Sam Altman doesn't do real work. Fuck you, asshole.

[–] mp3@lemmy.ca 44 points 5 days ago* (last edited 5 days ago) (1 children)

CEO isn't an actual job either; it's just the 21st century's titre de noblesse.

[–] Dojan@pawb.social 40 points 5 days ago (7 children)

At one time software developers might spend days setting up the framework of a new project, and now an LLM can do the bulk of the work in minutes.

I'd not put an LLM in charge of developing a framework that's meant to be used in any sort of production environment. And if we're talking about setting up the skeleton of a project, templates have already been around for decades; see the sketch below. You also don't really set up new projects all that often.
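For illustration, a minimal sketch of what such a template amounts to; the file names and contents here are made up, standing in for what real scaffolding tools (cookiecutter, cargo new, npm init, ...) generate with far more polish:

```python
from pathlib import Path

# A toy project template: relative path -> file contents, with {name} holes.
# The point is how little magic is involved in stamping out a skeleton.
TEMPLATE = {
    "README.md": "# {name}\n\nA new project.\n",
    "pyproject.toml": '[project]\nname = "{name}"\nversion = "0.1.0"\n',
    "src/{name}/__init__.py": "",
    "tests/test_{name}.py": "def test_placeholder():\n    assert True\n",
}


def scaffold(name: str, root: Path) -> None:
    """Stamp out a project skeleton in milliseconds, not days."""
    for rel_path, contents in TEMPLATE.items():
        target = root / rel_path.format(name=name)
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(contents.format(name=name))


scaffold("myproject", Path("demo"))
```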

[–] Passerby6497@lemmy.world 9 points 5 days ago (1 children)

Fuck, I barely let AI write functions in my code, because half the time the fuckin idiot can't even guess the correct method name and parameters, even though it could pull up the goddamned help page like I can, or at least Google the basic syntax.

[–] LodeMike@lemmy.today 39 points 5 days ago (1 children)

Says the guy who hasn't worked a day in his life

[–] nucleative@lemmy.world 24 points 5 days ago (2 children)

From the article:

“The thing about that farmer,” Altman said, is not only that they wouldn’t believe you, but “they very likely would look at what you do and I do and say, ‘that’s not real work.'”

I think he pretty much agrees with you.

[–] Curious_Canid@lemmy.ca 28 points 5 days ago (1 children)

Sam Altman is a huckster, not a technologist. As such, I don't really care what he says about technology. His purpose has always been to transfer as much money as possible from investors into his own pocket before the bubble bursts. Anything else is incidental.

I am not entirely writing off LLMs, but very little of the discussion about them has been rational. They do some things fairly well and a lot of things quite poorly. It would be nice if we could just focus on the former.

[–] MonkderVierte@lemmy.zip 27 points 5 days ago* (last edited 5 days ago) (17 children)

Psychologically speaking, please stop calling it AI; the term raises unrealistic expectations. They are Large Language Models.

[–] FireWire400@lemmy.world 20 points 5 days ago

Raising unrealistic expectations is what companies like OpenAI are all about

[–] SapphironZA@sh.itjust.works 27 points 4 days ago (2 children)

Executive positions are probably the easiest to replace with AI.

  1. AI will listen to the employees.
  2. They will try to be helpful by providing context and perspective based on information the employee might not have.
  3. They will accept being told they are wrong and update their advice.
  4. They will leave the employee to get the job done, trusting that the employee will get back to them if they need more help.
[–] SocialMediaRefugee@lemmy.world 23 points 4 days ago (1 children)

What do we need the mega-rich for anyway? They aren't creative, and at this point they could easily be replaced with AI.

[–] TheFogan@programming.dev 23 points 5 days ago (1 children)

You know what, he actually wouldn't be horrifically wrong if he were pushing for something there. Let's say, hypothetically, that our jobs aren't real work and it's no big deal that they get replaced. The original intent of technological progress was that when the ratio of work-to-be-done to people shifts, we'd work less for more pay. But no, we just capitalism it and say "labor is in high supply, so we need to cut its price until people can find a use for it".

[–] ZoteTheMighty@lemmy.zip 11 points 5 days ago (1 children)

I feel like he's really onto something about real work, but he's missing the point of society. The purpose of our economy is to employ everyone, thus minimizing the negative societal effects of supporting unemployed people, and enabling people to improve their lives. If you optimize a society to produce more GDP by firing people, you're subtracting value, not adding it.

[–] squaresinger@lemmy.world 10 points 5 days ago

I think you are a step further down in the a/b problem tree.

The purpose of society is that everyone can have a safe, stable and good life. In our current setup this requires that most people are employed. But that's not a given.

Think of a hypothetical society where AI/robots do all the work. There would be no need to employ everyone to do work to support unemployed people.

We are slowly moving in that direction, but the problem is that our capitalist society isn't fit for that setup. In our capitalist setup, removing the need for work means making people unemployed, who then "need to be supported" while the rich who own the robots/AI benefit without putting in any work at all.

[–] billwashere@lemmy.world 20 points 4 days ago

Sam, I say this with all my heart…

Fuck you very kindly. I’m pretty sure what you do is not “a real job” and should be replaced by AI.

[–] cupcakezealot@piefed.blahaj.zone 18 points 5 days ago (1 children)

says the guy who never did real work in his life

[–] sobchak@programming.dev 16 points 4 days ago (4 children)

The problem is that the capitalist investor class, by and large, determines what work will be done, what kinds of jobs there will be, and who will work those jobs. They are becoming increasingly out of touch with reality as their wealth and power grow, and they seem to be trying to mold the world into something along the lines of what Curtis Yarvin advocates, which most people would consider very dystopian.

This discussion is also ignoring the fact that currently, 95% of AI projects fail, and studies show that LLM use hurts the productivity of programmers. But yeah, there will almost surely be breakthroughs in the future that will produce more useful AI tech; nobody knows what the timeline for that is though.

[–] Tollana1234567@lemmy.today 9 points 4 days ago

It's also hurting students currently in HS and college; they are learning less than before.


If OpenAI gets wiped out, maybe it wasn’t even a “real company” to start with

[–] SocialMediaRefugee@lemmy.world 15 points 4 days ago

Can't AI replace Sam Altman?

[–] FauxPseudo@lemmy.world 14 points 5 days ago

He doesn't know Jobs was wiped out by cancer?

[–] DupaCycki@lemmy.world 13 points 4 days ago* (last edited 4 days ago) (7 children)

To be fair, a lot of jobs in capitalist societies are indeed pointless. Some of them even actively do nothing but subtract value from society.

That said, people still need to make a living, and his piece-of-shit artificial insanity is only making that more difficult. How about we stop starving people to death and propose solutions to the problem instead?

[–] technocrit@lemmy.dbzer0.com 11 points 5 days ago* (last edited 5 days ago) (4 children)

creating value

This kind of pseudo-science is a problem.

There is no such thing as "value". People serve capital so they don't starve to death. There will always be a need for servants. In particular capital needs massive guard labor to violently enforce privilege and inequality.

The technologies falsely hyped as "AI" are no different. It's just another computer program used by capital to hoard privilege and violently control people. The potential for unemployment is mostly just more bullshit. These grifters are literally talking about how "AI" will battle the anti-christ. Insofar as some people might maybe someday lose some jobs, that's been the way capitalism works for centuries. The poor will be enlisted, attacked, removed, etc. as usual.

[–] MangoCats@feddit.it 11 points 5 days ago

I have been working with computers, and networks, and the internet since the 1980s. Over this span of 40-ish years, "how I work" has evolved dramatically through changes in how computers work and more dramatically through changes in information availability. In 1988 if you wanted to program an RS-232 port to send and receive data, you read books. You physically traveled to libraries, or bookstores - maybe you might mail order one, but that was even slower. Compared to today the relative costs to gain the knowledge to be able to perform the task were enormous, in time invested, money spent, and physical resources (paper, gasoline, vehicle operating costs).

By 20 years ago, the internet had reformulated that equation tremendously. Near instant access to worldwide data, organized enough to be easier to access than a traditional library or bookstore, and you never needed to leave your chair to get it. There was still the investment of reading and understanding the material, and a not insignificant cost of finding the relevant material through search, but the process was accelerated from days or more to hours or less, depending on the nature of the learning task.

A year ago, AI hallucination rates made them curious toys for me - too unreliable to be of net practical value. Today, in the field of computer programming, the hallucination rate has dropped to a very interesting point: almost the same as working with a not-so-great but still useful human colleague. The difference being: where a human colleague might take 40 hours to perform a given task (not that the colleague is slow, just it's a 40 hour task for an average human worker), the AI can turn around the same programming task in 2 hours or less.

Humans make mistakes, they get off on their own tracks and waste time following dead ends. This is why we have meetings. Not that meetings are the answer to everything, but at least they keep us somewhat aware of what other members of the team are doing. That not-so-great programmer working on a 40-hour task is much more likely to create a valuable product if you check in with them every day or so, see "how's it going", help them clarify points of confusion, and check their understanding and direction of the work completed so far. That's 4 check points of 15 minutes to an hour in the middle of the 40-hour process. My newest AI colleagues are ripping through those 40-hour tasks in 2 hours, which is impressive, but when I don't put in the additional 2 hours of managing them through the process, they get off the rails, wrapped around the axle, unable to finish a perfectly reasonable task because their limited context windows don't keep all the important points in focus throughout the process. A bigger difficulty is that I don't get 23 hours of "offline wetware processing" between touch points to refine my own understanding of the problems and desired outcomes.

Humans have developed software development processes to help manage human shortcomings: limited attention spans and memory. We still out-perform AI on this context-window-span thing, but we have our own non-zero hallucination rates. Asking an AI chatbot to write a program one conversational prompt at a time only gets me so far. Providing an AI with a more mature software development process to follow gets much farther. The AI isn't following these processes (which it helped translate from human concepts into its own language of workflows, skills, etc.) 100% perfectly; I catch it skipping steps in simple 5-step workflows, but as with human procedures, there's a closed-loop procedure-improvement procedure to help it perform better in the future.

Perhaps most importantly, the procedures constantly remind the AI to be "self-aware" of its context-window limitations, to do RAG (retrieval-augmented generation) over best practices for context management, and to DRY ("don't repeat yourself": keep single points of truth and reference them) its own procedures and the documentation it generates. Will I succeed in having AI rebuild a six-month project I did five years back, doing it better this time, expanding its scope to what would have been a year-long development effort if I had continued solo? Unclear. I'm two weeks in and I feel like I'm about where I was after two weeks of development last time, but it also feels like I have a better foundation to complete the bigger scope this time using the AI tools, and there's that tantalizing possibility that at any point now it might just take off and finish it by itself.
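A rough sketch of the kind of context-window budgeting described above; the window size is an assumption for illustration, and a character-count heuristic stands in for a real tokenizer:

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English prose.
    # A real tokenizer from the model vendor should replace this.
    return max(1, len(text) // 4)


def fits_context(prompt_parts: list, context_window: int, reply_reserve: int = 1024) -> bool:
    """Check that procedure + task + history fit the window, leaving room for the reply."""
    used = sum(estimate_tokens(part) for part in prompt_parts)
    return used + reply_reserve <= context_window


# Hypothetical numbers: a workflow document, task description, and chat history
# checked against an 8192-token window before anything is sent to the model.
parts = [
    "Workflow: 1) restate the task 2) plan 3) implement 4) test 5) summarize",
    "Task: refactor the parser module without changing its public API.",
    "History: ...",
]
print(fits_context(parts, context_window=8192))
```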

[–] remon@ani.social 10 points 4 days ago (1 children)

This guy needs to find Luigi.

[–] Alphane_Moon@lemmy.world 10 points 5 days ago

I am starting to dislike Altman spam more than Elmo spam.

Regarding the philosophical points, there is some truth to the arguments, but one thing is absolutely certain (you need zero knowledge of "AI" services to know it): you can't trust Americans in such matters.

[–] squaresinger@lemmy.world 9 points 5 days ago (2 children)

I agree with the sentiment, as bad as it feels to agree with Altman about anything.

I'm working as a software developer, working on the backend of the website/loyalty app of some large retailer.

My job is entirely useless. I mean, I'm doing a decent job keeping the show running, but (a) management shifts priorities all the time, and about 2/3 of all the "super urgent" things I work on get cancelled before they're released, and (b) if our whole department instantly disappeared and the app and website were just gone, nobody would care. Like, literally. We have an app and a website because everyone has to have one, not because there's a real benefit to anyone.

The same is true for most of the jobs I've worked, and for most jobs in large corporations.

So if AI could somehow replace all these jobs (which it can't), nothing of value would be lost, apart from the fact that our society requires everyone to have a job, bullshit or not. And these bullshit jobs even tend to be the better-paid ones.

So AI doing the bullshit jobs isn't the problem, but people having to do bullshit jobs to get paid is.

If we all get a really good universal basic income or something, I don't think most people would mind that they don't have to go warm a seat in an office anymore. But since we don't and we likely won't in the future, losing a job is a real problem, which makes Altman's comment extremely insensitive.

[–] Snowclone@lemmy.world 9 points 4 days ago (6 children)

I've worked for big corporations that employ a lot of people. Every job has a metric showing how much money every single task creates. Believe me, they would never pay you if your tasks didn't generate more money than it costs them to pay you to do them.

[–] xxce2AAb@feddit.dk 8 points 5 days ago

That's rich coming from the leader in the field of manufacturing demand out of whole cloth.

[–] Zwuzelmaus@feddit.org 8 points 5 days ago (1 children)

...and still they are throwing money at him, as fast as they can.

Mistake of the century.

[–] lechekaflan@lemmy.world 8 points 4 days ago (1 children)

Thou shalt not make a machine in the likeness of a human mind.

-- The Orange Catholic Bible

Also, that pompous chucklefuck can go fuck himself. There are people who can barely feed themselves on less than a couple of dollars a day.

[–] biofaust@lemmy.world 8 points 4 days ago

That would actually be true if companies were run by the people doing the work.

[–] cmhe@lemmy.world 7 points 5 days ago

Apart from the questionable quality of the results, a big issue to me with LLMs is the way they substitute for human interaction with other humans, which is one of the most fundamental ways humans learn, innovate, and express themselves.

No previous technological innovation replaced human interaction with a facsimile in that way.
