this post was submitted on 28 Dec 2025
305 points (98.4% liked)


US senator Bernie Sanders amplified his recent criticism of artificial intelligence on Sunday, explicitly linking the financial ambition of “the richest people in the world” to economic insecurity for millions of Americans – and calling for a potential moratorium on new datacenters.

Sanders, a Vermont independent who caucuses with the Democratic party, said on CNN’s State of the Union that he was “fearful of a lot” when it came to AI. And the senator called it “the most consequential technology in the history of humanity” that will “transform” the US and the world in ways that had not been fully discussed.

“If there are no jobs and humans won’t be needed for most things, how do people get an income to feed their families, to get healthcare or to pay the rent?” Sanders said. “There’s not been one serious word of discussion in the Congress about that reality.”

top 47 comments
[–] givesomefucks@lemmy.world 62 points 1 day ago (3 children)

“If there are no jobs and humans won’t be needed for most things, how do people get an income to feed their families, to get healthcare or to pay the rent?” Sanders said. “There’s not been one serious word of discussion in the Congress about that reality.”

Because if people could think more than 2 months ahead we wouldn't be in this spot to begin with...

Resource scarcity makes you lose long term planning though, which is why we keep getting squeezed so hard. If people had time to step back and realize where we're headed, we'd change direction really fucking quick.

[–] BlameThePeacock@lemmy.ca 24 points 1 day ago (1 children)

We haven't done it for climate change, so why the hell would people change for AI?

[–] SendMePhotos@lemmy.world 5 points 1 day ago (1 children)

Yeah but we did it on the ozone

[–] SaltySalamander@fedia.io 2 points 8 hours ago (1 children)

That took gov't intervention. Corps certainly didn't choose to stop using those ozone-damaging products out of the collective goodness of their hearts. The current gov't either doesn't have the spine or has no desire whatsoever, depending on which side you're talking about.

[–] jj4211@lemmy.world 2 points 6 hours ago

And we didn't have CFC deniers with huge social media platforms amplifying fringe conspiracy theories into big political movements.

Of course, the dangerous CFCs weren't as critical; if anything, new formulations were easy business opportunities for established players. There's no easy pivot from being a big fossil fuel company to a replacement; any attempt to do so comes with a huge risk of being disrupted by an unexpected competitor.

[–] Fuckfuckmyfuckingass@lemmy.world 2 points 1 day ago (1 children)

Username checks out. I'm so tired, Givesomefucks, so very tired.

[–] givesomefucks@lemmy.world 4 points 1 day ago* (last edited 1 day ago)

On the bright side, this is the most optimistic I've been in 30 years, except for like 6 months after Obama won the first time.

It's a catch-22 where the people legitimately won back the DNC, but the masses are still ignorant of it because the billionaires who own our media don't want anyone to hear about it.

The DNC is giving insane levels of money back to state parties after neoliberals robbed them for a decade with the victory fund grift. We're already seeing the results of that, and midterms are going to be huge.

Then we'll get a fair primary for the presidential, with a very very good chance of a progressive getting in with huge majorities in the House and Senate.

Like, shit's bad now. But it's still not as bad as the Great Depression, and that shit got us FDR.

There's a very good chance we're about to hit a massive upswing.

We just have to remember that the peaceful option of FDR and policy wasn't the only option on the table:

https://en.wikipedia.org/wiki/Bonus_Army

Just like Martin wouldn't have won without Malcolm, peaceful societal change almost always requires at least the threat of hypothetical violence along with it.

We were damn close to a civil war right when FDR got elected.

When shit gets bad enough, the best short-term plan for the majority is often "gang up on the people that have everything". Because they're the only people who have anything.

[–] I_Has_A_Hat@lemmy.world 1 points 5 hours ago

Hey now, that's unfair; people think more than 2 months ahead!

They think 3 months ahead, in line with the next quarterly report.

[–] SapphironZA@sh.itjust.works 37 points 1 day ago* (last edited 23 hours ago) (5 children)

People forget that a very short time ago there was no internet, and knowledge and fast communication were rare and slow.

Nothing in the last 500 years has changed our society so much, in such a short time.

AI has been around for decades now. The main recent breakthrough has been in its ability to imitate a human conversation. Until a similar breakthrough happens in its ability to reason and understand, it will continue to stagnate.

Right now AI development needs the current bubble to pop before significant progress can be made.

[–] deacon@lemmy.world 20 points 1 day ago (2 children)

I sincerely believe that our advancement in technology has outpaced our evolution and we are simply not equipped to wield it yet.

[–] fonix232@fedia.io 15 points 1 day ago

The issue is that our political systems still haven't caught up to the internet - they're at least 30 years behind everything.

This means that any effective change is much slower than the advancements being made, making it incredibly hard to legislate. To this day the US is debating whether using copyrighted material for training AI is a breach of copyright or not; hell, some morons are even claiming that artists shouldn't have the right to licence their own art under a no-AI licence!

[–] SapphironZA@sh.itjust.works 5 points 23 hours ago

It's certainly apparent in how much our brains are struggling to process the level of information we have access to.

You can see it in many areas of society: people have given up on fact-based reasoning and are simply following "vibes" in their decision making, which is a lot "cheaper" for the brain.

[–] fonix232@fedia.io 14 points 1 day ago (1 children)

LLMs don't just imitate human speech. They do much more in application - and that IS already displacing people, people who can't just "find a new job". People in call centers, (remote) customer support, personal assistants, and so on.

And then we haven't even touched on how it's changing IT. Software development alone is seeing massive changes, with more and more code being AI-generated and more and more functionality being offloaded to AI, which improves individual performance and allows companies to cut down on workforce. The issue with that? There aren't enough employers who could pick up those displaced people.

Oh, and then we haven't addressed the fact that this AI displacement also affects future generations in these jobs. In software development there's already a shift from interns and juniors to AI, because it's cheaper. This means that out of 100 fresh starters maybe, maybe ten will get the chance to actually gain experience and progress anywhere; the rest are discarded because AI is cheaper and "better" at those tasks.

Previous industrial shifts have caused similar displacement, but those were slow processes. The most well-known example would be the Luddites going against the mechanical loom. And while the Luddites weren't right about it, as handmade clothing did increase in price AND the displaced people were re-trained to manage the looms, it was also a slow process: the looms themselves were expensive and took time to replace manual workers, so not every textile factory could afford them, and there was demand for the increased capacity.

Compare that with today's AI shift and there's a clear distinction: within 3-4 years of LLMs showing up, we are on the verge of a potential societal collapse, with everyone and their mum trying to implement AI everywhere, even (especially!) in places it's not needed. This speed, this adoption rate, is simply not sustainable without planning for the displaced people. Because if UBI doesn't happen, we're truly looking at the most exposed bottom ~30% of earners (and even a good number of high earners!) having no income and no ability to get income, and things will mirror the situation a century ago, kick-starting another Great Depression, exacerbated by factors like much lower property ownership (yay, private equity buying up residential properties to rent them out at extortionate prices), much higher cost of living, and so on.

And we all know what the effects of the Great Depression culminated into. War, famine, ruin.

[–] cheesybuddha@lemmy.world 1 points 3 hours ago

AI will displace all the junior devs, companies will only need a few knowledgeable seniors on staff. Then when they retire, there won't be any other people in the pipeline. Then they'll die and nobody will know how the machines operate. Yada yada yada, we all become battery cells in Matrix farms

[–] krooklochurm@lemmy.ca 8 points 1 day ago (1 children)

The thing is that technology is not linear.

That could happen tomorrow.

It might never happen.

It likely isn't going to happen with LLMs but the next big breakthrough could happen at any time. Or never.

[–] Ach@lemmy.world 4 points 1 day ago (1 children)

I very respectfully but firmly disagree.

Human progress isn't just advancing; it's accelerating. If you were born in 1700 and died in 1775, basically everything at the time of your death was identical to the time of your birth.

If you were born in 1900 and died in 1975, you were born to horse-drawn carriages and died after seeing a man walk on the moon.

Now, even though our "AI" isn't real AI, just language models, it can still crunch numbers faster than anything in history. So progress is objectively going to keep accelerating.

[–] krooklochurm@lemmy.ca 9 points 1 day ago (1 children)

I sincerely don't understand how anything you wrote disagrees with my comment about technological advancement not being linear.

[–] Ach@lemmy.world 6 points 1 day ago (1 children)

Fair point, sorry - I didn't word it well. My bad.

You seem to think something bad might happen; I think that stone is already moving and can't be stopped.

[–] krooklochurm@lemmy.ca 6 points 1 day ago (1 children)

It might or it might not.

I agree that it likely will, given the insane progression in AI models of every kind and the absurd amount of money being invested in it, but it's not a certainty.

LLMs are likely a dead end but anyone that thinks the buck stops there is an idiot.

[–] Ach@lemmy.world -4 points 1 day ago* (last edited 1 day ago) (1 children)

I'd have to disagree that LLMs are a dead end. They aren't actual AI, but they can crunch data at a rate that will make them a bridge to actual AI. I guess I see this as a very dangerous and inevitable stepping stone.

LLMs will be able to crunch raw numbers to make actual AI possible, IMHO.

[–] krooklochurm@lemmy.ca 5 points 1 day ago (1 children)

You keep saying "number crunching". GPUs "crunch numbers", CPUs "crunch numbers", AI models ARE numbers.
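
To make that concrete, here's a minimal sketch (plain NumPy, not any particular model): a "model" is literally just arrays of numbers, and running it is just arithmetic on them.

```python
import numpy as np

# A tiny "model": its entire existence is a handful of number arrays.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # layer 1 weights/biases
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # layer 2 weights/biases

def forward(x):
    # "Running the model" = multiply, add, clamp. Number crunching.
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU layer
    return h @ W2 + b2

x = rng.normal(size=(1, 4))  # the input: numbers
print(forward(x))            # the output: more numbers
```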

[–] Ach@lemmy.world -4 points 1 day ago

Are you denying that it can be done faster now? And that even if it can't, people with money believe it can and are funding it?

This is moving fast, my dude. Look at how fast a term from Terminator made it into our daily lives.

[–] AcidiclyBasicGlitch@sh.itjust.works 4 points 5 hours ago* (last edited 5 hours ago)

You also need it to pop so the billionaires who want to gatekeep and control the future of AI (in order to make sure their names are attached to that future) can be knocked off the fucking pedestal they keep trying to place themselves on. They suck and they're making everything else suck too.

Imagine how much weirder and shittier the U.S. would currently be if only Steve Jobs and a handful of out of touch billionaires had been given this much fucking control of all future technology in the 80s.

As if attaching yourself to something successful before anyone else did, a long fucking time ago, gives you some kind of lifetime special status allowing you to decide what's best for all of humanity. Like it's only reasonable that you be allowed to ignore whatever regulations you want while using your power to enforce other regulations that protect your monopoly. You're not being a self-centered prick, you're "saving humanity from stagnation," while being completely oblivious that you and your mediocre state-protected monopolies ARE the fucking stagnation plaguing society and decimating hundreds of years of progress as fast as you possibly can.

[–] auraithx@lemmy.dbzer0.com 1 points 1 day ago* (last edited 1 day ago) (2 children)

The reasoning models were the breakthrough in its ability to reason and understand?

AI has solved 50-year-old grand challenges in biology. AlphaFold has predicted the structures of nearly all known proteins, a feat of "understanding" molecular geometry that will accelerate drug discovery by decades.

We aren't just seeing a "faster horse" in communication; we are seeing the birth of General Purpose Technologies that can perform cognitive labor. Stagnation is unlikely because, unlike the internet (which moved information), AI is beginning to generate solutions.

  1. Protein folding solved at near-experimental accuracy, breaking a 50-year bottleneck in biology and turning structure prediction into a largely solved problem at scale.

  2. Prediction and public release of structures for nearly all known proteins, covering the entire catalogued proteome rather than a narrow benchmark set.

  3. Proteome-wide prediction of missense mutation effects, enabling large-scale disease variant interpretation that was previously impossible by human analysis alone.

  4. Weather forecasting models that outperform leading physics-based systems on many accuracy metrics while running orders of magnitude faster.

  5. Probabilistic weather forecasting that exceeds the skill of top operational ensemble models, improving uncertainty estimation, not just point forecasts.

  6. Formal mathematical proof generation at Olympiad level difficulty, producing verifiable proofs rather than heuristic or approximate solutions.

  7. Discovery of new low-level algorithms, including faster sorting routines, that were good enough to be merged into production compiler libraries.

  8. Discovery of improved matrix multiplication algorithms, advancing a problem where progress had been extremely slow for decades.

  9. Superhuman long-horizon strategic planning in Go, a domain where brute force search is infeasible and abstraction is required.

  10. Identification of novel antibiotic candidates by searching chemical spaces far beyond what human-led methods can feasibly explore.

[–] SapphironZA@sh.itjust.works 4 points 23 hours ago (1 children)

Thank you for raising these points. Progress has certainly been made, and in specific applications AI tools have resulted in breakthroughs.

The question is whether it was transformative, or just incremental improvement, i.e. a faster horse.

I would also argue that there is a significant distinction between predictive AI systems applied to analysis and the use of LLMs. The former has been responsible for the majority of the breakthroughs in applied AI, yet the latter is getting all the recent attention and investment.

It's part of the reason why I think the current AI bubble is holding back AI development. So much investment is being made for the sake of extracting wealth from individuals and investment vehicles, rather than in something that will be beneficial in the long term.

Predictive AI (old AI) certainly is a transformative technology overall, as it has already proven over the last 40 years.

I would argue that what most people call AI today, LLMs, is not going to be transformative. They do a very good imitation of human language, but they completely lack the ability to reason beyond the information they're trained on. There has been some progress with building specific modules for completing certain analytical tasks, like mathematics and statistical analysis, but not in the ability to reason.

It might be possible to do that through brute force in a sufficiently large LLM, but I strongly suspect we lack the global computing power by a few orders of magnitude to get to a mammalian brain and the number of connections it can make.

But even if we could, we would also need to improve power generation and efficiency by a few orders of magnitude as well.

I would love to see the AI bubble pop, so that the truly transformative work can progress, rather than the current "how do we extract wealth" focus of AI. So much of what is happening now is the same as the dot-com bubble, but at a much larger scale.

[–] auraithx@lemmy.dbzer0.com 1 points 7 hours ago* (last edited 7 hours ago)

You’re assuming that transformation only counts when it yields visible scientific breakthroughs. That overlooks how many technologies reshape economies by compressing time, labor, and coordination across everyday work. When a tool removes friction from millions of small interactions, its cumulative effect can be structural even if each individual use feels modest, much like spreadsheets, search engines, or email once did.

The distinction between predictive systems and LLMs is broadly right, but in practice the boundary is porous. Most high-impact AI systems still rely on classical predictive models, optimization methods, and domain-specific algorithms, while LLMs increasingly act as a control and translation layer. They map ambiguous human intent into structured actions, route tasks across tools, and integrate heterogeneous systems that previously required expert interfaces. This does not make LLMs the source of breakthroughs, but it does make them central to how breakthroughs scale, combine, and reach non-experts.
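
As a toy illustration of that control-and-translation role (all names here are hypothetical and the model call is stubbed, not any real API), the pattern is simply: free-form intent in, structured action out, with deterministic tools doing the actual work.

```python
import json

def llm(prompt: str) -> str:
    # Stub standing in for a language model call; a real system would
    # prompt the model to emit a structured action for the request.
    return json.dumps({"tool": "calculator", "args": {"expr": "19.99 * 3"}})

TOOLS = {
    # The actual work is done by ordinary, deterministic code.
    "calculator": lambda args: eval(args["expr"], {"__builtins__": {}}),
}

def handle(user_request: str):
    action = json.loads(llm(user_request))        # ambiguous text -> structured action
    return TOOLS[action["tool"]](action["args"])  # route to the right tool

print(handle("what do three of those $19.99 things cost?"))  # 59.97
```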

The reasoning critique strengthens when framed around control and guarantees rather than capability. LLMs do generalize to new problems, so their limitation is not simple memorization. Their reasoning emerges from next-token prediction, not from an explicit objective tied to truth, proof, or logical consistency. This architecture optimizes for plausibility and coherence, sometimes producing fluent but unfounded claims. The problem is not that LLMs reason poorly, but that they reason without dependable constraints.

The hallucination problem can be substantially reduced, but within a single LLM it cannot be eliminated. That limit, however, applies to models, not necessarily to systems. Multi-model and hybrid architectures already point toward ways of approaching near-perfect reliability. Retrieval and grounding modules can verify claims against live data, tool use can offload factual and computational tasks to systems with hard guarantees, and ensembles of models can cross-check, critique, and converge on shared answers. In such configurations, the LLM serves as a reasoning interface while external components enforce truth and precision. The remaining difficulty lies in coordination, ensuring that every step, claim, and interpretation remains tied to verifiable evidence. Even then, edge cases, underspecified prompts, or novel domains can reintroduce small error rates. But in principle, hallucination can be driven to vanishingly low levels when language models are treated as parts of truth-preserving systems rather than isolated generators.
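
To make the coordination idea concrete, here is a deliberately simplified sketch (generate, retrieve, and the verifier are all stand-ins, not any real library's API): sample several candidate answers, keep only those supported by retrieved evidence, and abstain when nothing survives. The abstention step is what drives hallucination rates down.

```python
def generate(question: str) -> str:
    return "Paris"  # stub for an LLM call

def retrieve(claim: str) -> list[str]:
    # Stub for a grounding module (search index, database, live data).
    return ["Paris is the capital of France."]

def supported(claim: str, evidence: list[str]) -> bool:
    # Stub verifier; a real system would use entailment checking,
    # tool calls with hard guarantees, or a second model cross-checking.
    return any(claim.lower() in doc.lower() for doc in evidence)

def answer(question: str, n_samples: int = 3) -> str:
    candidates = {generate(question) for _ in range(n_samples)}      # ensemble
    grounded = [c for c in candidates if supported(c, retrieve(c))]  # grounding
    return grounded[0] if grounded else "I don't know"               # converge or abstain

print(answer("What is the capital of France?"))
```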

The compute and energy debate is directionally sensible but unsettled. It assumes progress through brute-force scaling toward brain-like complexity, yet history shows that architectural shifts, hybridization, and efficiency gains often reset apparent limits. Real constraints are likely, but their location and severity remain uncertain.

Where your argument is strongest is on incentives. The current investment cycle undoubtedly rewards short-term monetisation and narrative dominance over long-term scientific and infrastructural progress. This dynamic can crowd out foundational research in safety, evaluation, and interpretability. Yet, as in past bubbles, the aftermath tends to leave behind useful assets, tools, datasets, compute capacity, and talent, that more serious work can build upon once the hype cools.

[–] jj4211@lemmy.world 2 points 6 hours ago* (last edited 6 hours ago) (1 children)

The "reasoning" models aren't really reasoning, they are generating text that resembles "train of thought". If you examine some of the reasoning chains with errors, you can see some errors are often completely isolated, with no lead up and then the chain carries on as if the mistake never happened. Errors that when they happen in an actual human reasoning chain propagate.

LLM reasoning chains are essentially generated fanfics of what reasoning would look like. It turns out that expending tokens to generate more text and then discarding it does make the retained text more likely to be consistent with the desired output, but "reasoning" is more a marketing term than a description of what is really happening.
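
Mechanically it's something like this (stubbed model call, not any vendor's actual API): spend a token budget on a scratchpad, then throw the scratchpad away unexamined.

```python
def sample_tokens(prompt: str, budget: int) -> str:
    # Stand-in for autoregressive sampling from an LLM.
    return "<think>2+2... carry the... so 4</think> The answer is 4."

def reasoning_answer(question: str) -> str:
    raw = sample_tokens(question, budget=1024)
    # The "chain of thought" is just more generated text; it is stripped
    # out and never checked for validity. Only the remainder is returned.
    _, _, visible = raw.partition("</think>")
    return visible.strip()

print(reasoning_answer("What is 2+2?"))  # -> "The answer is 4."
```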

[–] auraithx@lemmy.dbzer0.com 3 points 5 hours ago* (last edited 4 hours ago)

LLMs do not reason in the human sense of maintaining internal truth states or causal chains, sure. They predict continuations of text, not proofs of thought. But that does not make the process ‘fake’. Through scale and training, they learn statistical patterns that encode the structure of reasoning itself, and when prompted to show their work they often reconstruct chains that reflect genuine intermediate computation rather than simple imitation.

Stating that some errors appear isolated is fair, but the conclusion drawn from it is not. Human reasoning also produces slips that fail to propagate because we rebuild coherence as we go. LLMs behave in a similar way at a linguistic level. They have no persistent beliefs to corrupt, so an error can vanish at the next token rather than spread. The absence of error propagation does not prove the absence of reasoning. It shows that reasoning in these systems is reconstructed on the fly rather than carried as a durable mental state.

Calling it marketing misses what matters. LLMs generate text that functions as a working simulation of reasoning, and that simulation produces valid inferences across a broad range of problems. It is not human thought, but it is not empty performance either. It is a different substrate for reasoning, emergent, statistical, and language-based, and it can still yield coherent, goal-directed outcomes.

[–] JayDee@lemmy.sdf.org 8 points 1 day ago

I would disagree. Many other technologies have eliminated more jobs, caused more damage to society and the environment, and been more generally consequential. AI has been bad in all those ways, but is by no means the worst of them all. Let's not forget that we're still dealing with the social damage and ripple effects of the invention of the atomic bomb, and that previous video and audio manipulation tools had already severely damaged social trust in media. LLMs have just worsened those already significantly damaged systems.

[–] Tehbaz@lemmy.wtf 7 points 1 day ago

The way things are going, the working classes around the world will need to start killing the rich once their AI takes our incomes away.

[–] cheesybuddha@lemmy.world 3 points 3 hours ago

I think the internet has it beat. Without the internet, "AI" wouldn't be nearly as ubiquitous. Or as useful, I'd wager, especially in all the wrong ways.

[–] MiddleAgesModem@lemmy.world 3 points 3 hours ago

Nuclear weapons be damned.

[–] NoneOfUrBusiness@fedia.io 2 points 1 day ago

Kid named steam engine:

[–] Nalivai@lemmy.world 1 points 1 hour ago

Bernie is still using the old, sensible definition of the word AI, from back when it meant something, back when we could hold philosophical conversations about the future and technical conversations about applications of "machine learning".
It's all gone now: AI doesn't exist, can't do shit, but will ruin our lives anyway.

[–] sin_free_for_00_days@sopuli.xyz -2 points 1 day ago (2 children)

And the senator called it “the most consequential technology in the history of humanity”

The wheel. Fire. The steam engine. Easy access to porn. I could go on.

[–] IronBird@lemmy.world 8 points 1 day ago

man was not meant to see bukkake gangbangs in their teens, it sets expectations way too high

[–] Almacca@aussie.zone 2 points 3 hours ago

The wheel. Fire. The steam engine. Easy access to porn. I could go on

... sliced bread.