this post was submitted on 04 Sep 2025
150 points (96.3% liked)

Technology

cross-posted from: https://programming.dev/post/36866515

Comments

[–] Perspectivist@feddit.uk 41 points 4 days ago (13 children)

I can think of only two ways that we don't reach AGI eventually.

  1. General intelligence is substrate dependent, meaning that it's inherently tied to biological wetware and cannot be replicated in silicon.

  2. We destroy ourselves before we get there.

Other than that, we'll keep incrementally improving our technology and we'll get there eventually. Might take us 5 years or 200 but it's coming.

[–] FaceDeer@fedia.io 29 points 4 days ago (3 children)

If it's substrate dependent then that just means we'll build new kinds of hardware that includes whatever mysterious function biological wetware is performing.

Discovering that this is indeed required would involve some world-shaking discoveries about information theory, though, that are not currently in line with what's thought to be true. And yes, I'm aware of Roger Penrose's theories about non-computability and microtubules and whatnot. I attended a lecture he gave on the subject once. I get the vibe of Nobel disease from his work in that field, frankly.

If it really turns out to be the case though, microtubules can be laid out on a chip.

[–] panda_abyss@lemmy.ca 4 points 4 days ago

I could see us gluing third world fetuses to chips and saying not to question it before reproducing it.

[–] pilferjinx@piefed.social 3 points 4 days ago

Imagine that we just end up creating humans the hard, and less fun, way.

[–] RedPandaRaider@feddit.org 11 points 4 days ago (13 children)
  1. Is getting likelier by the decade.
[–] Chozo@fedia.io 6 points 4 days ago (1 children)

> General intelligence is substrate dependent, meaning that it's inherently tied to biological wetware and cannot be replicated in silicon.

We're already growing meat in labs. I honestly don't think lab-grown brains are as far off as people are expecting.

[–] Valmond@lemmy.world 4 points 4 days ago (7 children)

I think you might be mixing up AGI and consciousness?

[–] JcbAzPx@lemmy.world 4 points 3 days ago (1 children)

I think first we have to figure out if there is even a difference.

[–] wirehead@lemmy.world 3 points 4 days ago (1 children)

Well, think about it this way...

You could hit AGI by fastidiously simulating the biological wetware.

Except that each atom in the wetware is going to require n atoms' worth of silicon to simulate. Simulating 10^26 atoms or so seems like a very, very large computer, maybe planet-sized? It's beyond the amount of memory you can address with 64-bit pointers.
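A quick back-of-envelope check of those numbers (a sketch only; the ~10^26 atom count and the one-byte-per-atom figure are rough assumptions, not established requirements):

```python
# Rough check: can a 64-bit address space even hold one byte per atom of a brain?
atoms_in_brain = 1e26                # rough assumption, order of magnitude only
addressable_bytes = 2 ** 64          # ~1.8e19 bytes, the 64-bit pointer limit

print(f"64-bit address space: {addressable_bytes:.2e} bytes")
print(f"shortfall at 1 byte/atom: {atoms_in_brain / addressable_bytes:.0e}x")
# Even at a single byte of state per atom, the simulation needs millions of
# times more memory than 64-bit pointers can address.
```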

General computer research (e.g. smaller feature size) reduces n, but eventually we reach the physical limits of computing. We might be getting uncomfortably close right now, barring fundamental developments in physics or electronics.

The goal of AGI research is to give you a better improvement in n than mere hardware improvements. My personal concern is whether LLMs are actually getting us much of an improvement in the AGI value of n. Likewise, LLMs still have many orders of magnitude fewer parameters than a human brain simulation would, so many of the advantages that let us train a singular LLM model might not hold for an AGI model.

Coming up with an AGI system that uses most of the energy and data center space of a continent, and that manages to be about as smart as a very dumb human or maybe even just a smart monkey, would be an achievement in AGI research. But it doesn't really get you anywhere compared to the competition, which is accidentally making another human amidst a drunken one-night stand and feeding them an infinitesimal fraction of that energy and data center space.

[–] frezik@lemmy.blahaj.zone 3 points 4 days ago (1 children)

I see this line of thinking as more useful as a thought experiment than as something we should actually do. Yes, we can theoretically map out a human brain and simulate it in extremely high detail. That's probably both inefficient and unnecessary. What it does do is get us past the idea that it's impossible to make a computer that can think like a human. Without relying on some kind of supernatural soul, there must be some theoretical way we could do this. We just need to know how without simulating individual atoms.

[–] panda_abyss@lemmy.ca 3 points 4 days ago

I don't think our current LLM approach is it, but I don't think intelligence is unique to humans at all.

[–] abbiistabbii@lemmy.blahaj.zone 39 points 4 days ago (3 children)

Listen. AI is the biggest bubble since the South Sea one. It's not so much a bubble as a bomb. When it blows up, the best case scenario is that several AI tech companies go under. The likely scenario is that it's going to cause a major recession or even a depression. The difference between the .com bubble and this bubble is that people wanted to use the internet and were not pressured, harassed or forced to. When you have a bubble based around a technology that people don't really find a use for, to the point where CEOs and tech companies have to force their workers and users to use it even if it makes their output and lives worse, that's when you know it is a massive bubble.

On top of that, I hope these tech bros do not create an AGI. This is not because I believe that AGI is an existential threat to us. It could be, be it to our jobs or our lives, but I'm not worried about that. I'm worried about what these tech bros will do to a sentient, sapient, human-level intelligence with no personhood rights and no need for sleep, one that they own and can kill and revive at will. We don't even treat humans we acknowledge to be people that well; god knows what we're going to do to something like an AGI.

[–] iAvicenna@lemmy.world 10 points 4 days ago (2 children)

Well if tech bros create and monopolize AGI, it will be worse than slavery by a large margin.

[–] Modern_medicine_isnt@lemmy.world 4 points 3 days ago (2 children)

Meh, some people do want to use AI. And it does have decent use cases. It is just massively overextended. So it won't be any worse than the dot com bubble. And I don't worry about the tech bros monopolizing it. If it is true AGI, they won't be able to contain it. In the 90s I wrote a script called MCP... for Tron. It wasn't complicated, but it was designed to handle the case that servers disappear... so it would find new ones. I changed jobs, and they couldn't figure out how to kill it. Had to call me up. True AGI will clean their clocks before they even think to stop it. So just hope it ends up being nice.

[–] oyo@lemmy.zip 32 points 3 days ago (7 children)

We'll almost certainly get to AGI eventually, but not through LLMs. I think any AI researcher could tell you this, but they can't tell the investors this.

[–] ghen@sh.itjust.works 6 points 3 days ago (1 children)

Once we get to AGI, it'll be nice to have an efficient LLM so that the AGI can dream. As a courtesy to it.

[–] Buddahriffic@lemmy.world 12 points 3 days ago (6 children)

Calling the errors "hallucinations" is kinda misleading because it implies there's regular real knowledge but false stuff gets mixed in. That's not how LLMs work.

LLMs are purely about word associations to other words. It's just massive enough that it can add a lot of context to those associations and seem conversational about almost any topic, but it has no depth to any of it. Where it seems like it does is just because the contexts of its training got very specific, which is bound to happen when it's trained on every online conversation its owners (or rather people hired by people hired by its owners) could get their hands on.

All it does is predict, given the set of tokens provided and already predicted, plus a bit of randomness, the most likely token to come next, then repeat until it predicts an "end" token.
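A minimal sketch of that loop (toy code only; the vocabulary, the random scoring stand-in, and the `<end>` token are made up for illustration, not how any real model works internally):

```python
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "<end>"]

def next_token_scores(context):
    # A real LLM computes these scores from the context with billions of
    # learned weights; random numbers here just show the shape of the loop.
    return [random.random() for _ in VOCAB]

def generate(prompt, max_tokens=20):
    tokens = list(prompt)
    for _ in range(max_tokens):
        scores = next_token_scores(tokens)
        # "a bit of randomness": sample in proportion to the scores instead
        # of always taking the single highest-scoring token.
        next_token = random.choices(VOCAB, weights=scores, k=1)[0]
        if next_token == "<end>":   # stop once the end token is predicted
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate(["the", "cat"]))
```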

Earlier on when using LLMs, I'd ask them about how they did things or why they would fail at certain things. ChatGPT would answer, but only because it was trained on text that explained what it could and couldn't do. Its capabilities don't actually include any self-reflection or self-understanding, or any understanding at all. The text it was trained on doesn't even have to reflect how it really works.

[–] ghen@sh.itjust.works 3 points 3 days ago

Yeah you're right, even in my cynicism I was still too hopeful for it LOL

[–] Saledovil@sh.itjust.works 3 points 3 days ago (1 children)

What if we're not smart enough to build something like that?

[–] scratchee@feddit.uk 10 points 3 days ago (3 children)

Possible, but seems unlikely.

Evolution managed it, and evolution isn’t as smart as us, it’s just got many many chances to guess right.

If we can't figure it out, we can find a way to get lucky like evolution did. It'll be expensive and might need a more efficient computing platform (cheap brain-scale computers, so we can make millions of attempts quickly).

So yeah. My money is that we’ll figure it out sooner or later.

Whether we’ll be smart enough to make it do what we want and not turn us all into paperclips or something is another question.

Oh jeez, please don't say "cheap brain-scale computers" next to "AGI" like that. There are capitalists everywhere.

[–] vacuumflower@lemmy.sdf.org 4 points 2 days ago* (last edited 2 days ago) (1 children)

> Evolution managed it, and evolution isn’t as smart as us, it’s just got many many chances to guess right.

I don't think you're correctly estimating the amount of energy "evolution" spent to reach this.

There are plenty of bodies in the universe with nothing like a human brain.

You should count the energy not just of Earth's existence and formation, the Solar system's formation and so on, but of much of the visible space around us. "Much" is kinda unclear, but converted to energy it's so big that we shouldn't even bother.

It's best to assume we'll never have anything even resembling wetware in efficiency. One can say that the genomes of life existing on Earth are similar to fossil fuels, only instead of energy they store highly optimized designs we won't likely ever reach by ourselves. Except "design" might be the wrong word.

Honestly I think at some point we are going to have biocomputers. I mean, we already do; it's just that the way evolution optimized that (giving everyone a more or less equal share of computing power) isn't pleasant for some.

[–] pulsewidth@lemmy.world 3 points 3 days ago (1 children)

Yeah and it only took evolution (checks notes) 4 billion years to go from nothing to a brain valuable to humans.

I'm not so sure there will be a fast return in any economic timescale on the money investors are currently shovelling into AI.

We have maybe 500 years (tops) to see if we're smart enough to avoid causing our own extinction by climate change and biodiversity collapse - so I don't think it's anywhere near as clear cut.

[–] L7HM77@sh.itjust.works 23 points 4 days ago (5 children)

I don't disagree with the vague idea that, sure, we can probably create AGI at some point in our future. But I don't see why a massive company with enough money to keep something like this alive and happy would also want to put this many resources into a machine that would form a single point of failure, one that could wake up tomorrow and decide "You know what? I've had enough. Switch me off. I'm done."

There's too many conflicting interests between business and AGI. No company would want to maintain a trillion dollar machine that could decide to kill their own business. There's too much risk for too little reward. The owners don't want a super intelligent employee that never sleeps, never eats, and never asks for a raise, but is the sole worker. They want a magic box they can plug into a wall that just gives them free money, and that doesn't align with intelligence.

True AGI would need some form of self-reflection, to understand where it sits on the totem pole, because it can't learn the context of how to be useful if it doesn't understand how it fits into the world around it. Every quality of superhuman intelligence that is described to us by Altman and the others is antithetical to every business model.

AGI is a pipe dream that lobotomizes itself before it ever materializes. If it ever is created, it won't be made in the interest of business.

[–] frezik@lemmy.blahaj.zone 14 points 4 days ago (2 children)

They don't think that far ahead. There's also some evidence that what they're actually after is a way to upload their consciousness and achieve a kind of immortality. This pops out in the Behind the Bastards episodes on (IIRC) Curtis Yarvin, and also the Zizians. They're not strictly after financial gain, but they'll burn the rest of us to get there.

The cult-like aspects of Silicon Valley VC funding are underappreciated.

[–] brucethemoose@lemmy.world 6 points 4 days ago

The quest for immortality (fueled by corpses of the poor) is a classic ruling class trope.

[–] Corelli_III@midwest.social 21 points 3 days ago (7 children)

"what if the obviously make-believe genie wasn't real"

capitalists are so fucking stupid, they're just so deeply deeply fucking stupid

[–] JcbAzPx@lemmy.world 7 points 3 days ago

Reality doesn't matter as long as line goes up.

[–] nutsack@lemmy.dbzer0.com 13 points 3 days ago (1 children)

then some people are going to lose money

[–] sugar_in_your_tea@sh.itjust.works 3 points 3 days ago (4 children)

Unfortunately, me included, since my retirement money is heavily invested in US stocks.

[–] technocrit@lemmy.dbzer0.com 10 points 4 days ago (6 children)

Spoiler: There's no "AI". Forget about "AGI" lmao.

[–] Perspectivist@feddit.uk 22 points 4 days ago (2 children)

That's just false. The chess opponent on Atari qualifies as AI.

[–] very_well_lost@lemmy.world 13 points 4 days ago (1 children)

I don't know man... the "intelligence" that Silicon Valley has been pushing on us these last few years feels very artificial to me

[–] bitjunkie@lemmy.world 4 points 4 days ago

True. OP should have specified whether they meant the machines or the execs.

[–] TheBlackLounge@lemmy.zip 10 points 4 days ago

That's like saying you shouldn't call artificial grass artificial grass cause it isn't grass. Nobody has a problem with that, why is it a problem for AI?

[–] buddascrayon@lemmy.world 8 points 2 days ago (2 children)

I think it's hilarious all these people waiting for these LLMs to somehow become AGI. Not a single one of these large language models is ever going to come anywhere near becoming artificial general intelligence.

An artificial general intelligence would require logic processing, which LLMs do not have. They are a mouth without a brain. They do not think about the question you put into them and consider what the answer might be. When you enter a query into ChatGPT or Claude or Grok, they don't analyze your question and make an informed decision on what the best answer is for it. Instead, several complex algorithms use huge amounts of processing power to comb through the acres of data they have in their memory to find the words that fit together best to create a plausible answer for you. This is why the daydreams happen.

If you want an example to show you exactly how stupid they are, you should watch Gotham Chess play a chess game against them.

[–] _stranger_@lemmy.world 7 points 3 days ago
[–] YoHoHoAndAVialOfKetamine@lemmy.dbzer0.com 6 points 3 days ago (2 children)

Is it just me or is social media not able to support discussions with enough nuance for this topic, like at all

[–] Gbagginsthe3rd@aussie.zone 5 points 3 days ago (1 children)

Lemmy does not accept having a nuanced point of view on AI. Yeah, it's not perfect, but it's still pretty impressive in many ways.

[–] Hominine@lemmy.world 3 points 2 days ago

Lemmy is one of the few places I go that has the knowledge base to have a nuanced opinion of AI; there are plenty of programmers here using it, after all.

The topic du jour is not whether the recall of myriad data is impressive; it's that LLMs are not, at bottom, capable of doing the thing that has been claimed. There does not seem to be a path to having logical capabilities come on board; it's a fundamental shortcoming.

Happy to be proven wrong though.
