this post was submitted on 05 Dec 2025
126 points (99.2% liked)

Not The Onion


cross-posted from: https://lemmy.nz/post/31404472

[–] tal@lemmy.today 24 points 1 day ago* (last edited 1 day ago) (1 children)

The RAM that's being produced at scale for parallel computation is HBM; that's what the capacity is going towards. It's not in the form of DIMMs, so you can't take it after it's been used and stick it into a PC's motherboard.
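One way to see the split on a machine with an NVIDIA GPU is that the on-package memory only shows up through the GPU driver's own reporting, as a pool separate from the host's DIMM-backed system RAM. A minimal sketch, assuming the third-party nvidia-ml-py (pynvml) and psutil packages are installed and an NVIDIA GPU/driver is present:

```python
# Sketch: on-package GPU memory (HBM on data-center parts) is reported per device
# by the NVIDIA driver, entirely separate from the host's DIMM-backed system RAM.
# Assumes the nvidia-ml-py (pynvml) and psutil packages plus an NVIDIA GPU/driver.
import pynvml
import psutil

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i} ({name}): {mem.total / 2**30:.0f} GiB on-package memory")
finally:
    pynvml.nvmlShutdown()

# Host system RAM: the socketed, DIMM-style pool a PC motherboard actually uses.
print(f"System RAM: {psutil.virtual_memory().total / 2**30:.0f} GiB")
```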

EDIT:

https://developer.nvidia.com/blog/inside-nvidia-blackwell-ultra-the-chip-powering-the-ai-factory-era/

Inside NVIDIA Blackwell Ultra: The Chip Powering the AI Factory Era

Blackwell Ultra doesn’t just scale compute—it scales memory capacity to meet the demands of the largest AI models. With 288 GB of HBM3e per GPU, it offers 3.6x more on-package memory than H100 and 50% more than Blackwell, as shown in Figure 5.
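The ratios in that quote line up with the published per-GPU capacities, since H100 shipped with 80 GB of HBM3 and Blackwell (B200) with 192 GB of HBM3e. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the capacity claims in the quoted NVIDIA post.
blackwell_ultra_gb = 288   # HBM3e per Blackwell Ultra GPU, per the quote
h100_gb = 80               # H100's on-package HBM3
blackwell_gb = 192         # Blackwell (B200) on-package HBM3e

print(blackwell_ultra_gb / h100_gb)       # 3.6  -> "3.6x more than H100"
print(blackwell_ultra_gb / blackwell_gb)  # 1.5  -> "50% more than Blackwell"
```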

[–] kescusay@lemmy.world 3 points 1 day ago

True. So in six months, the market will be flooded with cheap, barely-used "AI" server hardware no one wants, and RAM for PCs will still be stupid expensive, because we live in the stupidest timeline.