brucethemoose@lemmy.world 10 points 1 day ago (last edited 1 day ago)

Kinda odd. 8 GPUs to a CPU is pretty much standard, and less 'wasteful,' as the CPU ideally shouldn't do much for ML workloads.

Wasted CPU aside, you generally want 8 GPUs to a pod for inference, so you can batch a model as much as possible without physically going 'outside' the server. It makes me wonder if they just can't put as much PCIe/NVLink on it as AMD can?
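
To make "batching without going outside the server" concrete, here's a minimal sketch (not from the thread) using vLLM's tensor parallelism: one model is sharded across all 8 GPUs in a node, so shard-to-shard traffic stays on the node's own NVLink/PCIe fabric instead of crossing the network. The model name is just an example.

```python
# Minimal sketch: tensor-parallel inference across all 8 GPUs in one server
# with vLLM, keeping inter-shard traffic inside the node.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # example model, not from the post
    tensor_parallel_size=8,                     # one shard per GPU in the server
)

params = SamplingParams(max_tokens=64)
outputs = llm.generate(["Why keep model shards inside one node?"], params)
print(outputs[0].outputs[0].text)
```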

LPCAMM is sick though. So is the sheer compactness of this thing; I bet HPC folks will love it.

Badabinski@kbin.earth 3 points 1 day ago

Yeah, 88/2 is weird as shit. Perhaps the GPUs are especially large? I know NVIDIA has that thing where you can slice up a GPU into smaller units (MIG, some fuckass TLA), so maybe they're counting on people doing that.
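
That feature is MIG (Multi-Instance GPU). As a minimal sketch of how a workload lands on one slice, assuming an admin already partitioned the card (e.g. via `nvidia-smi mig`); the UUID below is a placeholder:

```python
# Minimal sketch, assuming the GPU was already partitioned into MIG slices.
# A process picks one slice by its MIG UUID before CUDA initializes;
# the UUID here is a placeholder, not a real device.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

import torch  # imported after setting the variable, so CUDA only sees the slice

x = torch.randn(4096, 4096, device="cuda")  # runs on just that fraction of the GPU
print(torch.cuda.get_device_name(0))        # reports the MIG device
```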

brucethemoose@lemmy.world 2 points 1 day ago

They could be 'doubled up' under the heatspreader, yeah, so kinda 4x GPUs to a CPU.

And yeah... perhaps they're maintaining CPU 'parity' with 2P EPYC for slicing it up into instances.