this post was submitted on 01 Nov 2025
100 points (98.1% liked)
Technology
    you are viewing a single comment's thread
Kinda odd. 8 GPUs to a CPU is pretty much standard, and less 'wasteful,' as the CPU ideally shouldn't do much for ML workloads.
Even setting the wasted CPU aside, you generally want 8 GPUs to a pod for inference, so you can batch a model as much as possible without physically going 'outside' the server (see the sketch below). It makes me wonder if they just can't put as much PCIe/NVLink on it as AMD can?
LPCAMM is sick though. So is the sheer compactness of this thing; I bet HPC folks will love it.
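For anyone wondering what "batching a model without leaving the server" looks like in practice, here's a minimal sketch of single-node tensor parallelism. The library (vLLM), model name, and prompt are my own illustration, not anything named in the thread or the article, and it assumes one server with 8 GPUs:

```python
# Illustrative only: serve one model sharded across all 8 GPUs in a single
# node, so weights and per-token activations stay inside that server's
# NVLink/PCIe domain. Model name and prompt are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # assumed example model
    tensor_parallel_size=8,                     # shard across the node's 8 GPUs
)

params = SamplingParams(max_tokens=64)
outputs = llm.generate(["Why keep tensor parallelism inside one server?"], params)
print(outputs[0].outputs[0].text)
```

With `tensor_parallel_size=8`, every layer's weights and activations are split across the 8 GPUs and synchronized over the in-box interconnect, which is why an 8-GPU pod is the usual unit for this.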
Yeah, 88/2 is weird as shit. Perhaps the GPUs are especially large? I know NVIDIA has that thing where you can slice up a GPU into smaller units (I can't remember what it's called, it's some fuckass TLA), so maybe they're counting on people doing that.
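The slicing feature is most likely NVIDIA's MIG (Multi-Instance GPU); that's my inference, since the commenter doesn't name it. Under that assumption, here's a rough sketch using the nvidia-ml-py (pynvml) bindings to check whether MIG is enabled on a GPU and list its slices:

```python
# Rough sketch: inspect MIG (Multi-Instance GPU) state via nvidia-ml-py.
# Assumes a MIG-capable GPU at index 0; purely illustrative.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# nvmlDeviceGetMigMode returns (current mode, pending mode).
current_mode, pending_mode = pynvml.nvmlDeviceGetMigMode(handle)
print("MIG enabled:", current_mode == pynvml.NVML_DEVICE_MIG_ENABLE)

# Enumerate the MIG instances carved out of this physical GPU, if any.
for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)):
    try:
        mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(handle, i)
    except pynvml.NVMLError:
        continue  # this MIG slot is not populated
    print("MIG device", i, pynvml.nvmlDeviceGetName(mig))

pynvml.nvmlShutdown()
```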
They could be 'doubled up' under the heatspreader, yeah, so kinda 4x GPUs to a CPU.
And yeah... perhaps they're maintaining CPU 'parity' with 2P EPYC for slicing it up into instances.