this post was submitted on 25 Nov 2025
672 points (98.8% liked)

Generative “AI” data centers are gobbling up trillions of dollars in capital, not to mention heating up the planet like a microwave. As a result, there's a capacity crunch in memory production that has sent RAM prices sky high, up more than 100 percent in the last few months alone. Multiple stores are so tired of adjusting the prices day to day that they won't even display them; you find out how much it costs at checkout.

[–] Artisian@lemmy.world 1 points 17 hours ago* (last edited 17 hours ago) (1 children)

? Massive GPU server racks are relatively easy to repurpose for several things. The most likely (if sad) is crypto mining, but there are also expensive weather simulations, cloud gaming, video hosting, etc.

Requesting a source that these centers are hard to repurpose. I find myself pretty skeptical. Computers are generally multipurpose and easy to swap tasks on.

[–] Trainguyrom@reddthat.com 1 points 16 hours ago (1 children)

Is there enough demand, though, for thousands of servers with purpose-built ARM processors (which may or may not have any publicly available kernel support), each driving 4-8 Nvidia datacenter GPUs at roughly 600 W a pop? Yes, some will be repurposed, but there simply won't be enough demand to absorb them immediately. Realistically, the companies operating these datacenters will liquidate the racks, probably liquidate some of the datacenters entirely, and thousands of servers will hit the secondhand market for next to nothing, while some datacenter buildings sit empty and unmaintained until they're either bought up to be repurposed, bought up to be refurbished and brought back into datacenter use, or torn down, just like an empty Super Walmart location.
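
To put those power figures in rough perspective, here's a quick back-of-envelope sketch in Python. The 4-8 GPU counts and the ~600 W per GPU come from the comment above; the host/cooling overhead and the electricity rate are placeholder assumptions, not measured numbers.

```python
# Back-of-envelope estimate of per-server power draw and yearly electricity
# cost for an AI server with 4-8 datacenter GPUs at ~600 W each.
# ASSUMPTIONS: the 800 W host/cooling overhead and the $0.10/kWh rate are
# illustrative placeholders, not data about any real deployment.

def server_power_w(gpu_count: int, gpu_watts: float = 600.0,
                   host_overhead_w: float = 800.0) -> float:
    """Total draw in watts: GPUs plus an assumed host/cooling overhead."""
    return gpu_count * gpu_watts + host_overhead_w

def yearly_energy_cost(power_w: float, usd_per_kwh: float = 0.10) -> float:
    """Electricity cost per year at an assumed industrial rate."""
    kwh_per_year = power_w / 1000.0 * 24 * 365
    return kwh_per_year * usd_per_kwh

for gpus in (4, 8):
    watts = server_power_w(gpus)
    print(f"{gpus} GPUs: ~{watts / 1000:.1f} kW, "
          f"~${yearly_energy_cost(watts):,.0f}/yr in electricity")
```

Even with these rough placeholder numbers, a single box runs a few thousand dollars a year just in power, which is part of why a flood of secondhand servers may struggle to find buyers.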

Some of the datacenters will be reworked for general compute, and maybe a couple will keep some AI capacity. But given the sheer quantity of compute being stood up for the AI bubble and the sheer scale of that bubble, basically every major tech company is likely to shrink significantly when it pops. We're talking about companies that currently have market caps measured in trillions and that literally make up a full quarter of the entire value of the New York Stock Exchange. It's going to be a bloodbath.

Remember how small the AI field was 6 years ago? It was purely the domain of academic research, fighting for scraps, outside of a handful of companies big enough to invest in an AI engineer or two on the off chance they could make something useful. We're probably looking at a correction back down to nearly that scale. People who have drunk the Kool-Aid will wake up one day and realize how shit the output of generative AI is compared to the average professional's work.

[–] Artisian@lemmy.world 1 points 13 hours ago (1 children)

Thank you for fleshing out your world model and theory. I think this model falls short of a source (and contradicts some other AI-pessimistic economic predictions, namely a crash in computing cost and in crypto), but it could be developed into something I'd find compelling.

Let me brainstorm aloud about what I think this world model predicts that we might have data on...

Did we see a crash in ISP prices, home and industry internet use, domain hosting, or other computing services in the dotcom bubble? That situation seems extremely analogous, but my vibe was that several of these did not drop (ISP prices, I suspect, were stable), and some saw a dip but stayed well above early-internet rates (domain hosting)? I feel like there'd be a good analogy here, but I'm struggling to find a way to operationalize it.

I mentioned a use for compute that your reply didn't cover: crypto mining. Do we have evidence that the floor on crypto is well below datacenter operating costs (across exploitative coins as well)? I vaguely remember a headline in this direction. Another use case I don't see drying up: cheating on essay assignments.
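
On the crypto-floor question, the comparison is basically daily mining revenue versus daily electricity cost per GPU. A minimal sketch follows, where the $1.50/day revenue figure and the electricity rates are hypothetical placeholders rather than claims about any real coin or card; only the 600 W draw is taken from the thread above.

```python
# Sketch of the "crypto floor" comparison: daily mining revenue per GPU
# versus its daily electricity cost, plus the break-even power price.
# ASSUMPTIONS: the $1.50/day revenue is a hypothetical placeholder, not a
# measurement of any real coin; the 600 W draw comes from the thread above.

def mining_margin_per_day(revenue_usd_per_day: float, gpu_watts: float,
                          usd_per_kwh: float) -> float:
    """Daily mining revenue minus daily electricity cost for one GPU."""
    power_cost = gpu_watts / 1000.0 * 24 * usd_per_kwh
    return revenue_usd_per_day - power_cost

revenue = 1.50   # hypothetical $/day earned per GPU
watts = 600.0    # per-GPU draw cited earlier in the thread

# Electricity price at which mining exactly covers the power bill.
breakeven = revenue / (watts / 1000.0 * 24)
print(f"Break-even electricity price: ${breakeven:.3f}/kWh")
print(f"Margin at $0.08/kWh: ${mining_margin_per_day(revenue, watts, 0.08):+.2f}/day")
print(f"Margin at $0.15/kWh: ${mining_margin_per_day(revenue, watts, 0.15):+.2f}/day")
```

Whether real-world numbers put datacenter-scale mining above or below that floor is exactly the empirical question; the sketch only shows what would need to be plugged in.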

More broadly, this model predicts that all compute avenues have a much lower payoff than datacenter operating costs. I think I'd need to see that checked against an exhaustive HPC application list. I know that weather forecasting uses about as much compute as AI on some supercomputing clusters.

Governments have already issued rather large grants to AI-driven academic projects. I suspect many of these are orders of magnitude larger than the size of academic AI 6 years ago. (I'll also quickly note that libraries are better than Google search has ever been for finding true facts, yet Google search has remained above library use throughout its existence.)

[–] Trainguyrom@reddthat.com 2 points 5 hours ago

Honestly, the questions you're posing require a level of market analysis that could fill an entire white paper and be sold for way more money than I want to think about. It's a level of market analysis I don't want to dive into. My gut instinct, having worked in the tech industry with datacenters and datacenter hardware at large companies, is that the AI industry will contract significantly when the bubble pops. I'm sure I could find real data to support this prediction, but the level of analysis and the hours of work that would require are simply more than an internet comment is worth.

You have factors including what hardware is being deployed to meet AI bubble demand, how the networking might be set up differently for AI compared to general GPU compute, who is deploying what hardware, what the baseline demand for GPU compute would be if you simulated no AI bubble, etc. It's super neat data analysis, but I ain't got the time nor the appetite for that right now.