[–] j4k3@lemmy.world 57 points 3 days ago (20 children)

If China ever goes hardcore on open source across the board, for all hardware and software, it would absolutely crush the present Western hegemony. It would also be the ultimate moral-high-ground move.

[–] vext01@lemmy.sdf.org 20 points 3 days ago (15 children)

From what I've heard, these RISC-V chips have a long way to go on performance. Is that still true?

[–] j4k3@lemmy.world 13 points 3 days ago* (last edited 3 days ago) (9 children)

Yes and no. RISC-V has a long way to go to match x86 single-thread speeds. However, the future will belong to whatever single processor can handle all workloads. The current dual-processor split between GPU and CPU is a temporary hack. Around 8 years from now a new architecture will emerge as dominant: it takes roughly 10 years to go from idea to real silicon, and the problem has been obvious for 2 years already. The next architecture must be done from scratch, a redesign on a level very similar to the gap between RISC-V and x86 right now. So ultimately it is a no, because that redesign renders the present lead useless.

Present processors are power constrained by the L2-to-L1 cache bus width. If all of the bits on that bus go high at once, it pulls the whole core down. This is where things are optimised for high-speed single-thread operation, i.e. traditional code. Large math tensors need a wide bus to load and offload quickly, so the two are entirely incompatible.

Regardless of the merits of everyone running AI or not, in the data center business, where profit margins are very thin, anyone who can make a single processor that scales to handle both workloads well enough will win out in the long run. This dual-processor paradigm has already been tried and failed: in the 286-to-386 era, x86 required a second floating-point math unit for any advanced workload like CAD, and that dual-processor architecture was a flop. Everyone in hardware is aware of this history.

So why would anyone support a new grassroots proprietary hardware design for this new generation of hardware, one that requires a fortune in royalties, when a similar processor that is negligibly different at the same phase of development is a free and open instruction set architecture with no royalties? Plus, the IC designer is no longer locked into an ecosystem of vendor peripherals: anyone can design and sell little circuit blocks and on-chip peripherals, even proprietary ones, for use on any chip. This is basically true open-market capitalism for an ISA. It is a standardized framework for anyone to build on, instead of the notoriously authoritarian, oppressive, and anticompetitive Intel. The outcome of that set of constraints seems obvious to me.
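To make the bus-width point concrete, here is a back-of-envelope sketch in Python. Every number in it (the clock, the cache-line size, the tensor bandwidth target) is an illustrative assumption, not the spec of any real core:

```python
# Rough sketch of the bus-width argument. Every number below is an
# illustrative assumption, not a measurement of any real chip.

CLOCK_HZ = 2e9  # assume a 2 GHz core clock

def bus_width_bits(bandwidth_bytes_per_s, clock_hz=CLOCK_HZ):
    """Bus width (in bits) needed to sustain a bandwidth at a given clock."""
    bytes_per_cycle = bandwidth_bytes_per_s / clock_hz
    return int(bytes_per_cycle * 8)

# A latency-optimised CPU core moving one 64-byte cache line per cycle:
cpu_bw = 64 * CLOCK_HZ            # = 128 GB/s
# A tensor unit that must stream ~1 TB/s to stay busy (assumed figure):
tensor_bw = 1e12

print(f"CPU-style core: {bus_width_bits(cpu_bw)}-bit path ({cpu_bw/1e9:.0f} GB/s)")
print(f"tensor engine:  {bus_width_bits(tensor_bw)}-bit path ({tensor_bw/1e9:.0f} GB/s)")

# Dynamic power grows with how many of those lines toggle each cycle
# (roughly C*V^2*f per line), which is why driving every bit of a very
# wide bus high at once can pull the whole core down, as described above.
```

The gap between a 512-bit and a 4000-bit data path under one power budget is the incompatibility described above: a core can be optimised for one or the other, but not for both at once.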

[–] enumerator4829@sh.itjust.works 2 points 3 days ago (2 children)

I agree with you, mostly. Margins in the datacenter are thin for some players. Not Nvidia; they are at something like 60% pure profit per chip, even including software and R&D. That will have an effect on how we design stuff over the next few years.

I think we'll need both "GPU" and traditional CPUs for the foreseeable future: GPU-style for bandwidth- or compute-constrained workloads, CPU-style for latency-sensitive workloads and pointer chasing. That said, I do think we'll slap them both on top of the same memory, APU-style à la MI300A.
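As a toy sketch of those two workload shapes (hypothetical code, just to illustrate the distinction):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Bandwidth/compute-bound shape: every element is independent, so a wide
# machine (GPU or tensor unit) can stream the whole array in parallel.
data = rng.random(N)
total = data.sum()  # throughput limited by how fast bytes arrive

# Latency-bound shape: each load depends on the result of the previous
# one, so nothing overlaps and per-access latency dominates. A random
# permutation stands in for the pointers of a linked list here.
next_idx = rng.permutation(N)
idx = 0
for _ in range(100_000):
    idx = next_idx[idx]  # next address is known only after this load completes
```

The first shape wants raw bandwidth; the second wants low latency and deep caches. Putting both engines on one memory pool, as MI300A does, serves both without copying data back and forth.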

That is, as long as x86 has the single-threaded advantage, RISC-V won't take over that market; and as long as GPUs have higher bandwidth, RISC-V won't take over that market either.

Finally, I doubt we'll see a performant RISC-V chip from China in the next decade - they simply lack the EUV fabs. From outside of China, maybe, but the demand isn't nearly as large.

[–] j4k3@lemmy.world 1 points 3 days ago

OpenAI is already showing holes in the armor. Open source always wins in the long term. There are many attempts to limit RISC-V adoption, but if you look, even the old guard is placing chips on this board.

Having half a data center on a different architecture and load is untenable. Nvidia got lucky and is in a good position, but that will only last 6-8 years at most, and likely far less if China takes Taiwan and North Korea attacks South Korea at the same time. Nvidia has nothing without TSMC in Taiwan. That would leave only Intel, and they are a train wreck that relies on TSMC too. All of the CHIPS Act fabs will be trailing edge by the time they come online, so those won't save Nvidia either. This is what the US voted for: massive tariffs and WW3 by 2030.

It will end up just like with AI in China. They are more agile and capable than the West imagines, and they will pivot past the chip limitations. All of the American hegemony is built on layers upon layers of anticompetitive stagnation. Once those walls come down, the future will move more quickly. All of these US companies are traitors as far as I am concerned. They outsourced at the expense of their neighbors and country. There are hundreds of thousands of homeless people in the USA. We have neo-feudalism largely thanks to these shit companies. I hope they all crash and burn, and I will gladly buy Chinese.

Also, with the current posturing of the USA towards Europe, EUV may become much more available in China. The Chinese look a whole lot less like stupid fascist Nazis than the US does now. We are the ones creating massive human rights violations and burning down the world in rancid stupidity. There is no moral ground to stand on, so don't expect ASML and the Dutch to feel all warm and fuzzy about US loyalties.

[–] bruhduh@lemmy.world 1 points 2 days ago

AMD's APU approach is already bearing fruit: PlayStation, Xbox, Steam Deck, Strix Halo, and their Instinct datacenter cards all show that the one-chip approach is good, as you've said.
