this post was submitted on 29 Mar 2025
213 points (94.9% liked)

Technology
top 37 comments
[–] Grandwolf319@sh.itjust.works 62 points 4 days ago (1 children)

A post about technology on the technology community?

What year is this?

Yeah, I didn't see Elon Musk, Trump, or AI mentioned at all. What's happening?

[–] commander@lemmy.world 54 points 4 days ago (2 children)

I'm sure there are data science/data center people who can appreciate this. For me, all I'm thinking about is how hot it runs and how much I wish 20TB SSDs would soon be priced like HDDs.

[–] jlh@lemmy.jlh.name 12 points 4 days ago (3 children)

Nah, datacenters care more about capacity and IOPS; throughput is meaningless, since you'll always be bottlenecked by the network.

[–] aleq@lemmy.world 10 points 4 days ago (1 children)

Not necessarily if you run workloads within the datacenter? Surely that's not that rare, even if they're mostly for hosting web services.

[–] jlh@lemmy.jlh.name 7 points 4 days ago* (last edited 4 days ago) (1 children)

Yeah, but 15 GB/s is 120 Gbit/s. Your storage nodes are going to need more than 2x800 Gbit/s if you want to take advantage of the bandwidth once you start putting in more than 14 drives. Also, those 14 drives probably won't have more than 30M IOPS. Your typical 2U storage node is going to have something like 24 drives, so you'll probably be bottlenecked by bandwidth or IOPS whether you put in 15 GB/s drives or 7 GB/s drives.

Maybe it makes sense these days, I haven't seen any big storage servers myself, I'm usually working with cloud or lab environments.
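The back-of-the-envelope math in that comment can be sketched out. All figures here are the comment's own assumptions (15 GB/s per drive, 2x800 Gbit/s NICs), not measurements:

```python
# Rough check of the "network bottleneck" argument: how many 15 GB/s
# drives does it take to saturate a storage node's network links?
# Assumed figures come from the comment above, not from benchmarks.

GBIT_PER_GBYTE = 8

drive_seq_gbs = 15                      # sequential throughput per drive, GB/s
nic_gbit = 2 * 800                      # total NIC bandwidth, Gbit/s
nic_gbs = nic_gbit / GBIT_PER_GBYTE     # 200 GB/s

drives_to_saturate = nic_gbs / drive_seq_gbs
print(f"NIC bandwidth: {nic_gbs:.0f} GB/s")
print(f"Drives to saturate it: {drives_to_saturate:.1f}")
# ~13.3 drives saturate the network, so in a 24-drive 2U node the
# per-drive sequential speed stops mattering well before the node is full.
```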

[–] Aceticon@lemmy.dbzer0.com 3 points 4 days ago

If what you're doing is database queries on large datasets, the network speed is not even close to the bottleneck unless you have a really dumbly partitioned cluster (in which case you need to fire your systems designer and your DBA).

There are more kinds of loads than just serving static data over a network.

[–] Valmond@lemmy.world 6 points 4 days ago (1 children)
[–] Albbi@lemmy.ca 2 points 4 days ago (1 children)

I work in bioinformatics. The faster the drive the better! Some of my recent jobs were running poorly optimized code that would turn 1 TB of data into 10 TB of output. So painful to run with 36 replicates.

[–] Valmond@lemmy.world 3 points 4 days ago

Are you hiring ^^ ?

Love that kind of stuff.

[–] randombullet@programming.dev 1 points 4 days ago

A lot are moving to software-defined networking, which runs at RAM speeds.

But typically responsiveness is quite important in a virtualized environment.

InfiniBand could theoretically run at 2400 Gbps, which is 300 GB/s.

[–] kkj@lemmy.dbzer0.com 3 points 4 days ago (1 children)

Agreed. I'd happily settle for 1GB/s, maybe even less, if I could get the random seek times, power usage, durability, and density of SSDs without paying through the nose.

[–] commander@lemmy.world 2 points 3 days ago

I'd be more than happy with 1 GB/s drives for storage. I'd be happy with SATA3 SSD speeds. I'd be happy if they were still sized like a 2.5" drive. USB4 ports go up to 80 Gb/s; I'd be happy with an external drive bay with each slot doing 1 GB/s.

[–] Thrashy@lemmy.world 29 points 4 days ago* (last edited 4 days ago) (4 children)

The trouble with ridiculous R/W numbers like these is not that there's no theoretical benefit to faster storage; it's that the quoted numbers are always for sequential access, whereas most desktop workloads are much closer to random access, which flash memory kinda sucks at. Even really good SSDs only deliver ~100 MB/s in pure random access scenarios. This is why you don't really feel any difference between a decent PCIe 3.0 M.2 drive and one of these insane-o PCIe 5.0 drives, unless you're doing a lot of bulk copying of large files on a regular basis.
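The gap between sequential and random throughput falls straight out of the IOPS arithmetic. A quick sketch, using an illustrative (assumed, not benchmarked) queue-depth-1 IOPS figure for a fast consumer NVMe drive:

```python
# Converting random-access IOPS into throughput for 4 KiB reads, to show
# why a "15 GB/s" drive can still land near ~100 MB/s on random workloads.

BLOCK_KIB = 4

def random_throughput_mbs(iops: int, block_kib: int = BLOCK_KIB) -> float:
    """Throughput in MB/s for random I/O at the given IOPS."""
    return iops * block_kib / 1024

# ~25k IOPS at queue depth 1 is an assumed, plausible figure for a fast
# NVMe SSD -- nowhere near enough to approach its sequential spec.
print(f"{random_throughput_mbs(25_000):.0f} MB/s")
```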

It's also why Intel Optane drives became the steal of the century when they went on clearance after Intel abandoned the tech. Optane is basically as fast in random access as in sequential access, which means that in some scenarios even a PCIe 3.0 Optane drive can feel much, much snappier than a PCIe 4.0 or 5.0 SSD that looks faster on paper.

[–] Gg901@lemmy.world 12 points 4 days ago (2 children)

Why was Optane so good with random access? Why did Intel abandon the tech?

[–] Welp_im_damned@lemdro.id 2 points 4 days ago

Intel went broke and had to cut it.

[–] rice@lemmy.org 2 points 4 days ago* (last edited 4 days ago)

It didn't sell well. I assume if they'd been able to combine it with today's need for VRAM on GPUs for AI, they would have sold a bunch. I'm surprised we don't see a "PCIe RAM expansion pack" for Nvidia's GPUs yet.

All of this is a lot easier to create than the software needed to take advantage of it.

[–] kkj@lemmy.dbzer0.com 12 points 4 days ago (1 children)

> which flash memory kinda sucks at.

Au contraire, flash is amazing at random R/W compared to all previous non-volatile technologies. The fastest hard drives can do what, 4 MB/s with 4 KB sectors, assuming a quarter rotation per random seek? And that's still fantastic compared to optical media, which in turn is way better than tape.

Obviously, volatile memory like SDRAM puts it to shame, but I'm a pretty big fan of being able to reboot.
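That hard drive estimate checks out if you assume a 15,000 rpm enterprise spindle (an assumption here; the comment doesn't name a drive):

```python
# Sanity check of the "~4 MB/s random" figure for the fastest hard drives,
# under the comment's stated model: one quarter rotation per random access.

RPM = 15_000                        # assumed fastest common spindle speed
SECTOR_BYTES = 4096                 # 4 KB sectors, as in the comment

rev_time_s = 60 / RPM               # one full rotation: 4 ms
access_time_s = rev_time_s / 4      # quarter rotation: 1 ms per random op
iops = 1 / access_time_s            # ~1000 random ops per second
throughput_mbs = iops * SECTOR_BYTES / 1e6

print(f"{throughput_mbs:.1f} MB/s")   # ~4.1 MB/s, matching the estimate
```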

[–] Thrashy@lemmy.world 2 points 4 days ago

Fair point. My thrust was more about why things like system boot times and software launch speeds don't benefit as much as they seem like they should when moving from, say, a good SATA SSD (peak R/W speed: 600 MB/s) to a fast M.2 drive with listed speeds 20+ times higher: the QD1 performance of that M.2 drive might only be 3 or 4 times better than the SATA drive's. Both are a big step up from spinning media, but the gap between the two in random read speed isn't big enough to make a huge subjective difference in many desktop use cases.
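The spec-sheet speedup versus the felt speedup can be put side by side. The QD1 figures below are rough assumptions chosen to match the "3 or 4 times" claim, not measured values:

```python
# Spec-sheet vs. queue-depth-1 speedup for a SATA SSD and a fast M.2 NVMe
# drive. All numbers are illustrative assumptions, not benchmark results.

sata = {"seq_mbs": 600,    "qd1_random_mbs": 25}
nvme = {"seq_mbs": 12_000, "qd1_random_mbs": 90}

seq_speedup = nvme["seq_mbs"] / sata["seq_mbs"]
qd1_speedup = nvme["qd1_random_mbs"] / sata["qd1_random_mbs"]

print(f"Sequential speedup on paper: {seq_speedup:.0f}x")    # 20x
print(f"QD1 random speedup in practice: {qd1_speedup:.1f}x") # 3.6x
# The second number is what boot times and app launches mostly feel.
```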

[–] Eideen@lemmy.world 3 points 4 days ago

Agreed, 1 lane of PCIe 4.0 per M.2 SSD is enough.

Give me more slots instead.

[–] SharkAttak@kbin.melroy.org 2 points 4 days ago

Not to forget that I'd be very cautious about the stratospheric claims of a never-before-heard-of Chinese manufacturer...

[–] OmegaLemmy@discuss.online 9 points 4 days ago (1 children)

Ah. It's... six times faster than my SSD that was already fast. This runs faster than some RAM. God damn.

[–] CheeseNoodle@lemmy.world 3 points 3 days ago

Which is the ultimate goal, I think: if your main storage is already as fast as RAM, then you just don't need RAM anymore, and in most cases you can't run out of memory, since the whole program is functionally already loaded.

[–] kamen@lemmy.world 8 points 4 days ago (1 children)

IMO another example of pushing numbers ahead of what's actually needed, and benefitting manufacturers way more than the end user. Get this for bragging rights? Sure, you do you. Some server/enterprise niche use case? Maybe. But I'm sure that for 90% of people, including even those with a bit more demanding storage requirements, a PCIe 4 NVMe drive is still plenty in terms of throughput. At the same time SSD prices have been hovering around the same point for the past 3-4-5 years, and there hasn't been significant development in capacity - 8 TB models are still rare and disproportionately expensive, almost exotic. I personally would be much more excited to see a cool, efficient and reasonably priced 8/16 TB PCIe 4 drive than a pointlessly fast 1/2/4 TB PCIe 5.

[–] FooBarrington@lemmy.world 8 points 4 days ago (1 children)

I never understood this kind of objection. You yourself state that maybe 10% of users can find some good use for this - and that means we should stop developing the technology until some arbitrary, higher threshold is met? 10% of users is an incredibly large number! Why is that too little for this development to make sense?

[–] kamen@lemmy.world 4 points 4 days ago (1 children)

I'm not saying "don't make progress", I'm saying "try to make progress across the board".

[–] FooBarrington@lemmy.world 5 points 4 days ago

That's not how R&D works. It's really rare to have "progress across the board", usually you have incremental improvements in specific areas that come together to an across-the-board improvement.

So we'd be getting improvements slower since there's much less profit from individual advancements, as they can't be released. What's the advantage here?

[–] BlackLaZoR@fedia.io 2 points 4 days ago

How many IOPS?

[–] shortwavesurfer@lemmy.zip 2 points 4 days ago* (last edited 4 days ago) (6 children)

I wonder why they're not using TB/s like 14.9TB/s

Edit: GB/s

[–] pogodem0n@lemmy.world 13 points 4 days ago (1 children)

Because those are megabytes, not gigabytes

[–] shortwavesurfer@lemmy.zip 1 points 4 days ago

Oh good point. 14.9GB/s

[–] real_squids@sopuli.xyz 6 points 4 days ago

Probably a holdover from the SATA days, or simply because it's nice to show the number doubling into the tens of thousands.

[–] kamen@lemmy.world 5 points 4 days ago (1 children)

Assuming you meant GB/s, not TB/s, I think it's for the sake of convenience when doing comparisons - there are still SATA SSDs around and in terms of sequential reads and writes those top out at what the interface allows, i.e. 500-550 MB/s.

[–] shortwavesurfer@lemmy.zip 2 points 4 days ago

Yeah, I meant GB/s. Thanks for pointing that out.

[–] SharkAttak@kbin.melroy.org 2 points 4 days ago

Because bigger number better.

That's basically how all storage speeds are quoted. HDDs are around 300 MB/s, current NVMe drives are around 7000 MB/s, etc. Keeping everything on the same scale makes comparison easier.

[–] SchmidtGenetics@lemmy.world -2 points 4 days ago

So the computer-illiterate don't think it's a smaller number.