enumerator4829

joined 1 month ago
[–] enumerator4829@sh.itjust.works 1 points 20 hours ago

Exactly: if you need shelf life, you use tape. Shelf life isn’t really a consideration for hard drives or SSDs in real-life scenarios.

See for example the storage systems from Vast or Pure. You can increase the compression window size and dedup far smaller blocks. Fast random IO also lets you do that ”online” in the background. In the case of Vast, you even have multiple readers on the same SSD doing that compression and dedup.

So the feature itself isn’t that special, but what you can do with it in practice changes drastically (toy sketch below).
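To make the block-size point concrete, here is a toy sketch of block-level dedup. It is not how Vast or Pure actually implement it; the block sizes, the SHA-256 hashing and the synthetic data are just assumptions for illustration. The takeaway: smaller blocks find more duplicates, but they only pay off on media that can handle lots of small random IO.

```python
# Toy block-level dedup: hash fixed-size blocks and count how many are unique.
# Block sizes and test data are arbitrary; this is an illustration, not a design.
import hashlib

def dedup_ratio(data: bytes, block_size: int) -> float:
    """Split data into fixed-size blocks and count unique blocks by hash."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)  # >1.0 means dedup is saving space

if __name__ == "__main__":
    sample = (b"A" * 4096 + b"B" * 4096) * 256  # deliberately repetitive data
    for bs in (512, 4096, 65536):
        print(f"block size {bs:>6}: dedup ratio {dedup_ratio(sample, bs):.0f}x")
```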

[–] enumerator4829@sh.itjust.works 1 points 2 days ago (2 children)

The flaw with hard drives shows up in large pools. Rebuild speed is simply too slow when a drive fails, unless you build huge pools and spread the rebuild across them. So you need additional drives for more parity just to survive the rebuild window.
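Back-of-the-envelope, assuming the pool can resilver at the drive’s full sustained speed (optimistic) and using round illustrative throughput numbers rather than vendor specs:

```python
# Minimum rebuild time for one failed drive, assuming resilvering runs at the
# drive's full sustained speed. Real rebuilds are slower; numbers are illustrative.

def rebuild_hours(capacity_tb: float, mb_per_s: float) -> float:
    return capacity_tb * 1e6 / mb_per_s / 3600  # TB -> MB, then seconds -> hours

for cap in (8, 20, 30):
    print(f"{cap:2d} TB HDD @  250 MB/s: ~{rebuild_hours(cap, 250):5.1f} h")
print(f"61 TB SSD @ 3000 MB/s: ~{rebuild_hours(61, 3000):5.1f} h")
```

A rebuild window of a day or more per failed hard drive is exactly why you end up buying the extra parity.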

I don’t know who cares about shelf life. Drives spin for their whole service life, which is 5-10 years. Use M-DISC or something if you want shelf life.

[–] enumerator4829@sh.itjust.works 2 points 3 days ago (2 children)

I agree with you, mostly. Margins in the datacenter are thin for some players. Not Nvidia though, they are at something like 60% pure profit per chip, even including software and R&D. That will have an effect on how we design stuff in the next few years.

I think we’ll need both ”GPUs” and traditional CPUs for the foreseeable future: GPU-style for bandwidth- or compute-constrained workloads, CPU-style for latency-sensitive workloads and pointer chasing (toy illustration of that split at the end of this comment). Now, I do think we’ll slap them both on top of the same memory, APU-style à la MI300A.

That is, as long as x86 has the single-threaded advantage, RISC-V won’t take over that market, and as long as GPUs have higher bandwidth, RISC-V won’t take over that market either.

Finally, I doubt we’ll see a performant RISC-V chip from China in the next decade - they simply lack the EUV fabs. From outside of China, maybe, but the demand isn’t nearly as large.
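On the bandwidth-vs-latency split above, a toy illustration. In CPython the interpreter overhead dominates, so treat this as the shape of the problem rather than a benchmark; the array size and random permutation are arbitrary assumptions.

```python
# Streaming vs pointer chasing over the same data: the former is sequential and
# prefetch-friendly, the latter serialises on the latency of each dependent load.
import random
import time

N = 1_000_000
data = list(range(N))

t0 = time.perf_counter()
total = sum(data)                  # sequential, predictable access pattern
stream_s = time.perf_counter() - t0

nxt = list(range(N))
random.shuffle(nxt)                # random permutation acts as the "pointers"
t0 = time.perf_counter()
i = 0
for _ in range(N):
    i = nxt[i]                     # each load depends on the previous result
chase_s = time.perf_counter() - t0

print(f"streaming: {stream_s:.3f} s, pointer chase: {chase_s:.3f} s (sum={total}, end={i})")
```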

Not economical. Storage is already made on far larger fab nodes than CPUs and other components. This is a case where higher density actually can be cheaper. ”Mature” nodes are most likely cheaper than the ”ancient” process nodes simply due to age and efficiency. (See also the disaster in the auto industry during COVID: carmakers stopped ordering parts made on ancient process nodes, so those nodes were shut down permanently due to cost. After COVID, fun times for the automakers that had to modernise.)

Go compare prices: new NVMe M.2 will most likely be cheaper than SATA 2.5” per TB. The extra plastic shell, extra shipping volume and the SATA controller make up the difference. 3.5” would make it even worse. In the datacenter we are moving towards ”rulers”, with 61 TB available now and probably 120 TB soon. Now, these are expensive, but the cost per TB is actually not that horrible compared to consumer drives.
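The per-TB arithmetic is trivial, for what it’s worth. The prices below are placeholders, not market data; plug in real listings before drawing conclusions.

```python
# $/TB comparison. All prices are hypothetical placeholders; the point is only
# that the shell/controller overhead is fixed while capacity scales.
drives = {
    "NVMe M.2, 2 TB (hypothetical price)": (2.0, 110.0),
    "SATA 2.5in SSD, 2 TB (hypothetical)": (2.0, 130.0),
    "61 TB ruler SSD (hypothetical)":      (61.0, 7000.0),
}
for name, (tb, price) in drives.items():
    print(f"{name:40s} {price / tb:7.1f} $/TB")
```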

[–] enumerator4829@sh.itjust.works 5 points 3 days ago (8 children)

Tape will survive, SSDs will survive. Spinning rust will die.

[–] enumerator4829@sh.itjust.works 12 points 3 days ago (3 children)

Nope. Larger chips mean lower yields in the fab, which means more expensive. This is why we have chiplets in our CPUs nowadays. Production cost of a chip is superlinear in die size.
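The standard first-order way to see it is a Poisson yield model, yield = exp(-D·A). Wafer cost and defect density below are made-up round numbers; only the shape of the curve matters.

```python
# Poisson yield model: cost per *good* die grows superlinearly with die area,
# since bigger dies mean both fewer candidates per wafer and worse yield.
# Wafer cost and defect density are hypothetical round numbers.
import math

WAFER_AREA_MM2 = 70_000    # roughly a 300 mm wafer, ignoring edge losses
WAFER_COST_USD = 10_000    # hypothetical
DEFECTS_PER_MM2 = 0.001    # hypothetical

def cost_per_good_die(area_mm2: float) -> float:
    dies_per_wafer = WAFER_AREA_MM2 / area_mm2
    yield_fraction = math.exp(-DEFECTS_PER_MM2 * area_mm2)
    return WAFER_COST_USD / (dies_per_wafer * yield_fraction)

for area in (100, 200, 400, 800):
    print(f"{area:4d} mm^2 die: ~${cost_per_good_die(area):7.2f} per good die")
```

Doubling the die area more than doubles the cost per good die, which is the whole economic argument for chiplets.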

[–] enumerator4829@sh.itjust.works 6 points 3 days ago (6 children)

It’s not the packaging that costs money or limits us; it’s the chips themselves. If we crammed a 3.5” form factor full of flash, it would be far outside the budgets of mortals.

[–] enumerator4829@sh.itjust.works 10 points 3 days ago (8 children)

Why? We can already cram 61 TB into a slightly overgrown 2.5” drive, and something like half a PB per rack unit.
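Rough density math. The bay counts are typical-ish assumptions, not any specific vendor’s chassis.

```python
# Rack-unit density with 61 TB SSDs. Bay counts per chassis are assumptions.
def pb_per_rack_unit(bays: int, tb_per_drive: float, rack_units: int) -> float:
    return bays * tb_per_drive / 1000 / rack_units

print(f"2U with 24x U.2  @ 61 TB: {pb_per_rack_unit(24, 61, 2):.2f} PB per U")
print(f"1U with 32x E1.L @ 61 TB: {pb_per_rack_unit(32, 61, 1):.2f} PB per U")
```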

Unless you have actual tooling (e.g. Red Hat errata plus some service on top of that), just don’t even try.

Stop downloading random shit from Docker Hub and GitHub. Pick a distro that has whatever you need packaged, install from the repositories, and turn on automatic updates. If you need stuff outside the repos, use first-party packages and turn on auto updates. If there aren’t any decent packages, just don’t do it. There is a reason people pay Red Hat a shitton of money, and that’s because they deal with much of this bullshit for you.

At home, I simply won’t install anything unless I can enable automatic updates. NixOS solves much of it. Twice a year I need to bump the distro version, bump the Nextcloud release, and deal with deprecations, and that’s it.

I also highly recommend turning on automatic periodic reboots, so you actually get new kernels running…

Just going off the marketing here:

Git server with CI/CD, kanban, and packages.

From the looks of it, they also seem to bundle the VS Code server and a bunch of other stuff. I’m actually kinda surprised they do it with only 1 GB of RAM.

[–] enumerator4829@sh.itjust.works 3 points 1 week ago (2 children)

Not to be that guy, but 12% of 8 GB isn’t even close to ”heavy as fuck” for a CI/CD and collaboration suite that seems aimed at enterprise users.

You can also tweak how much memory the JVM grabs with the heap flags, e.g. ’-Xms100m’ for the initial heap and ’-Xmx’ to cap the maximum. Any defaults are most likely aimed at much larger deployments than yours.

But yes, Java is a disease.
