[–] grue@lemmy.world 5 points 3 days ago (1 children)

I have a Proxmox server with a random assortment of hard drives and SSDs of various capacities {8TB, 2TB, 2TB, 240GB, 240GB}. I want to create a CephFS filesystem spanning them, using erasure-coded pools to maximize usable capacity (kind of like RAID 5, except without requiring same-sized drives). How do I configure my CRUSH map to accomplish this?
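For reference, this is roughly the shape of what I've pieced together from the docs so far, but I haven't actually run any of it, and all the profile/pool/path names are just placeholders:

```
# Untested sketch; k/m values would need tuning for 5 OSDs of very different sizes.

# EC profile: 2 data + 1 coding chunk, spread across individual OSDs.
# crush-failure-domain=osd is the CRUSH knob that matters on a single
# box with mixed disks (the default is per-host).
ceph osd erasure-code-profile set ec-21 k=2 m=1 crush-failure-domain=osd

# Erasure-coded data pool; CephFS needs overwrites enabled on it
ceph osd pool create cephfs_data_ec 32 erasure ec-21
ceph osd pool set cephfs_data_ec allow_ec_overwrites true

# Attach it to an existing CephFS as an extra data pool
# (metadata and the default data pool stay replicated)
ceph fs add_data_pool cephfs cephfs_data_ec

# Then point a directory at the EC pool via a file layout
setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/bulk
```

The part I'm still unsure about is how well CRUSH will actually use the 8TB drive when every placement group needs k+m distinct OSDs, which is why I'm asking.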

[–] moseschrute@lemmy.world 3 points 3 days ago (1 children)

Lol, you lost me there. I've read up on the various RAID configurations. I've heard about CephFS. I don't know much about it, but I get the sense it's the new kid on the block.

I actually have a RAID question for you. I want to set up a little RAID array starting with 2 mirrored drives and add more drives later, but it seems there is no easy way to migrate between RAID levels? Let's say I want to start with 2, then 3, then 4 drives as stuff fills up. I always want some level of redundancy, and I don't want to use any additional drives aside from the 2, 3, then 4 in the array. Is this possible? Either with RAID or with CephFS?
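For context, the closest thing I keep running into is mdadm's reshape support, something along these lines (completely untested on my end, and the device names are just placeholders):

```
# Untested sketch of growing a redundant md array from 2 to 3 drives.
# /dev/md0 and /dev/sd[bcd]1 are example names only.

# Start: two-drive mirror (RAID 1)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Later: convert the mirror to RAID 5, add a third drive, and reshape
mdadm --grow /dev/md0 --level=5
mdadm --add /dev/md0 /dev/sdd1
mdadm --grow /dev/md0 --raid-devices=3

# The filesystem on top still has to be grown separately afterwards
```

But I have no idea whether that's actually a sane way to do it in practice.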

[–] grue@lemmy.world 1 points 3 days ago (1 children)

Funny you should mention that, because it's what got me thinking about Ceph in the first place. My other Proxmox node has a 2-drive mirrored ZFS pool, and I went to add a third drive to it and realized that I'd have to move all the data off and rebuild it from scratch, so I started looking for other solutions.
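To illustrate what I ran into (pool and disk names made up): the only "add a drive" operation ZFS offers on a mirror just deepens the mirror, it doesn't add capacity, and there's no in-place conversion from a mirror vdev to raidz:

```
# Example only; pool/disk names are made up.

# Two-disk mirror today:
#   tank
#     mirror-0
#       ata-disk1
#       ata-disk2

# Attaching a third disk just makes a 3-way mirror
# (more redundancy, same usable capacity):
zpool attach tank ata-disk1 ata-disk3

# There's no in-place mirror -> raidz conversion, so getting raidz
# means destroying the pool and restoring the data.
```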

So yeah, I think Ceph can add drives to an existing cluster after the fact like that (in addition to not wasting the capacity of randomly assorted disks), but I haven't figured it out well enough yet to be sure.
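From what I've read so far, adding a disk later is supposed to just be a matter of creating another OSD and letting Ceph rebalance, something like this on a Proxmox node (again, untested by me; the device name is a placeholder):

```
# Untested; /dev/sdX is a placeholder for the new disk.

# Create an OSD on the new disk (Proxmox wrapper around ceph-volume)
pveceph osd create /dev/sdX

# Watch Ceph rebalance onto it and check per-OSD usage and weights
ceph -s
ceph osd df tree
```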

[–] moseschrute@lemmy.world 1 points 3 days ago

I was also totally mixing up Ceph with ZFS. Linux tech content mentions ZFS a lot; that's the source of most of my RAID knowledge lol