ZFS. It runs on whatever RAM you give it.
Take the ZFS plunge - My only real concern is the overhead
You shouldn't worry about ZFS overhead if you're planning to use mergerfs.
You can tune the memory usage of ZFS significantly. By default the ARC targets about half of your RAM, which is wasted resources on a home setup; you should be able to limit the ARC to 1-2 GB, maybe somewhat more depending on how you want to use it. It's done with sysctl or kernel module parameters.
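As a rough illustration (the 2 GiB value is just an example, check the docs for your platform):

```
# Linux (OpenZFS): cap the ARC at 2 GiB persistently via a module option
# in /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=2147483648

# ...or change it at runtime without a reboot
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max

# FreeBSD: the same knob is a sysctl (add it to /etc/sysctl.conf to persist)
sysctl vfs.zfs.arc_max=2147483648
```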
Today I have a huge bottleneck in unpacking and moving. I've got 1 Gbit fiber and can saturate it, so a complete ISO arrives in just a few minutes, but then it's another 30+ minutes before it's actually usable.
Are you doing this all manually or using the *arr suite? For me this process takes a minute or two, depending on the size of the files, with Proxmox and ZFS, but even previously on Windows 10 with SnapRAID it was quick.
The *arr suite and SABnzbd.
For your second scenario - yes, you can use md under bcache with no issues. It's more to configure, but once set up it has been solid. I actually do md/raid1 -> LUKS -> bcache -> btrfs layers for the SSD cache disks, while the data drives just use LUKS -> bcache -> btrfs. Keep in mind that with bcache, if you lose a cache disk you can't mount the backing devices by default, and of course if you're doing write-back caching then the array is also lost. With write-through caching you can force-detach the cache disk and mount the disks.
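Roughly, setting that stack up looks something like this; treat it as a sketch rather than a copy-paste recipe, and note the device names (/dev/sda and /dev/sdb for the SSDs, /dev/sdc for a data drive) are placeholders:

```
# 1. Mirror the two SSDs so losing one cache device doesn't take anything down
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# 2. LUKS on top of the mirror and on the data drive
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cache_crypt
cryptsetup luksFormat /dev/sdc
cryptsetup open /dev/sdc data_crypt

# 3. bcache: the data drive is the backing device, the mirrored SSDs are the cache set
make-bcache -B /dev/mapper/data_crypt
make-bcache -C /dev/mapper/cache_crypt
bcache-super-show /dev/mapper/cache_crypt | grep cset.uuid
echo <cset-uuid> > /sys/block/bcache0/bcache/attach

# 4. btrfs on the resulting cached device
mkfs.btrfs /dev/bcache0
```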
With write-back you'd only lose what was in cache, right? Not the entire array?
Bcache can't differentiate between data and metadata on the cache drive (it's block-level caching), so if something happens to a write-back cache device you lose data, and possibly the entire array. Personally, I wouldn't use bcache (or ZFS caching) without mirrored cache devices, to ensure resiliency of the array. I don't know if ZFS is smarter about this; presumably it can be, because it's in control of the raw disks. I just didn't want to deal with the out-of-tree ZFS modules.
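If it helps, the relevant knobs (per the kernel's bcache documentation, device names again placeholders) look roughly like this:

```
# Check or change the caching mode (writethrough is the safer choice here)
cat /sys/block/bcache0/bcache/cache_mode
echo writethrough > /sys/block/bcache0/bcache/cache_mode

# With writethrough, the backing data stays consistent if the cache device is gone;
# you can force the backing device to run without its cache
# (use the backing device's own path under /sys/block, shown here as sdc)
echo 1 > /sys/block/sdc/bcache/running
```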