restic restore --dry-run
Trying to actually restore is the best way to ensure the backup works. But it's annoying so I never do it.
I usually trust restic to do its job. Validating that files are there and are readable can be done with restic mount, and you've mentioned restic check.
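If it helps, here's roughly what those two look like with hypothetical paths (the mount exposes a browsable snapshots/latest tree):

# structural check of the repository (metadata only by default)
restic -r /path/to/repo --password-file /path/to/passfile check

# FUSE-mount the repo read-only and spot-check files under snapshots/latest
# (mount stays in the foreground; browse from another terminal)
mkdir -p /mnt/restic
restic -r /path/to/repo --password-file /path/to/passfile mount /mnt/restic
ls /mnt/restic/snapshots/latest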
The best way to ensure your data is safe is to do a second backup with another tool. And keep your keys safe and accessible: a remote backup is no use if the keys burned down with everything else.
@Xanza's suggestion is a good one. For me, it's sufficient to fuse mount the backup and check a few files. It's not comprehensive, but if a few files I know changed look good, I figure they all probably are.
I was thinking about restoring the backup to a temporary location and running diff on random files to check they match the source, but I don't know if this is redundant now.
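A rough sketch of that idea, with hypothetical paths, assuming the source data lives under /data and hasn't changed since the last snapshot:

# restore the latest snapshot into a scratch directory
restic -r /path/to/repo --password-file /path/to/passfile \
    restore latest --target /tmp/restic-verify

# compare 20 random source files byte-for-byte against the restored copies
# (restic recreates the absolute path under the target directory)
find /data -type f | shuf -n 20 | while read -r f; do
    diff -q "$f" "/tmp/restic-verify$f" || echo "MISMATCH: $f"
done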
That isn't as useful as you would think. If your computer fails, there are high odds you will restore to a fresh install of a newer OS and newer software/services versions, which means you really want/need to also test data/config migration.
OTOH, if you have backups, odds are the data is there even if you never tested them. Testing that you can restore is mostly about confirming you have everything backed up: your backups can pass all the validation, but if you accidentally configured them to only back up /tmp (or something else worthless), you may as well not have backups. So you should test a full restore just to make sure the data you want is all there. I generally trust that backup software can restore all the data you pointed it at, even if you didn't test it, but I don't trust that you (or I) configured it to back up the right things.
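Along those lines, one cheap sanity check (repo and paths below are hypothetical) is to list the latest snapshot and grep for directories you know must be in it, rather than trusting the backup job's configuration:

# list what was actually captured and confirm the important paths are present
restic -r /path/to/repo --password-file /path/to/passfile snapshots
restic -r /path/to/repo --password-file /path/to/passfile ls latest \
    | grep -q '^/etc/nginx' || echo "WARNING: /etc/nginx is not in the latest snapshot"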
I use Borg but every now and then I mount a backup and download a few files to make sure they work correctly.
I've so far only had to do this for real with my local ZFS snapshots, after messing up a config file or blowing away the wrong folder. The process to restore is essentially the same, except I'd mount the Borg repo instead of a local ZFS snapshot.
I trust the check: restic -r '/path/to/repo' --cache-dir '/path/to/cache' check --read-data-subset=2000M --password-file '/path/to/passfile' --verbose. The --read-data-subset option still does the structural integrity check while also reading back that amount of data. If I had more bandwidth, I'd check more.
When I set up a new repo, I restore some stuff to make sure it's there with restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' restore latest --target /tmp/restored --include '/some/folder/with/stuff'.
You could automate that and regularly make sure some essential-but-not-often-changing files match, by restoring and comparing them. I would do that if I weren't lazy, I guess, just to make sure I'm not missing some key-but-slowly-changing files. Slowly/not often changing because a diff would fail if the file changes hourly and you back up daily, etc.
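A sketch of what that automation could look like, with the repo, password file, and the list of "essential" files all hypothetical:

#!/bin/sh
# periodic spot check: restore a few rarely-changing files and diff them
REPO=/path/to/repo
PASS=/path/to/passfile
TARGET=/tmp/restic-spotcheck

rm -rf "$TARGET"
for f in /etc/fstab /etc/ssh/sshd_config /home/me/notes/important.txt; do
    restic -r "$REPO" --password-file "$PASS" \
        restore latest --target "$TARGET" --include "$f"
    diff -q "$f" "$TARGET$f" || echo "SPOT CHECK FAILED: $f"
done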
Or you could do as others have suggested and mount it locally and just traverse it to make sure some key stuff works and is there: sudo mkdir -p '/mnt/restic'; sudo restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' mount '/mnt/restic'.
I always use two different backup tools for my two backups, but I also do test restores.
Depends on what you're backing up. Is it configs for applications, images, video, etc? If it's application configs, you can set up those applications in a virtual machine and have a process run that starts the machine, restores the configs, and makes sure the applications start or whatever other tests you want. There are applications for doing that.
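As a hedged sketch of the config-restore idea (every name here, image, port, and path, is made up), using a throwaway container instead of a full VM:

# restore just the app's config from the latest snapshot
restic -r /path/to/repo --password-file /path/to/passfile \
    restore latest --target /tmp/app-test --include /srv/myapp/config

# start the app against the restored config and check it comes up
docker run --rm -d --name restore-test -p 8080:8080 \
    -v /tmp/app-test/srv/myapp/config:/config myapp/image:latest
sleep 10
curl -fsS http://localhost:8080/ >/dev/null && echo "app started with restored config"
docker rm -f restore-test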
If it's images or videos, you can create a script to randomly pick a few, restore them, and check the integrity of the files. Usually just a check of the file header (the first few bytes) will tell you whether it's actually an image or video, plus maybe a check on the file size to make sure it's not unreasonably small, like a video that's only 100 bytes or something.
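A minimal sketch of that, assuming the repo is already FUSE-mounted at /mnt/restic as shown above (the extensions and thresholds are just examples):

# sample 10 media files from the latest snapshot and sanity-check type and size
find /mnt/restic/snapshots/latest -type f \( -name '*.jpg' -o -name '*.mp4' \) \
    | shuf -n 10 | while read -r f; do
    size=$(stat -c %s "$f")
    [ "$size" -lt 1024 ] && echo "SUSPICIOUSLY SMALL: $f ($size bytes)"
    file --brief "$f" | grep -Eiq 'image|media|video' || echo "UNEXPECTED TYPE: $f"
done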
All this seems like overkill though in most scenarios.
Déjà Dup has auto-validation as well. But besides backups, I think everyone suggests using ZFS, which auto-heals bit rot. And don't trust unplugged SSDs; they can suffer bit rot quickly if stored in a hot location.
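Worth noting that ZFS only heals what it reads, so the usual companion to that advice is a periodic scrub (pool name is hypothetical):

zpool scrub tank        # read and verify every block, repairing from redundancy
zpool status -v tank    # check for CKSUM errors once the scrub finishes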