this post was submitted on 19 Mar 2025
50 points (91.7% liked)

Linux

I recently implemented a backup workflow for myself. I heavily use restic for desktop backups and for a full system backup of my local server. It works amazingly well: I always have a versioned backup without a lot of redundant data, and it is fast, encrypted, and compressed.
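For anyone curious what a restic cycle looks like, here is a minimal sketch against a throwaway repository. The paths and the retention policy are examples, not my actual setup, and you should use a password file rather than an environment variable in practice; the wrapper just prints each command if restic isn't installed.

```shell
#!/bin/sh
set -eu
# Sketch of a restic backup cycle against a throwaway repository.
# Paths and retention policy are placeholders.
demo=$(mktemp -d)
mkdir -p "$demo/data" && echo "hello" > "$demo/data/file.txt"
export RESTIC_REPOSITORY="$demo/repo"
export RESTIC_PASSWORD=demo-only   # use a password file in real setups

run() {
    # execute if restic is available, otherwise just show the command
    if command -v restic >/dev/null 2>&1; then "$@"; else echo "would run: $*"; fi
}

run restic init                     # once per repository
run restic backup "$demo/data"      # encrypted, deduplicated snapshot
run restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune
run restic snapshots                # list what the repository holds
```

The `forget --prune` step is what keeps the repository from growing without bound while still retaining a versioned history.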

But I wondered, how do you guys do your backups? What software do you use? How often do you do them and what workflow do you use for it?

[–] zeca@lemmy.eco.br 7 points 4 days ago

I do backups of my home folder with Vorta, which uses borg on the backend. I never tried restic, but borg is the first incremental backup utility I tried that doesn't increase the backup size when I move or rename a file. I was using backintime before to back up 500GB onto a 750GB drive, and if I moved 300GB to a different folder it would try to copy those 300GB again onto the backup drive and fail for lack of storage, while borg handles it beautifully.
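That move/rename behavior falls out of content-addressed deduplication: chunks are stored under a hash of their content, so a renamed or moved file contributes no new chunks. A toy illustration of the idea (borg actually uses content-defined chunking and a much richer index; fixed-size chunks here are just for demonstration):

```python
import hashlib
import os

CHUNK_SIZE = 4096

def backup(store, data):
    """Split data into chunks, store each under its content hash,
    and return the list of chunk ids that make up this file."""
    ids = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        cid = hashlib.sha256(chunk).hexdigest()
        store.setdefault(cid, chunk)  # an already-seen chunk costs nothing
        ids.append(cid)
    return ids

store = {}
content = os.urandom(300_000)   # stand-in for a large file
backup(store, content)          # first backup stores all chunks
before = len(store)
backup(store, content)          # "moved" file: same bytes, different path
assert len(store) == before     # no new chunks were stored
```

A tool that tracks files by path instead of content (like backintime's rsync-based approach) has no way to notice that the "new" file is made of bytes it already has.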

As an offsite solution, I use syncthing to mirror my files to a PC at my father's house that is turned on just once in a while to save power and disk longevity.

[–] Strit@lemmy.linuxuserspace.show 7 points 4 days ago (1 children)

My systems are all on btrfs, so I make use of subvolumes and use btrbk to back up snapshots to other locations.

[–] a14o@feddit.org 2 points 4 days ago

Same! This works really well.

[–] poinck@lemm.ee 3 points 4 days ago (1 children)

This looks a bit like borgbackup. It is also versioned, stores everything deduplicated, supports encryption, and can be mounted using FUSE.

[–] Zenlix@lemm.ee 5 points 4 days ago (2 children)

Thanks for your hint towards borgbackup.

After reading the Quick Start of Borg Backup, they look very similar. But as far as I can tell, borg's encryption and compression are optional, while restic always encrypts and compresses. You can mount your backups in restic too. It also seems that restic supports more repository locations, such as several cloud storages and a special HTTP server.

I also noticed that borg is mainly written in Python while restic is written in Go. Based on the languages, I assume restic is a bit faster, though I have not tested that.

[–] drspod@lemmy.ml 2 points 4 days ago

It was a while ago that I compared them, so this may have changed, but one of the main differences I saw was that borg had to back up over SSH, while restic had storage backends for many different storage methods and APIs.

[–] ferric_carcinization@lemmy.ml 1 points 4 days ago

I haven't used either, much less benchmarked them, but the performance differences should be negligible due to the IO-bound nature of the work. Even with compression & encryption, it's likely that either the language is fast enough or that it's implemented in a fast language.

Still, I wouldn't call the choice of language insignificant. IIRC, Go is statically typed while Python isn't. Even if type errors are rare, I would rather trust software written to be immune to them. (Same with memory safety, but both languages use garbage collection, so it's not really relevant in this case.)

Of course, I could be wrong. Maybe one of the tools cannot fully utilize the network or disk. Perhaps one of them uses multithreaded compression while the other doesn't. Architectural decisions made early on could also cause performance problems. I'd just rather not assume any noticeable performance differences caused by the programming language in this case.

Sorry for the rant, this ended up being a little longer than I expected.

Also, Rust rewrite when? :P

[–] data1701d@startrek.website 3 points 4 days ago

Borg Backup, whenever I feel like it - usually monthly.

[–] blade_barrier@lemmy.ml 3 points 4 days ago

Since most of the machines I need to back up are VMs, I do it by means of the hypervisor. I'd use borg scheduled in crontab for physical ones.

[–] Vintor@lemm.ee 3 points 4 days ago* (last edited 4 days ago) (3 children)

I've found that the easiest and most effective way to back up is with an rsync cron job. It's super easy to set up (I had no prior experience with either rsync or cron and it took me 10 minutes) and to configure. The only drawback is that it doesn't create differential backups, but the full task takes less than a minute every day, so I don't consider that a problem. But do note that I only back up my home folder, not the full system.

For reference, this is the full line I use: rsync -rau --delete --exclude-from='/home//.rsync-exclude' /home/ /mnt/Data/Safety/rsync-myhome

".rsync-exclude" is a file that lists all the files and directories I don't want to back up, such as temp or cache folders.
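For completeness, wiring a line like that into cron is a single crontab entry; the schedule below is just an example, not the poster's:

```
# m h dom mon dow  command  -- run the mirror every day at 03:00
0 3 * * * rsync -rau --delete --exclude-from="$HOME/.rsync-exclude" "$HOME"/ /mnt/Data/Safety/rsync-myhome
```

Edit the table with `crontab -e`; cron runs the job with the user's own environment, so $HOME resolves correctly.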

(Edit: two stupid errors.)

[–] dihutenosa@lemm.ee 4 points 4 days ago

Rsync can do incremental backups with a command-line switch and some symlink juggling. I'm using it to back up my self-hosted stuff.

[–] atzanteol@sh.itjust.works 3 points 4 days ago

You might be interested in "rsnapshot" which uses rsync and manages daily, monthly, etc. snapshots.

[–] everett@lemmy.ml 2 points 4 days ago (1 children)

only drawback is that it doesn't create differential backups

This is a big drawback because even if you don't need to keep old versions of files, you could be replicating silent disk corruption to your backup.

[–] suicidaleggroll@lemm.ee 2 points 4 days ago* (last edited 4 days ago) (3 children)

It’s not a drawback because rsync has supported incremental versioned backups for over a decade, you just have to use the --link-dest flag and add a couple lines of code around it for management.
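For the curious, the "couple lines of code" can be sketched roughly like this; the paths and the timestamp naming scheme are made up for illustration:

```shell
#!/bin/sh
set -eu
# Versioned rsync backups: each run creates a new snapshot directory,
# hard-linking files unchanged since the previous run via --link-dest.
snapshot() {
    src=$1 dest=$2
    stamp=$(date +%Y-%m-%d-%H%M%S)
    mkdir -p "$dest"
    if [ -e "$dest/latest" ]; then
        rsync -a --delete --link-dest="$dest/latest" "$src" "$dest/$stamp"
    else
        rsync -a --delete "$src" "$dest/$stamp"
    fi
    ln -snf "$stamp" "$dest/latest"   # point "latest" at the new snapshot
}
```

Every snapshot directory looks like a full copy, but unchanged files share disk space through hard links, and pruning old versions is just rm -rf on the directories you no longer want.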

[–] hallettj@leminal.space 3 points 4 days ago (1 children)

My conclusion after researching this a while ago is that the good options are Borg and Restic. Both give you incremental backups with cheap point-in-time snapshots. They are quite similar to each other, and I don't know of a compelling reason to pick one over the other.

[–] Zenlix@lemm.ee 2 points 4 days ago

As far as I know, restic is not incremental by the strict definition: every snapshot behaves like a full backup, but deduplication means only new data is actually stored, so it's effectively a mix of full and incremental backup.

[–] hamburger@discuss.tchncs.de 2 points 3 days ago
  • Offline Backup on 2 separate HDD/SSD
  • Backup on HDD within my desktop pc
  • Backup offsite with restic to Hetzner Storage Box
[–] Gieselbrecht@feddit.org 2 points 4 days ago (3 children)

I'm curious, is there a reason why no one uses deja-dup? I use it with an external SSD on Ubuntu and (recently) Mint, where it comes pre-installed, and did not encounter problems.

[–] mazzilius_marsti@lemmy.world 1 points 4 days ago (1 children)

What do you backup with dejadup? Everything under /home?

[–] Gieselbrecht@feddit.org 1 points 3 days ago

Mostly, with some folders excluded (e.g. my Nextcloud folder)

[–] beeng@discuss.tchncs.de 2 points 4 days ago* (last edited 4 days ago)

Borg to a NAS.

500GB of that NAS is "special", so I then rsync that to an old 500GB laptop HDD, which is duplicated again to another old 500GB laptop HDD.

The same 500GB is also rsync'd to a cloud server.

[–] suicidaleggroll@lemm.ee 2 points 4 days ago* (last edited 4 days ago)

My KVM hosts use “virsh backup begin” to make full backups nightly.

All machines, including the KVM hosts and laptops, use rsync with --link-dest to create daily incremental versioned backups on my main backup server.

The main backup server pushes client-side encrypted backups which include the latest daily snapshot for every system to rsync.net via Borg.

I also have 2 DASs with 2 22TB encrypted drives in each. One of these is plugged into the backup server while the other one sits powered off in a drawer in my desk at work. The main backup server pushes all backups to this DAS weekly and I swap the two DASs ~monthly so the one in my desk at work is never more than a month or so out of date.

[–] MentalEdge@sopuli.xyz 2 points 4 days ago

I recently switched to Kopia for my offsite backup solution.

It's apparently one of the faster options, and it can be set up so that the files of the differential backups are handled by a repository server on the offsite end, so file management doesn't need to happen over the network at a snail's pace.

The result is a way to maintain frequent full backups of my nextcloud instance, with almost no downtime.

Nextcloud only goes into maintenance mode for the duration of a postgres database dump, after which the actual filesystem backup runs from a temporary btrfs snapshot containing the frozen filesystem at the time of the database dump.
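A rough outline of that sequence might look like the following. All paths, the database name, and the occ invocation are assumptions about a typical Nextcloud-on-postgres setup, not the poster's exact commands; this version only prints each step so the sketch stays safe to run anywhere.

```shell
#!/bin/sh
# Print-only sketch: replace `echo "+ $*"` with "$@" to actually execute.
run() { echo "+ $*"; }

run php /var/www/nextcloud/occ maintenance:mode --on    # downtime starts
run pg_dump -U nextcloud -f /srv/backup/nextcloud.sql nextcloud
run php /var/www/nextcloud/occ maintenance:mode --off   # downtime ends
run btrfs subvolume snapshot -r /srv/nextcloud /srv/nextcloud-snap
# ... point restic/borg/kopia at /srv/nextcloud-snap at leisure ...
run btrfs subvolume delete /srv/nextcloud-snap
```

The key trick is that downtime only covers the dump; the slow file transfer happens afterwards from the read-only snapshot, whose contents are frozen at the moment the dump finished.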

[–] privateX@lemmy.world 2 points 4 days ago

I keep all of my documents on a local server, so all that is on any of my computers is software. If I need to reinstall Linux, I can just do it without worrying about losing anything.

[–] Earflap@reddthat.com 2 points 4 days ago

I have a server with a RAID-1 array that makes daily, weekly, and monthly read-only btrfs snapshots. The whole thing (sans snapshots) is synced with syncthing to two Raspberry Pis in two different geographic locations.

I know neither RAID nor syncthing are "real" backup solutions, but with so many copies of the files living in so many locations (in addition to my phone, laptop, etc.), I'm reasonably confident it's a decent solution.

[–] ColdWater@lemmy.ca 2 points 3 days ago (1 children)

I use an external drive for my important data, and if my system gets borked (which has never happened to me) I just reinstall the OS.

[–] floquant@lemmy.dbzer0.com 2 points 3 days ago

External drives are more prone to damage and failures, both because they're more likely to be dropped, bumped, or spilled on, and because of generally cheaper construction compared to internal drives. For SSDs the difference might be negligible, but I suggest you at least make a copy on another "cold" external drive if the data is actually important.

[–] haque@lemm.ee 1 points 3 days ago

I use Duplicacy to backup to my TrueNAS server. Crucial data like documents are backed up a second time to my GDrive, also using Duplicacy. Sadly it's a paid solution, but it works great for me.

[–] tankplanker@lemmy.world 1 points 3 days ago

Borg daily to the local drive, which is then copied across to a USB drive, then weekly to cloud storage. The script is triggered by daily runs of topgrade before I do any updates.

[–] bitcrafter@programming.dev 1 points 3 days ago

I created a script that I dropped into /etc/cron.hourly which does the following:

  1. Use rsync to mirror my root partition to a btrfs partition on another hard drive (which only updates modified files).
  2. Use btrfs subvolume snapshot to create a snapshot of that mirror (which only uses additional storage for modified files).
  3. Moves "old" snapshots into a trash directory so I can delete them later if I want to save space.

It is as follows:

#!/usr/bin/env python
from datetime import datetime, timedelta
import os
import pathlib
import shutil
import subprocess
import sys

import portalocker

DATETIME_FORMAT = '%Y-%m-%d-%H%M'
BACKUP_DIRECTORY = pathlib.Path('/backups/internal')
MIRROR_DIRECTORY = BACKUP_DIRECTORY / 'mirror'
SNAPSHOT_DIRECTORY = BACKUP_DIRECTORY / 'snapshots'
TRASH_DIRECTORY = BACKUP_DIRECTORY / 'trash'

EXCLUDED = [
    '/backups',
    '/dev',
    '/media',
    '/lost+found',
    '/mnt',
    '/nix',
    '/proc',
    '/run',
    '/sys',
    '/tmp',
    '/var',

    '/home/*/.cache',
    '/home/*/.local/share/flatpak',
    '/home/*/.local/share/Trash',
    '/home/*/.steam',
    '/home/*/Downloads',
    '/home/*/Trash',
]

OPTIONS = [
    '-avAXH',
    '--delete',
    '--delete-excluded',
    '--numeric-ids',
    '--relative',
    '--progress',
]

def execute(command, *options):
    print('>', command, *options)
    subprocess.run((command,) + options).check_returncode()

execute(
    '/usr/bin/mount',
    '-o', 'rw,remount',
    BACKUP_DIRECTORY,
)

try:
    with portalocker.Lock(os.path.join(BACKUP_DIRECTORY,'lock')):
        execute(
            '/usr/bin/rsync',
            '/',
            MIRROR_DIRECTORY,
            *(
                OPTIONS
                +
                [f'--exclude={excluded_path}' for excluded_path in EXCLUDED]
            )
        )

        execute(
            '/usr/bin/btrfs',
            'subvolume',
            'snapshot',
            '-r',
            MIRROR_DIRECTORY,
            SNAPSHOT_DIRECTORY / datetime.now().strftime(DATETIME_FORMAT),
        )

        snapshot_datetimes = sorted(
            (
                datetime.strptime(filename, DATETIME_FORMAT)
                for filename in os.listdir(SNAPSHOT_DIRECTORY)
            ),
        )

        # Keep the last 24 hours of snapshot_datetimes
        one_day_ago = datetime.now() - timedelta(days=1)
        while snapshot_datetimes and snapshot_datetimes[-1] >= one_day_ago:
            snapshot_datetimes.pop()

        # Helper function that keeps the newest snapshot for a given day/week/month and trashes the rest
        def prune_all_with(get_metric):
            this = get_metric(snapshot_datetimes[-1])
            snapshot_datetimes.pop()
            while snapshot_datetimes and get_metric(snapshot_datetimes[-1]) == this:
                snapshot = SNAPSHOT_DIRECTORY / snapshot_datetimes[-1].strftime(DATETIME_FORMAT)
                snapshot_datetimes.pop()
                execute('/usr/bin/btrfs', 'property', 'set', '-ts', snapshot, 'ro', 'false')
                shutil.move(snapshot, TRASH_DIRECTORY)

        # Keep daily snapshot_datetimes for the last month
        last_daily_to_keep = datetime.now().date() - timedelta(days=30)
        while snapshot_datetimes and snapshot_datetimes[-1].date() >= last_daily_to_keep:
            prune_all_with(lambda x: x.date())

        # Keep weekly snapshot_datetimes for the last three months
        last_weekly_to_keep = datetime.now().date() - timedelta(days=90)
        while snapshot_datetimes and snapshot_datetimes[-1].date() >= last_weekly_to_keep:
            prune_all_with(lambda x: x.date().isocalendar().week)

        # Keep monthly snapshot_datetimes forever
        while snapshot_datetimes:
            prune_all_with(lambda x: x.date().month)
except portalocker.AlreadyLocked:
    sys.exit('Backup already in progress.')
finally:
    execute(
        '/usr/bin/mount',
        '-o', 'ro,remount',
        BACKUP_DIRECTORY,
    )
[–] heythatsprettygood@feddit.uk 1 points 2 days ago

I use Pika Backup (a GUI that uses Borg Backup on the backend) to back up my desktop to my home server daily, then overnight that server has a daily backup using Borg to a Hetzner Storage Box. It's easy to set it and forget it (other than maybe verifying the backups every once in a while), and having that off-site backup gives me peace of mind.

[–] anamethatisnt@sopuli.xyz 1 points 4 days ago

Most of my data is on 2x16TB HDDs running an mdraid1, and I back it all up to a USB drive with Borg Backup.
The OS .qcow2 files live on my M.2 NVMe and are manually backed up to the mdraid1 before running the borg backup.
I should automate the borg backup, but currently I just do it manually a few times a month.
I'd also like to have two USB drives and keep one offline in another part of the house, but that's a future project.

[–] savvywolf@pawb.social 1 points 4 days ago (1 children)

I recently bought a Storage Box from Hetzner and set up my server to run borgmatic every day to back up to it.

I've also discovered that Pika Backup works really well as a "read only" graphical browser for borg repos.

[–] ouch@lemmy.world 1 points 7 hours ago (1 children)

Do you use some kind of encryption on the VPS?

[–] savvywolf@pawb.social 1 points 2 hours ago

Yep, borgmatic encrypts it before it sends data to the server.

[–] BlackEco@lemmy.blackeco.com 1 points 4 days ago* (last edited 4 days ago) (1 children)

My workflow is pretty similar to yours:

For my desktop and laptops: a systemd timer and service that backs up every 15 minutes using restic to my NAS.

For my NAS: daily backup using restic + ZFS snapshots.

All restic backups are then uploaded daily to Backblaze B2.

[–] ItTakesTwo@feddit.org 1 points 4 days ago (2 children)

Do you create ZFS snapshots and let those be backed up to B2 via restic or do you backup different types of data, one with ZFS snapshots and one with restic?

[–] Mio@feddit.nu 1 points 4 days ago

Timeshift for snapshots and Deja Dup for files

[–] mybuttnolie@sopuli.xyz 1 points 4 days ago

Nice try, mister ransomware attacker hacker!

[–] melfie@lemmings.world 1 points 4 days ago

I currently use rclone with encryption to iDrive e2. I’m considering switching to Backrest, though.

I originally tried Backblaze B2, but exceeded the API quotas of their free tier; iDrive has "free" API calls, so I recently bought a year's worth. I still have a 2-year Proton subscription and tried rclone with Proton Drive, but it was too slow.

[–] djsaskdja@reddthat.com 1 points 4 days ago

There’s nothing saved on my system I couldn’t afford to lose. All my work stuff is saved in Google Drive for better or worse. I have a few small files in a personal Proton Drive that I backup manually. I wipe my own system a few times a year and I rarely ever save anything first. Honestly very refreshing to live your life like that. Other than my cat, pretty much all my possessions could disappear tomorrow and I’d get over it pretty quickly.

[–] rutrum@programming.dev 1 points 4 days ago

I use borg the same way you describe. Part of my NixOS config builds a systemd unit that starts a backup of various directories on my machine at midnight every day. I have 2 repos: one stored both locally and on a cloud backup provider (BorgBase), and another that's just stored locally, i.e. on another computer in my house. The local-only one is for all my home media. I haven't yet put the large dataset of photos and videos in the cloud or offsite.
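For anyone not on NixOS, a generated unit like that boils down to a plain timer/service pair along these lines; the unit names and the ExecStart script are placeholders, not what NixOS actually emits:

```
# /etc/systemd/system/borg-backup.service
[Unit]
Description=Borg backup of selected directories

[Service]
Type=oneshot
ExecStart=/usr/local/bin/borg-backup.sh

# /etc/systemd/system/borg-backup.timer
[Unit]
Description=Run borg backup daily at midnight

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now borg-backup.timer; Persistent=true makes systemd run a missed backup at the next boot.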

[–] Pika@sh.itjust.works 1 points 4 days ago

For my server, I use Proxmox Backup Server to an external HDD for my containers, and I back up media monthly to an encrypted cold drive.

For my desktop? I use a mix of syncthing (which goes to the server) and Windows File History (if I logged into the Windows partition), and I want to get Timeshift working. I just have so much data that it's hard to manage, so currently I'll just shed some tears if my Linux system fails.

[–] bubbalouie@lemmy.ml 1 points 4 days ago

I rsync ~/ to a USB nub. A no-brainer.
