483749 minutes by my math, or just under a year.
(the gory part is that the source drive is only 256GB with 10% used)
tar-ing them beforehand might be faster.
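The tar idea is easy to sanity-check locally. A minimal sketch (paths and file counts are illustrative): many small files copy slowly one by one, while a single archive moves as one sequential stream.

```shell
# Sketch: bundle many small files into one archive before copying.
# Paths under /tmp are illustrative stand-ins for the real data.
SRC=$(mktemp -d)
for i in $(seq 1 100); do
    echo "data $i" > "$SRC/file$i.txt"   # stand-in for the real small files
done

# One archive instead of 100 file-by-file copies:
tar -czf /tmp/bundle.tar.gz -C "$SRC" .

# Sanity-check that the archive round-trips by listing its contents.
tar -tzf /tmp/bundle.tar.gz | head -n 3
```

Copying the one `.tar.gz` avoids the per-file overhead (metadata lookups, open/close syscalls) that dominates when transferring huge numbers of tiny files.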
I tried this exact application and I feel like I learned a lot about backups in the process. I like to think about it like this: (info dump)
There are basically three kinds of backup: block-level, filesystem-level, and file-level.
The problem with a file-level backup is that when you have many files (as opposed to a few large ones) the backup takes much, much longer. To be efficient, a backup needs to detect diffs and only back up what changed, and scanning millions of individual files to do that is slow. Avoiding rsync-like applications for backup whenever you can is ideal.
Block-level and filesystem-level backups are always faster. I understand this may not suit everyone, but I always expect whatever backup application I choose to create a file in my backup destination that is an archive filled with immutable snapshots of the filesystem I am backing up. These days I almost always choose my filesystem based on my backup strategy. Here are some examples of filesystems and the best ways to back them up:
NTFS (Windows): your backups should go through the Volume Shadow Copy Service (VSS). Almost all mainstream backup software uses this method (Acronis, Veeam, the built-in Windows backup, etc.).
Ext4/XFS: there are a few different ways to peel this banana. You can use LVM to create your snapshots, and some backup applications will install a block-level filesystem driver to perform the backup. On a Linux-based NAS like QNAP, there is often a built-in backup application that lets you ship snapshots to another data location. This is ideal: you don't need a third-party application, and your backups will be short and frequent.
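A minimal sketch of the LVM route, assuming a volume group `vg0` with a logical volume `data` and some free extents for the snapshot (all names hypothetical). The snapshot gives you a frozen, consistent view to archive while the live filesystem keeps changing:

```shell
# Hypothetical VG/LV names; needs root and free space in the volume group.
lvcreate --size 10G --snapshot --name data-snap /dev/vg0/data

# Mount the frozen view read-only and archive from it, not from the live FS.
mkdir -p /mnt/data-snap
mount -o ro /dev/vg0/data-snap /mnt/data-snap
tar -czf /backup/data-$(date +%F).tar.gz -C /mnt/data-snap .

# Snapshots cost write performance while they exist, so drop it when done.
umount /mnt/data-snap
lvremove -y /dev/vg0/data-snap
```

The `--size` given to the snapshot only needs to cover the writes that happen during the backup window, not the whole volume.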
ZFS: I love ZFS (I am biased). The key is to do this with snapshots as well. Have another ZFS server somewhere and ship your snapshots to it with znapzend or sanoid/syncoid (policy-based snapshot management and replication). Backing up ZFS to a cloud provider is less than ideal; the cloud usually serves better as a secondary destination. Backing up the data inside ZFS datasets turns into an rsync-like process if your backup software can't use the snapshots. At that point I would suggest something that ships a block-level backup driver. In the past I've run an LXC container on the host, installed the Acronis agent with its block driver in the container, and mounted my datasets as locations inside the container to get around this.
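Done by hand, the snapshot-shipping workflow looks roughly like this (pool, dataset, and host names hypothetical); tools like sanoid/syncoid automate exactly this loop on a schedule:

```shell
# Hypothetical pool/dataset/host names; needs ZFS on both ends and SSH keys.
zfs snapshot tank/data@2024-01-01        # instant, immutable point-in-time

# First replication: send the full snapshot to the other ZFS box.
zfs send tank/data@2024-01-01 | ssh backuphost zfs receive backup/data

# Later runs send only the blocks changed between snapshots (-i = incremental).
zfs snapshot tank/data@2024-01-02
zfs send -i tank/data@2024-01-01 tank/data@2024-01-02 \
    | ssh backuphost zfs receive backup/data
```

The incremental sends are why this stays fast: ZFS already knows which blocks changed between two snapshots, so there is no file-by-file scan at all.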
Virtualization: always grab the backup from the host system. A lot of the time this means you can lift and shift multiple VMs at much greater speed. The guest filesystem won't matter at that point (unless you're using VMware Tools to trigger a VSS snapshot inside the VM before each backup).
Use Changed Block Tracking whenever you can.
I hope this helps you or someone else reading this. Thanks.
Edit: Just realized it says 128TB in the picture and not GB. You're gonna need the Lord's help. That or a 10Gb link to another seriously fast NAS on your network that can sustain writes above 1 GB/s. Even then it would take about a day and a half.
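The back-of-the-envelope math, for anyone checking: 128 TB at a sustained 1 GB/s works out to roughly 36 hours.

```shell
# Rough transfer-time estimate: 128 TB over a link sustaining 1 GB/s.
SIZE_TB=128
RATE_GB_PER_S=1
SIZE_GB=$(( SIZE_TB * 1024 ))            # 131072 GB
SECS=$(( SIZE_GB / RATE_GB_PER_S ))      # 131072 seconds
HOURS=$(( SECS / 3600 ))                 # ~36 hours
echo "${HOURS} hours"
```

And that assumes the link stays saturated the whole time, which small files will not let you do.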
Good god, that’s huge. You need to find a backup solution that dedups.
I keep 40 versions of each of my dozens of proxmox guest VMs, and it doesn’t use 10% of that much space, including a virtualized NAS. I think it has a dedup ratio of 30x.
I’m realizing reading the comments that not everyone noticed the description. While I easily have over 100TB on other systems, the gory part here is that the source drive is just a small laptop SSD with maybe 25GB used. Duplicati is buggy, that’s all.
Check out Kopia…
Duplicati is really nice in many ways, but when it comes to restore time it takes forever. If you ever actually need to use this backup, Duplicati will definitely fail you.
I’m not saying Kopia won’t, but I think it’s got a way better chance of working in a timely fashion.
Edit: Just noticed where I am and the post body. Yeah, duplicati unfortunately isn’t in a great maintenance state and has bugs and performance issues despite otherwise being a nice user experience.
Duplicati is kinda terrible. I used it for about two years as my main backup; it was very unreliable and slow.
Agreed. I ended up going with Timeshift on this machine and it seems solid enough for my purposes.
I ended up with Borg with Vorta GUI, it’s fantastic.
I’ll check it out, thanks.
So, do I need to back up /proc? Borg fails otherwise, seemingly because it thinks there’s a 128TB file in there.
No. /proc is a virtual filesystem the kernel uses to expose information about running processes; there's nothing in it worth backing up. It's best to back up your home directory and leave the system stuff to Timeshift.
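The 128TB "file" is almost certainly /proc/kcore, a virtual file whose reported size is the kernel's entire address space. Rather than excluding paths by hand, the usual fix is to tell Borg to stay on one filesystem, or to point it only at /home. A minimal sketch, assuming a repo already created with `borg init` (repo path hypothetical):

```shell
# Hypothetical repo path; create it once with:
#   borg init --encryption=repokey /mnt/backup/repo
# --one-file-system stays on the root filesystem, so virtual mounts like
# /proc, /sys, and /dev are never descended into.
borg create --one-file-system --stats \
    /mnt/backup/repo::'{hostname}-{now}' /

# Backing up only /home sidesteps the problem entirely:
borg create --stats /mnt/backup/repo::'home-{now}' /home
```

`{hostname}` and `{now}` are Borg's built-in archive-name placeholders, so each run gets a unique archive name.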
Duplicacy is a breath of fresh air compared to Duplicati. I never felt like I could trust Duplicati.