Backing up every few minutes with simplesnap

I’ve written a lot lately about ZFS, and one of its very nice features is the ability to make snapshots that are lightweight, space-efficient, and don’t hurt performance (unlike, say, LVM snapshots).

ZFS also has “zfs send” and “zfs receive” commands that can send the contents of a snapshot, or a delta between two snapshots, as a data stream – similar in concept to an amped-up tar file. These can be used to, for instance, very efficiently send backups to another machine. Rather than having to stat() every single file on a filesystem as rsync does, it effectively sends an intelligent binary delta, one that is also intelligent about operations such as renames.
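For instance (the dataset, snapshot, and host names here are just placeholders), after an initial full send of @snap1, an incremental transfer between two snapshots looks roughly like this:

    # zfs snapshot tank/home@snap2
    # zfs send -i tank/home@snap1 tank/home@snap2 | ssh backuphost zfs receive backup/tank/home

Only the blocks that changed between the two snapshots cross the wire.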

Since my last search for backup tools, I’d been using BackupPC for my personal systems. But since I switched them to ZFS on Linux, I’ve been wanting to try something better.

There are a lot of tools out there to take ZFS snapshots and send them to another machine, and I summarized them on my wiki. I found zfSnap to work well for taking and rotating snapshots, but I didn’t find anything that matched my criteria for sending them across the network. It seemed par for the course for these tools to think nothing of opening up full root access to a machine from others, whereas I would much rather lock it down with command= in authorized_keys.
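The shape of that lockdown is a forced command on the key the backup host uses; a sketch of the idea, with the key material elided (the exact entry simplesnap recommends is in its documentation), is an authorized_keys line like:

    command="simplesnapwrap",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA... simplesnap@backuphost

With that, the key can only ever invoke the wrapper, never an arbitrary shell.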

So I wrote my own, called simplesnap. As usual, I wrote extensive documentation for it as well, even though it is very simple to set up and use.

With BackupPC, a backup of my workstation took almost 8 hours. (Its “incremental” runs might take as little as 3 hours.) With ZFS snapshots and simplesnap, it takes 25 seconds. 25 seconds!

So right now, instead of backing up once a day, I back up once an hour. There’s no reason I couldn’t back up every 5 minutes, in fact. The data consumes less space, is far faster to manage, and doesn’t require a nightly hours-long cleanup process like BackupPC does — zfs destroy on a snapshot just takes a few seconds.
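The automation is nothing fancy; a hypothetical /etc/cron.d entry (the host, set name, store, path, and key location are just examples, not anything the package installs for you) would be:

    0 * * * * root /usr/sbin/simplesnap --host workstation --setname mainset --store zbackup/backup --sshcmd "ssh -i /root/.ssh/id_rsa_simplesnap"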

I use a pair of USB disks for backups, and rotate them to offsite storage periodically. They simply run ZFS atop dm-crypt (for security) and it works quite well even on those slow devices.
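Setting up one of those disks is only a few commands; roughly (device, mapping, and pool names are made up for illustration):

    # cryptsetup luksFormat /dev/sdX
    # cryptsetup luksOpen /dev/sdX backupdisk
    # zpool create zbackup /dev/mapper/backupdisk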

Although ZFS doesn’t do file-level dedup like BackupPC does, and the lz4 compression I’ve set ZFS to use is less efficient than the gzip-like compression BackupPC uses, the backups are still more space-efficient. I’m not quite sure why, but I suspect it’s because there is a lot less metadata to keep track of, and perhaps also because BackupPC has to store a new copy of a file if even a byte changes, whereas ZFS stores just the changed blocks.
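For reference, that compression choice is just a single dataset property on the backup pool (pool name illustrative):

    # zfs set compression=lz4 zbackup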

Incidentally, I’ve packaged both zfSnap and simplesnap for Debian and both are waiting in NEW.

14 thoughts on “Backing up every few minutes with simplesnap”

  1. Hi,
    Thanks for your tool!
    I tried it and I found that, at the initial backup (first run) it failed, because the first dataset on activehost did not exist in the backuphost.
    Example:
    activehost, dataset required to back up: tank/home/user/Dropbox
    The invocation on the backuphost should be this one:
    simplesnap --host activehost --setname mainset --store zbackup/backup --sshcmd "ssh -i /root/.ssh/id_rsa_simplesnap"

    The error message in the syslog:
    Apr 3 20:01:30 fp3pro01 simplesnap[9294]: Invoked as: /usr/local/sbin/simplesnap --host 192.168.0.114 --setname mainset --store zbackup/backup --sshcmd ssh -i /root/.ssh/id_rsa_simplesnap
    Apr 3 20:01:31 fp3pro01 simplesnap[9294]: Store zbackup/backup is mounted at /zbackup/backup
    Apr 3 20:01:31 fp3pro01 simplesnap[9294]: Running /sbin/zfs create zbackup/backup/192.168.0.114
    Apr 3 20:01:31 fp3pro01 simplesnap[9294]: /sbin/zfs exited successfully
    Apr 3 20:01:31 fp3pro01 simplesnap[9294]: Lock obtained at /zbackup/backup/192.168.0.114/.lock with dotlockfile
    Apr 3 20:01:31 fp3pro01 simplesnap[9294]: Finding remote datasets to back up
    Apr 3 20:01:31 fp3pro01 simplesnap[9294]: Running ssh -i /root/.ssh/id_rsa_simplesnap 192.168.0.114 simplesnapwrap listfs
    Apr 3 20:01:32 fp3pro01 simplesnap[9294]: ssh exited successfully
    Apr 3 20:01:32 fp3pro01 simplesnap[9294]: Running ssh -i /root/.ssh/id_rsa_simplesnap 192.168.0.114 simplesnapwrap sendback mainset tank/home/user/Dropbox
    Apr 3 20:01:32 fp3pro01 simplesnap[9294]: Running /sbin/zfs receive -F zbackup/backup/192.168.0.114/tank/home/user/Dropbox
    Apr 3 20:01:33 fp3pro01 simplesnap[9294//sbin/zfs]: cannot open 'zbackup/backup/192.168.0.114/tank/home/user/Dropbox': dataset does not exist
    Apr 3 20:01:33 fp3pro01 simplesnap[9294//sbin/zfs]: cannot receive new filesystem stream: dataset does not exist
    Apr 3 20:01:33 fp3pro01 simplesnap[9294]: /sbin/zfs exited with error 1

    I assumed simplesnap would create the missing datasets on the backuphost, but as far as I can tell from the code, it does not.
    Am I correct?
    Thank you again!
    István

  2. John Goerzen says:

    It sounds as if you excluded tank/home/user on the remote side. Did you set the org.complete.simplesnap:exclude property there? If so, then it’s doing what you asked it to (you could zfs create zbackup/backup/192.168.0.114/tank, tank/home, tank/home/user, etc. on the local side).
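    Spelled out with the names from this thread (zfs create -p creates any missing parents in one step):

        # zfs create -p zbackup/backup/192.168.0.114/tank/home/user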

    1. Thanks for your quick answer!
      Correct, I excluded everything else :)
      OK, in this case it was a user error; I was trying to avoid the manual dataset creation on the backuphost.
      Anyway, the final structure will be different, so this won’t be a problem, but in this test system it would mean backing up 200GB instead of 2GB if I enabled the user’s home too :)
      Thanks!

  3. Stan says:

    John,

    I run Nas4free and want to send snapshots to a remote Freenas box and your description of your script looks enticing! I’d love to try it out. Would you please point me at it?

    Thanks,

    Stan

  4. fengchou says:

    I am testing simplesnap on two pairs of our server/backup hosts. For one pair it runs smoothly. For the other pair, sooner or later I get error message 100. This server is used for testing purposes, so we may add/remove ZFS volumes frequently, and the added/removed volumes may carry the same names. Will simplesnap take care of this?

    1. John Goerzen says:

      You will need to look in your syslog to discover why you’re getting error 100. simplesnap automatically detects new datasets and creates them on the destination. It does not automatically remove backups of removed datasets, since this would defeat the purpose of a backup, but the disappearance of a dataset does not cause an error in itself. If you zfs destroy a dataset and then zfs create a new one with the same name, that will cause an error because it breaks the incremental send.

  5. Computer King says:

    Thanks for the scripts and some good docs; however, where does it say how to automate this thing? I’m guessing I add one or more of the examples to cron.d/simplesnap? Also, there have been no commits for over a year. Is this project dead? Should I be using something else?

  6. rob fantini says:

    Hello Computer King
    this project is not dead; the proof is this, from a Debian Jessie system:

    # aptitude show zfsnap
    Package: zfsnap
    New: yes
    State: not installed
    Version: 1.11.1-3
    Priority: extra
    Section: admin
    Maintainer: John Goerzen
    Architecture: all
    Uncompressed Size: 49.2 k
    Depends: zfs-fuse | zfsutils | zfs, bc
    Description: Automatic snapshot creation and removal for ZFS
    zfSnap is a simple sh script to make rolling zfs snapshots with cron. The main advantage of zfSnap is it’s
    written in 100% pure /bin/sh so it doesn’t require any additional software to run.

    zfSnap keeps all information about snapshot in snapshot name.

    zfs snapshot names are in the format of Timestamp--TimeToLive.

    Timestamp includes the date and time when the snapshot was created and TimeToLive (TTL) is the amount of time
    for the snapshot to stay alive before it’s ready for deletion.
    Homepage: https://github.com/graudeejs/zfSnap

  7. Kevin says:

    When using the current source, has this been tested with zfsnap 2.0:

    https://github.com/zfsnap/zfsnap

    Or does simplesnap require the legacy zfsnap?

    Thanks,
    Kevin

    1. John Goerzen says:

      Hi Kevin, simplesnap does not use zfsnap, nor does it require it. It can be convenient to pair the two and they work well together, but they are independent tools.

  8. Mourad says:

    Hi,
    I have a problem backing up one dataset with simplesnap: the other datasets were backed up, but just one, tank/shares, gives me these messages.
    I included it in the backed-up datasets with "zfs set org.complete.simplesnap:exclude=off tank/shares" and still got the same logs.

    This is the output log in the backupserver :

    Jan 13 08:59:16 host-1 simplesnap[24055//sbin/zfs]: cannot receive incremental stream: destination 'tank/data/10.0.0.12/tank/shares' does not exist
    Jan 13 08:59:16 host-1 simplesnap[24055]: /sbin/zfs exited with error 1
    Jan 13 08:59:16 host-1 simplesnap[24055/ssh]: warning: cannot send 'tank/shares@__simplesnap_mainset_2016-12-29T16:16:36__': Broken pipe
    Jan 13 08:59:16 host-1 simplesnap[24055/ssh]: warning: cannot send 'tank/shares@__simplesnap_mainset_2016-12-29T16:17:26__': Broken pipe
    Jan 13 08:59:16 host-1 simplesnap[24055/ssh]: warning: cannot send 'tank/shares@2016-12-30_05.30.01--1m': Broken pipe
    Jan 13 08:59:16 host-1 simplesnap[24055/ssh]: warning: cannot send 'tank/shares@2016-12-31_05.30.01--1m': Broken pipe
    Jan 13 08:59:17 host-1 simplesnap[24055]: ssh exited with error 141
    Jan 13 08:59:17 host-1 simplesnap[24055]: zfs receive died with error: 1

    Please help me.
    Kind regards

  9. Joel Franco says:

    Hello,

    Thanks for the excellent tool.

    I want to run zfsnap just on the backuphost and not on the activehost, because it doesn’t make sense to waste space on both hosts.

    The idea is to run simplesnap to do the synchronization between the activehost and the backuphost, and use zfsnap to take snapshots just of the backup folders on the backuphost.

    Is this possible?

    Thank you.

    Joel
