Tag Archives: backups

Fast, Ordered Unixy Queues over NNCP and Syncthing with Filespooler

It seems that lately I’ve written several shell implementations of a simple queue that enforces ordered execution of jobs that may arrive out of order. After writing this for the nth time in bash, I decided it was time to do it properly. But first, a word on the why of it all.

Why did I bother?

My needs arose primarily from handling Backups over Asynchronous Communication methods – in this case, NNCP. When backups contain incrementals that are unpacked on the destination, they must be applied in the correct order.

In some cases, like ZFS, the receiving side will detect an out-of-order backup file and exit with an error. In those cases, processing in random order is acceptable but can be slow if, say, hundreds or thousands of hourly backups have stacked up over a period of time. The same goes for using gitsync-nncp to synchronize git repositories. In both cases, a best effort based on creation date is sufficient to produce a significant performance improvement.

In other cases, such as tar or dar backups, the receiving side cannot detect out-of-order incrementals at all. In those situations, the incrementals absolutely must be applied with strict ordering. Many other situations have the same requirement. Filespooler is the answer to these.

Existing Work

Before writing my own program, I of course looked at what was out there already. I looked at celery, gearman, nq, rq, cctools work queue, ts/tsp (task spooler), filequeue, dramatiq, GNU parallel, and so forth.

Unfortunately, none of these met my needs at all. They all tended to have properties like:

  • An extremely complicated client/server system that was incompatible with piping data over existing asynchronous tools
  • A heavy bias toward processing small web requests, resulting in terrible inefficiency or outright incompatibility with jobs in the TB range
  • An inability to enforce strict ordering of jobs, especially if they arrive in a different order from how they were queued

Many also lacked some nice-to-haves that I implemented for Filespooler:

  • Support for the encryption and cryptographic authentication of jobs, including metadata
  • First-class support for arbitrary compressors
  • Ability to use both stream transports (pipes) and filesystem-like transports (eg, rclone mount, S3, Syncthing, or Dropbox)

Introducing Filespooler

Filespooler is a tool in the Unix tradition: that is, do one thing well, and integrate nicely with other tools using the fundamental Unix building blocks of files and pipes. Filespooler itself doesn’t provide transport for jobs, but instead is designed to cooperate extremely easily with transports that can be written to as a filesystem or piped to – which is to say, almost anything of interest.

Filespooler is written in Rust and has an extensive Filespooler Reference as well as many tutorials on its homepage.

Basics of How it Works

Filespooler is intentionally simple:

  • The sender maintains a sequence file that includes a number for the next job packet to be created.
  • The receiver also maintains a sequence file that includes a number for the next job to be processed.
  • fspl prepare creates a Filespooler job packet and emits it to stdout. The packet begins with a small header (under 100 bytes in most cases) containing the sequence number, creation timestamp, and some other useful metadata.
  • You get to transport this job packet to the receiver in any of many simple ways, which may or may not involve Filespooler’s assistance.
  • On the receiver, Filespooler (when running in the default strict ordering mode) will simply look at the sequence file and process jobs in incremental order until it runs out of jobs to process.

Job files on disk match a naming pattern for identification, but the filenames are not otherwise significant; the ordering information comes from the header.

You can send job data in three ways:

  1. By piping it to fspl prepare
  2. By setting certain environment variables when calling fspl prepare
  3. By passing additional command-line arguments to fspl prepare, which can optionally be passed to the processing command at the receiver.

Data piped in is added to the job “payload”, while environment variables and command-line parameters are encoded in the header.

Basic usage

Here I will excerpt part of the Using Filespooler over Syncthing tutorial; consult it for further detail. As a bit of background, Syncthing is a FLOSS decentralized directory synchronization tool akin to Dropbox (but with a much richer feature set in many ways).

Preparation

First, on the receiver, you create the queue (passing the directory name to -q):

receiver$ fspl queue-init -q ~/sync/b64queue

Now, we can send a job like this:

sender$ echo Hi | fspl prepare -s ~/b64seq -i - | fspl queue-write -q ~/sync/b64queue

Let’s break that down:

  • First, we pipe “Hi” to fspl prepare.
  • fspl prepare takes two parameters:
    • -s seqfile gives the path to a sequence file used on the sender side. This file has a simple number in it that increments a unique counter for every generated job file. It is matched with the nextseq file within the queue to make sure that the receiver processes jobs in the correct order. It MUST be separate from the file that is in the queue and should NOT be placed within the queue. There is no need to sync this file, and it would be ideal to not sync it.
    • The -i option tells fspl prepare to read a file for the packet payload. -i - tells it to read stdin for this purpose. So, the payload will consist of three bytes: “Hi\n” (that is, including the terminating newline that echo wrote)
  • Now, fspl prepare writes the packet to its stdout. We pipe that into fspl queue-write:
    • fspl queue-write reads stdin and writes it to a file in the queue directory in a safe manner. The file will ultimately match the fspl-*.fspl pattern and have a random string in the middle.

At this point, wait a few seconds (or however long it takes) for the queue files to be synced over to the recipient.

On the receiver, we can see if any jobs have arrived yet:

receiver$ fspl queue-ls -q ~/sync/b64queue
ID                   creation timestamp          filename
1                    2022-05-16T20:29:32-05:00   fspl-7b85df4e-4df9-448d-9437-5a24b92904a4.fspl

Let’s say we’d like some information about the job. Try this:

receiver$ fspl queue-info -q ~/sync/b64queue -j 1
FSPL_SEQ=1
FSPL_CTIME_SECS=1652940172
FSPL_CTIME_NANOS=94106744
FSPL_CTIME_RFC3339_UTC=2022-05-17T01:29:32Z
FSPL_CTIME_RFC3339_LOCAL=2022-05-16T20:29:32-05:00
FSPL_JOB_FILENAME=fspl-7b85df4e-4df9-448d-9437-5a24b92904a4.fspl
FSPL_JOB_QUEUEDIR=/home/jgoerzen/sync/b64queue
FSPL_JOB_FULLPATH=/home/jgoerzen/sync/b64queue/jobs/fspl-7b85df4e-4df9-448d-9437-5a24b92904a4.fspl

This information is intentionally emitted in a format convenient for parsing.

Now let’s run the job!

receiver$ fspl queue-process -q ~/sync/b64queue --allow-job-params base64
SGkK

There are two new parameters here:

  • --allow-job-params says that the sender is trusted to supply additional parameters for the command we will be running.
  • base64 is the name of the command that we will run for every job. It will:
    • Have environment variables set as we just saw in queue-info
    • Have the text we previously prepared – “Hi\n” – piped to it

By default, fspl queue-process doesn’t do anything special with the output; see Handling Filespooler Command Output for details on other options. So, the base64-encoded version of our string is “SGkK”. We successfully sent a packet using Syncthing as a transport mechanism!

At this point, if you do a fspl queue-ls again, you’ll see the queue is empty. By default, fspl queue-process deletes jobs that have been successfully processed.
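
If you want this to happen automatically rather than by hand, a tiny cron wrapper on the receiver is enough. Here is a minimal sketch; the queue path is the one from this example, and handle-backup is just a placeholder for whatever command you want run for each job:

#!/bin/sh
# Hypothetical cron wrapper: process any queued jobs, in order.
# "handle-backup" is a placeholder command, not something Filespooler provides.
exec fspl queue-process -q "$HOME/sync/b64queue" --allow-job-params handle-backup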

For more

See the Filespooler homepage.


This blog post is also available as a permanent, periodically-updated page.

A Simple, Delay-Tolerant, Offline-Capable Mesh Network with Syncthing (+ optional NNCP)

A little while back, I spent a week in a remote area. It had no Internet and no cell phone coverage. Sometimes, I would drive in to town where there was a signal to get messages, upload photos, and so forth. I had to take several devices with me: my phone, my wife’s, maybe a laptop or a tablet too. It seemed there should have been a better way. And there is.

I’ll use this example to talk about a mesh network, but it could just as well apply to people wanting to communicate on a 12-hour flight that has no in-flight wifi, or spacecraft with an intermittent connection, or a person traveling.

Syncthing makes a wonderful solution for things like these. Here are some interesting things about Syncthing:

  • You can think of Syncthing as a serverless, peer-to-peer, open source alternative to Dropbox. Machines sync directly with each other without a server, though you can add a server if you want.
  • It can operate completely without Internet access or any central server, though if Internet access is available, it can readily be used.
  • Syncthing devices connected to the same LAN or Wifi will detect each other’s presence and automatically communicate.
  • Syncthing is capable of handling a constantly-changing topology. It can also, for instance, handle two disconnected clusters of nodes with one node that “travels” between them — perhaps just a phone.
  • Syncthing scales to everything from a phone to thousands of nodes.
  • Syncthing normally performs syncs in every direction, but can also do single-direction syncs
  • An individual Syncthing node can register its interest or disinterest in certain files or directories based on filename patterns

Syncthing works by having you define devices and folders. You can choose which devices to share folders with. A shared folder has an ID that is unique across Syncthing. You can share a folder from device A to device B, and then device B can share it with device C, even if A and C don’t know about each other or have no way to communicate. More commonly, though, all the devices would know about each other and will opportunistically communicate the best way they can.

Syncthing uses something akin to a BitTorrent protocol. Say you’re syncing videos from your phone, and they’re going to 3 machines. That doesn’t mean Syncthing has to send them three times from the phone. Syncthing will most likely send each block just once; the other nodes in the swarm will register the block’s availability from the first node to get it and will exchange blocks among themselves.

Syncthing will typically look for devices on the local LAN. Failing that, it will use an introduction server to see if it can reach them directly using P2P. Failing that, perhaps due to restrictive firewalls or NAT, communication can be relayed through volunteer-run Syncthing servers on the Internet. All Syncthing communications are cryptographically encrypted and verified. You can also configure Syncthing arbitrarily; for instance, to run over ssh or Tor tunnels.

So, let’s look at how Syncthing might help with the example I laid out up front.

All the devices at the remote location could communicate with each other. The Android app is quite capable of syncing photos and videos using Syncthing, for instance. Then one device could be taken to the Internet location and it would transmit data on behalf of all the others – perhaps back to a computer at your home, or to a server somewhere. Perhaps a script running on the remote server would then move files out of the Syncthing-synced folder into permanent storage elsewhere, triggering a deletion to be sent to the phone to free up storage. When the phone gets back to the other devices, the deletion can be propagated to them to free up storage there too.
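
A minimal sketch of that server-side script (the paths are examples, and the dot-file test skips Syncthing's own temporary and marker files):

#!/bin/sh
# Move fully-synced photos out of the Syncthing folder into permanent storage.
# Syncthing then propagates the deletions back to the phone, freeing its space.
set -e
SYNCDIR=/home/user/sync/phone-photos
ARCHIVE=/srv/photos/incoming

find "$SYNCDIR" -type f ! -name '.*' -print0 | xargs -0 -r mv -t "$ARCHIVE"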

Or maybe you have a computer out in a shed or somewhere without Internet access that you go to periodically, and need to get files to it. Again, your phone could be a carrier.

Taking it a step further

If you envision a file as a packet, you could, conceivably, do something like tunnel TCP/IP over Syncthing, assuming generous-enough timeouts. It can truly handle communication.

But you don’t need TCP/IP for this. Consider some other things you could do:

  • Drop a script in a special directory that gets picked up by a remote server and run
  • Drop emails in a special directory that get transmitted and then deleted by a remote system when they’re seen
  • Drop files (eg, photos or videos) in a directory that a remote system will copy or move out of there
  • Drop messages (perhaps gpg-encrypted) — which could be text files — for someone to see and process.
  • Drop NNTP bundles for group communication

You can start to see how there are a lot of possibilities here that extend beyond just file synchronization, though they are built upon a file synchronization tool.
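
As one sketch of the first idea in that list, a cron job on the receiving machine could simply run whatever lands in a designated folder. The directory name is an example, and of course you would only do this between machines that trust each other completely:

#!/bin/sh
# Run, then remove, any script that has been synced into the jobs folder.
# Output is written next to the job, so Syncthing carries the results back.
JOBDIR=/home/user/sync/jobs

for job in "$JOBDIR"/*.sh; do
    [ -f "$job" ] || continue        # the glob matched nothing
    sh "$job" > "$job.log" 2>&1 && rm -f "$job"
done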

Enter NNCP

Let’s look at a tool that’s especially suited for this: NNCP, which I’ve been writing about a lot lately.

NNCP is designed to handle file exchange and remote execution with remote computers in an asynchronous, store-and-forward manner. NNCP packets are themselves encrypted and authenticated. NNCP traditionally is source-routed (that is, you configure it so that machine A reaches machine D by relaying through B and C), and the packets are onion-routed. NNCP packets can be exchanged by a TCP call, a tar-like stream, copying files to something like a USB stick and physically transporting it to the remote, etc.

This works really well and I’ve been using it myself. But it gets complicated if the network topology isn’t fixed; it is difficult to reroute packets due to the onion routing, for instance. There are various workarounds that could be used — but why not just use Syncthing as a transport in those cases?

nncp-xfer is the command that exchanges packets by writing them to, and reading them from, a directory. It is what you’d use to exchange packets on a USB stick. And what you’d use to exchange packets via Syncthing. It writes packets in a RECIPIENT/SENDER/PACKET directory structure, so it is perfectly fine to have multiple systems exchanging packets in a single Syncthing synced folder tree. This structure also allows leaf nodes to only carry the particular packets they’re interested in. The packets are all encrypted, so they can be freely synced wherever.
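
In practice the glue can be a small cron job on each NNCP node that points nncp-xfer at the Syncthing-synced folder and then runs nncp-toss to process whatever arrived. A hedged sketch, with the path and the nncp user being examples:

#!/bin/sh
# Exchange NNCP packets via the Syncthing folder, then process what we received.
XFER=/home/user/sync/nncp-xfer

su nncp -c "nncp-xfer $XFER"   # pick up packets addressed to us, drop off outbound ones
su nncp -c "nncp-toss"         # unpack and act on the packets we just received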

Since Syncthing opportunistically syncs a shared folder with any device the folder is shared with, a phone could very easily be the NNCP transport, even if it has no idea what NNCP is. It could carry NNCP packets back and forth between sites, or to the Internet, or whatever.

NNCP supports file transmission, file request, and remote execution, all subject to controls, of course. It is easy to integrate with Exim or Postfix to use as a mail transport, Git transport, and so forth. I use it for backups. It would be quite easy to have it send those backups (encrypted zfs send) via nncp-xfer to Syncthing instead of the usual method, and then if I’ve shared the Syncthing folder with my phone, all I need to do is bring the phone into Internet range and they get sent. nncp-xfer will normally remove the packets out of the xfer directory as it ingests them, so the space will only be consumed on the phone (and laptop) until we know the packets made it to their destination.

Pretty slick, eh?

Remote Directory Tree Comparison, Optionally Asynchronous and Airgapped

Note: this is another article in my series on asynchronous communication in Linux with UUCP and NNCP.

In the previous installment on store-and-forward backups, I mentioned how easy it is to do with ZFS, and some of the tools that can be used to do it without ZFS. A lot of those tools are a bit less robust, so we need some sort of store-and-forward mechanism to verify backups. To be sure, verifying backups is good with ANY scheme, and this could be used with ZFS backups also.

So let’s say you have a shiny new backup scheme in place, and you’d like to verify that it’s working correctly. To do that, you need to compare the source directory tree on machine A with the backed-up directory tree on machine B.

Assuming a conventional setup, here are some ways you might consider to do that:

  • Just copy everything from machine A to machine B and compare locally
  • Or copy everything from machine A to a USB drive, plug that into machine B, and compare locally
  • Use rsync in dry-run mode and see if it complains about anything

The first two options are not particularly practical for large datasets, though I note that the second is compatible with airgapping. Using rsync requires both systems to be online at the same time to perform the comparison.

What would be really nice here is a tool that would write out lots of information about the files on a system: their names, sizes, last modified dates, maybe even sha256sum and other data. This file would be far smaller than the directory tree itself, would compress nicely, and could be easily shipped to an airgapped system via NNCP, UUCP, a USB drive, or something similar.

Tool choices

It turns out there are already quite a few tools in Debian (and other Free operating systems) to do this, and half of them are named mtree (though, of course, not all mtrees are compatible with each other.) We’ll look at some of the options here.

I’ve made a simple test directory for illustration purposes with these commands:

mkdir test
cd test
echo hi > hi
ln -s hi there
ln hi foo
touch empty
mkdir emptydir
mkdir somethingdir
cd somethingdir
ln -s ../there

I then also used touch to set all files to a consistent timestamp for illustration purposes.

Tool option: getfacl (Debian package: acl)

This comes with the acl package, but can be used for purposes other than ACLs. Unfortunately, it doesn’t come with a tool to directly compare its output with a filesystem (setfacl, for instance, can apply the permissions listed but won’t compare.) It ignores symlinks and doesn’t show sizes or dates, so is ineffective for our purposes.

Example output:

$ getfacl --numeric -R test
...
# file: test/hi
# owner: 1000
# group: 1000
user::rw-
group::r--
other::r--
...

Tool option: fmtree, the FreeBSD mtree (Debian package: freebsd-buildutils)

fmtree can prepare a “specification” based on a directory tree, and compare a directory tree to that specification. The comparison also is aware of files that exist in a directory tree but not in the specification. The specification format is a bit on the odd side, but works well enough with fmtree. Here’s a sample output with defaults:

$ fmtree -c -p test
...
# .
/set type=file uid=1000 gid=1000 mode=0644 nlink=1
.               type=dir mode=0755 nlink=4 time=1610421833.000000000
    empty       size=0 time=1610421833.000000000
    foo         nlink=2 size=3 time=1610421833.000000000
    hi          nlink=2 size=3 time=1610421833.000000000
    there       type=link mode=0777 time=1610421833.000000000 link=hi

... skipping ...

# ./somethingdir
/set type=file uid=1000 gid=1000 mode=0777 nlink=1
somethingdir    type=dir mode=0755 nlink=2 time=1610421833.000000000
    there       type=link time=1610421833.000000000 link=../there
# ./somethingdir
..

..

You might be wondering here what it does about special characters, and the answer is that it has octal escapes, so it is 8-bit clean.

To compare, you can save the output of fmtree to a file, then run like this:

cd test
fmtree < ../test.fmtree

If there is no output, then the trees are identical. Change something and you get a line of output explaining each difference. You can also use fmtree -U to change things like modification dates to match the specification.

fmtree also supports quite a few optional keywords you can add with -K. They include things like file flags, user/group names, various types of hashes, and so forth. I'll note that none of the options will let you determine which files are hardlinked together.

Here's an excerpt with -K sha256digest added:

    empty       size=0 time=1610421833.000000000 \
                sha256digest=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    foo         nlink=2 size=3 time=1610421833.000000000 \
                sha256digest=98ea6e4f216f2fb4b69fff9b3a44842c38686ca685f3f55dc48c5d3fb1107be4

If you include a sha256digest in the spec, then when you verify it with fmtree, the verification will also include the sha256digest. Obviously fmtree -U can't correct a mismatch there, but of course it will detect and report it.

Tool option: mtree, the NetBSD mtree (Debian package: mtree-netbsd)

mtree produces (by default) output very similar to fmtree. With minor differences (such as the name of the sha256digest in the output), the discussion above about fmtree also applies to mtree.

There are some differences, and the most notable is that mtree adds a -C option which reads a spec and converts it to a "format that's easier to parse with various tools." Here's an example:

$ mtree -c -K sha256digest -p test | mtree -C
. type=dir uid=1000 gid=1000 mode=0755 nlink=4 time=1610421833.0 flags=none 
./empty type=file uid=1000 gid=1000 mode=0644 nlink=1 size=0 time=1610421833.0 flags=none 
./foo type=file uid=1000 gid=1000 mode=0644 nlink=2 size=3 time=1610421833.0 flags=none 
./hi type=file uid=1000 gid=1000 mode=0644 nlink=2 size=3 time=1610421833.0 flags=none 
./there type=link uid=1000 gid=1000 mode=0777 nlink=1 link=hi time=1610421833.0 flags=none 
./emptydir type=dir uid=1000 gid=1000 mode=0755 nlink=2 time=1610421833.0 flags=none 
./somethingdir type=dir uid=1000 gid=1000 mode=0755 nlink=2 time=1610421833.0 flags=none 
./somethingdir/there type=link uid=1000 gid=1000 mode=0777 nlink=1 link=../there time=1610421833.0 flags=none 

Most definitely an improvement in both space and convenience, while still retaining the relevant information. Note that if you want the sha256digest in the formatted output, you need to pass the -K to both mtree invocations. I could have done that here, but it is easier to read without it.
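
For instance, to keep the digest through the conversion:

$ mtree -c -K sha256digest -p test | mtree -C -K sha256digest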

mtree can verify a specification in either format. Given what I'm about to show you about bsdtar, this should illustrate why I bothered to package mtree-netbsd for Debian.

Unlike fmtree, the mtree -U command will not adjust modification times based on the spec, but it will report on differences.

Tool option: bsdtar (Debian package: libarchive-tools)

bsdtar is a fascinating program that can work with many formats other than just tar files. Among the formats it supports is the NetBSD mtree "pleasant" format (mtree -C compatible).

bsdtar can also convert between the formats it supports. So, put this together: bsdtar can convert a tar file to an mtree specification without extracting the tar file. bsdtar can also use an mtree specification to override the permissions on files going into tar -c, so it is a way to prepare a tar file with things owned by root without resorting to tools like fakeroot.

Let's look at how this can work:

$ cd test
$ bsdtar --numeric -cf - --format=mtree .
#mtree
. time=1610472086.318593729 mode=755 gid=1000 uid=1000 type=dir
./empty time=1610421833.0 mode=644 gid=1000 uid=1000 type=file size=0
./foo nlink=2 time=1610421833.0 mode=644 gid=1000 uid=1000 type=file size=3
./hi nlink=2 time=1610421833.0 mode=644 gid=1000 uid=1000 type=file size=3
./ormat\075mtree time=1610472086.318593729 mode=644 gid=1000 uid=1000 type=file size=5632
./there time=1610421833.0 mode=777 gid=1000 uid=1000 type=link link=hi
./emptydir time=1610421833.0 mode=755 gid=1000 uid=1000 type=dir
./somethingdir time=1610421833.0 mode=755 gid=1000 uid=1000 type=dir
./somethingdir/there time=1610421833.0 mode=777 gid=1000 uid=1000 type=link link=../there

You can use mtree -U to verify that as before. By passing --options with various mtree: keywords, you can also add hashes and similar to the bsdtar output. Since bsdtar can use input from tar, pax, cpio, zip, iso9660, 7z, etc., this capability can be used to create verification of the files inside quite a few different formats. You can convert with bsdtar -cf output.mtree --format=mtree @input.tar. There are some foibles with directly using these converted files with mtree -U, but usually minor changes will get it there.
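
Putting those pieces together, here is a hedged sketch of checking a restored directory tree against the contents of the original tarball. The paths are examples, it assumes the restored tree mirrors the layout inside the tarball, and as noted above the converted spec may need minor tweaks before mtree is entirely happy with it:

$ bsdtar -cf backup.mtree --format=mtree --options 'mtree:sha256' @backup.tar
$ mtree -p /srv/restored < backup.mtree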

Side mention: stat(1) (Debian package: coreutils)

This tool isn't covered here because it won't operate recursively, but it belongs in the same general toolbox.

Putting It Together

I still plan to develop a complete non-ZFS backup system for NNCP (or UUCP) in a future post. But in the meantime, here are some ideas you can reflect on:

  • Let's say your backup scheme involves sending a full backup every night. On the source system, you could pipe the generated tar file through something like tee >(bsdtar -cf backup.mtree --format=mtree @-) to generate an mtree file in-band while generating the tar file (see the sketch after this list). This mtree file could be shipped over for verification.
  • Perhaps your backup scheme involves sending incremental backup data via rdup or even ZFS, but you would like to periodically verify that everything is good -- that an incremental didn't miss something. Something like mtree -K sha256 -c -x -p / | mtree -C -K sha256 would let you accomplish that.
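
Here is a hedged sketch of that first idea, borrowing the nncp-file invocation from later in this series. The host name, paths, and nice level are examples:

tar -cpf - /home \
  | tee >(bsdtar -cf /var/local/backup.mtree --format=mtree @-) \
  | su nncp -c "/usr/local/nncp/bin/nncp-file -nice B -noprogress - 'backupsvr:home-full.tar'"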

I will further develop at least one of these ideas in a future post.

Bonus: cross-tool comparisons

In my mtree-netbsd packaging, I added tests like this to compare between tools:

fmtree -c -K $(MTREE_KEYWORDS) | mtree
mtree -c -K $(MTREE_KEYWORDS) | sed -e 's/\(md5\|sha1\|sha256\|sha384\|sha512\)=/\1digest=/' -e 's/rmd160=/ripemd160digest=/' | fmtree
bsdtar -cf - --options 'mtree:uname,gname,md5,sha1,sha256,sha384,sha512,device,flags,gid,link,mode,nlink,size,time,uid,type,uname' --format mtree . | mtree

More Topics on Store-And-Forward (Possibly Airgapped) ZFS and Non-ZFS Backups with NNCP

Note: this is another article in my series on asynchronous communication in Linux with UUCP and NNCP.

In my previous post, I introduced a way to use ZFS backups over NNCP. In this post, I’ll expand on that and also explore non-ZFS backups.

Use of nncp-file instead of nncp-exec

The previous example used nncp-exec (like UUCP’s uux), which lets you pipe stdin in, then queues up a request to run a given command with that input on a remote. I discussed that NNCP doesn’t guarantee order of execution, but that for the ZFS use case, that was fine since zfs receive would just fail (causing NNCP to try again later).

At present, nncp-exec stores the data piped to it in RAM before generating the outbound packet (the author plans to fix this shortly) [Update: This is now fixed; use -use-tmp with nncp-exec!]. That made it unusable for some of my backups, so I set it up another way: with nncp-file, the tool to transfer files to a remote machine. A cron job then picks them up and processes them.

On the machine being backed up, we have to find a way to encode the dataset to be received. I chose to do that as part of the filename, so the updated simplesnap-queue could look like this:

#!/bin/bash

set -e
set -o pipefail

DEST="`echo $1 | sed 's,^tank/simplesnap/,,'`"
FILE="bakfsfmt2-`date "+%s.%N".$$`_`echo "$DEST" | sed 's,/,@,g'`"

echo "Processing $DEST to $FILE" >&2
# stdin piped to this
zstd -8 - \
  | gpg --compress-algo none --cipher-algo AES256 -e -r 012345...  \
  | su nncp -c "/usr/local/nncp/bin/nncp-file -nice B -noprogress - 'backupsvr:$FILE'" >&2

echo "Queued $DEST to $FILE" >&2

I’ve added compression and encryption here as well; more on that below.

On the backup server, we would define a different incoming directory for each node in nncp.hjson. For instance:

host1: {
...
   incoming: "/var/local/nncp-bakcups-incoming/host1"
}

host2: {
...
   incoming: "/var/local/nncp-backups-incoming/host2"
}

I’ll present the scanning script in a bit.

Offsite Backup Rotation

Most of the time, you don’t want just a single drive to store the backups. You’d like to have a set. At minimum, one wouldn’t be plugged in so lightning wouldn’t ruin all your backups. But maybe you’d store a second drive at some other location you have access to (friend’s house, bank box, etc.)

There are several ways you could solve this:

  • If the remote machine is at a location with network access and you trust its physical security (remember that although it will store data encrypted at rest and will transport it encrypted, it will — in most cases — handle un-encrypted data during processing), you could of course send NNCP packets to it over the network at the same time you send them to your local backup system.
  • Alternatively, if the remote location doesn’t have network access or you want to keep it airgapped, you could transport the NNCP packets by USB drive to the remote end.
  • Or, if you don’t want to have any kind of processing capability remotely — probably a wise move — you could rotate the hard drives themselves, keeping one plugged in locally and unplugging the other to take it offsite.

The third option can be helped with NNCP, too. One way is to create separate NNCP installations for each of the drives that you store data on. Then, whenever one is plugged in, the appropriate NNCP config will be loaded and appropriate packets received and processed. The neighbor machine — the spooler — would just store up packets for the offsite drive until it comes back onsite (or, perhaps, your airgapped USB transport would do this). Then when it’s back onsite, all the queued up ZFS sends get replayed and the backups replicated.
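
A hedged sketch of that per-drive approach: each offsite drive carries its own NNCP configuration and spool, and a small cron job processes queued packets whenever a drive happens to be mounted. This assumes pointing NNCP at an alternate configuration file with -cfg; the paths are examples.

#!/bin/sh
# Run from cron on the backup machine; does nothing unless a drive is present.
for disk in backupdisk1 backupdisk2; do
    if mountpoint -q "/mnt/$disk"; then
        su nncp -c "nncp-toss -cfg /mnt/$disk/nncp/nncp.hjson"
    fi
done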

Now, how might you handle this with NNCP?

The simple way would be to have each system generating backups send them to two destinations. For instance:

zstd -8 - | gpg --compress-algo none --cipher-algo AES256 -e -r 07D5794CD900FAF1D30B03AC3D13151E5039C9D5 \
  | tee >(su nncp -c "/usr/local/nncp/bin/nncp-file -nice B+5 -noprogress - 'backupdisk1:$FILE'") \
        >(su nncp -c "/usr/local/nncp/bin/nncp-file -nice B+5 -noprogress - 'backupdisk2:$FILE'") \
   > /dev/null

You could probably also more safely use pee(1) (from moreutils) to do this.

This has an unfortunate result of doubling the network traffic from every machine being backed up. So an alternative option would be to queue the packets to the spooling machine, and run a distribution script from it; something like this, in part:

INCOMINGDIR="/var/local/nncp-bakfs-incoming"
LOCKFILE="$INCOMINGDIR/.lock"
printf -v EVAL_SAFE_LOCKFILE '%q' "$LOCKFILE"
if dotlockfile -r 0 -l -p "${LOCKFILE}"; then
  logit "Lock obtained at ${LOCKFILE} with dotlockfile"
  trap 'ECODE=$?; dotlockfile -u '"${EVAL_SAFE_LOCKFILE}"'; exit $ECODE' EXIT INT TERM
else
  logit "Could not obtain lock at $LOCKFILE; $0 likely already running."
  exit 0
fi


logit "Scanning queue directory..."
cd "$INCOMINGDIR"
for HOST in *; do
   cd "$INCOMINGDIR/$HOST"
   for FILE in bakfsfmt2-*; do
           if [ -f "$FILE" ]; then
                   for BAKFS in backupdisk1 backupdisk2; do
                           runcommand nncp-file -nice B+5 -noprogress "$FILE" "$BAKFS:$HOST/$FILE"
                   done
                   runcommand rm "$FILE"
           else
                   logit "$HOST: Skipping $FILE since it doesn't exist"
           fi
   done

done
logit "Scan complete."

Security Considerations

You’ll notice that in my example above, the encryption happens as the root user, but nncp is called under su. This means that even if there is a vulnerability in NNCP, the data would still be protected by GPG. I’ll also note here that many sites run ssh as root unnecessarily; the same principles should apply there. (ssh has had vulnerabilities in the past as well). I could have used gpg’s built-in compression, but zstd is faster and better, so we can get good performance by using fast compression and piping that to an algorithm that can use hardware acceleration for encryption.

I strongly encourage considering transport, whether ssh or NNCP or UUCP, to be untrusted. Don’t run it as root if you can avoid it. In my example, the nncp user, which all NNCP commands are run as, has no access to the backup data at all. So even if NNCP were compromised, my backup data wouldn’t be. For even more security, I could also sign the backup stream with gpg and validate that on the receiving end.

I should note, however, that this conversation assumes that a network- or USB-facing ssh or NNCP is more likely to have an exploitable vulnerability than is gpg (which here is just processing a stream). This is probably a safe assumption in general. If you believe gpg is more likely to have an exploitable vulnerability than ssh or NNCP, then obviously you wouldn’t take this particular approach.

On the zfs side, the use of -F with zfs receive is avoided; this could lead to a compromised backed-up machine generating a malicious rollback on the destination. Backup zpools should be imported with -R or -N to ensure that a malicious mountpoint property couldn’t be used to cause an attack. I choose to use “zfs receive -u -o readonly=on” which is compatible with both unmounted backup datasets and zpools imported with -R (or both). To access the data in a backup dataset, you would normally clone it and access it there.
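
For instance, a hedged sketch of accessing a particular point in time on the backup pool while following those precautions (pool, dataset, and snapshot names are examples):

zpool import -R /backups backuppool       # alternate root, so mountpoint properties can't bite
zfs clone -o readonly=on \
    backuppool/simplesnap/host1/home@2021-06-01 \
    backuppool/restore-host1-home
# browse the clone, then zfs destroy it when finished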

The processing script

So, put this all together and look at an example of a processing script that would run from cron as root and process the incoming ZFS data.

#!/bin/bash
set -e
set -o pipefail

# Log a message
logit () {
   logger -p info -t "`basename "$0"`[$$]" "$1"
}

# Log an error message
logerror () {
   logger -p err -t "`basename "$0"`[$$]" "$1"
}

# Log stdin with the given code.  Used normally to log stderr.
logstdin () {
   logger -p info -t "`basename "$0"`[$$/$1]"
}

# Run command, logging stderr and exit code
runcommand () {
   logit "Running $*"
   if "$@" 2> >(logstdin "$1") ; then
      logit "$1 exited successfully"
      return 0
   else
       RETVAL="$?"
       logerror "$1 exited with error $RETVAL"
       return "$RETVAL"
   fi
}

STORE=backups/simplesnap
INCOMINGDIR=/backups/nncp/incoming

if ! [ -d "$INCOMINGDIR" ]; then
        logerror "$INCOMINGDIR doesn't exist"
        exit 0
fi

LOCKFILE="/backups/nncp/.nncp-backups-zfs-scan.lock"
printf -v EVAL_SAFE_LOCKFILE '%q' "$LOCKFILE"
if dotlockfile -r 0 -l -p "${LOCKFILE}"; then
  logit "Lock obtained at ${LOCKFILE} with dotlockfile"
  trap 'ECODE=$?; dotlockfile -u '"${EVAL_SAFE_LOCKFILE}"'; exit $ECODE' EXIT INT TERM
else
  logit "Could not obtain lock at $LOCKFILE; $0 likely already running."
  exit 0
fi

EXITCODE=0


cd "$INCOMINGDIR"
logit "Scanning queue directory..."
for HOST in *; do
    HOSTPATH="$INCOMINGDIR/$HOST"
    # files like bakfsfmt2-134.13134_dest
    for FILE in "$HOSTPATH"/bakfsfmt2-[0-9]*_?*; do
        if [ ! -f "$FILE" ]; then
            logit "Skipping non-existent $FILE"
            continue
        fi

        # Now, $DEST will be HOST/DEST.  Strip off the @ also.
        DEST="`echo "$FILE" | sed -e 's/^.*backupsfmt2[^_]*_//' -e 's,@,/,g'`"

        if [ -z "$DEST" ]; then
            logerror "Malformed dest in $FILE"
            continue
        fi
        HOST2="`echo "$DEST" | sed 's,/.*,,g'`"
        if [ -z "$HOST2" ]; then
            logerror "Malformed DEST $DEST in $FILE"
            continue
        fi

        if [ ! "$HOST" = "$HOST2" ]; then
            logerror "$DIR: $HOST doesn't match $HOST2"
            continue
        fi

        logit "Processing $FILE to $STORE/$DEST"
            if runcommand gpg -q -d < "$FILE" | runcommand zstdcat | runcommand zfs receive -u -o readonly=on "$STORE/$DEST"; then
                logit "Successfully processed $FILE to $STORE/$DEST"
                runcommand rm "$FILE"
        else
                logerror "FAILED to process $FILE to $STORE/$DEST"
                EXITCODE=15
        fi

Applying These Ideas to Non-ZFS Backups

ZFS backups made our job easier in a lot of ways:

  • ZFS can calculate a diff based on an efficiently-stored previous local state (snapshot or bookmark), rather than a comparison to a remote state (rsync)
  • ZFS "incremental" sends, while less efficient than rsync, are reasonably efficient, sending only changed blocks
  • ZFS receive verifies that the incremental source on the local machine matches the incremental source of the original stream, enforcing ordering
  • Datasets using ZFS encryption can be sent in their encrypted state
  • Incrementals can be done without a full scan of the filesystem

Some of these benefits you just won't get without ZFS (or something similar like btrfs), but let's see how we could apply these ideas to non-ZFS backups. I will explore the implementation of them in a future post.

When I say "non ZFS", I am being a bit vague as to whether the source, the destination, or both systems are running a non-ZFS filesystem. In general I'll assume that neither are ZFS.

The first and most obvious answer is to just tar up the whole system and send that every day. This is, of course, only suitable for small datasets on a fast network. These tarballs could be unpacked on the destination and stored more efficiently via any number of methods (hardlink trees, a block-level deduplicator like borg or rdedup, or even just simply compressed tarballs).

To make the network trip more efficient, something like rdiff or xdelta could be used. A signature file could be stored on the machine being backed up (generated via tee/pee at stream time), and the next run could simply send an rdiff delta over NNCP. This would be quite network-efficient, but still would require reading every byte of every file on every backup, and would also require quite a bit of temporary space on the receiving end (to apply the delta to the previous tarball and generate a new one).
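
Here is a hedged sketch of that cycle using rdiff, with tee capturing the signature at stream time as suggested above. It assumes an initial full tarball has already been shipped and its signature kept locally; exact argument handling varies a bit between rdiff versions, so treat it as illustrative. Paths are examples.

# Nightly, on the machine being backed up: build the full backup stream,
# emit a delta against last night's signature, and record a new signature.
tar -cpf - /home \
  | tee >(rdiff signature > /var/lib/backup/home.sig.new) \
  | rdiff delta /var/lib/backup/home.sig > /var/spool/outbound/home.delta
mv /var/lib/backup/home.sig.new /var/lib/backup/home.sig

# On the backup target, which still holds the previous full tarball:
rdiff patch home-full.tar home.delta home-full.new.tar && mv home-full.new.tar home-full.tar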

Alternatively, a program that generates incremental backup files such as rdup could be used. These could be transmitted over NNCP to the backup server, and unpacked there. While perhaps less efficient on the network -- every file with at least one modified byte would be retransmitted in its entirety -- it avoids the need to read every byte of unmodified files or to have enormous temporary space. I should note here that GNU tar claims to have an incremental mode, but it has a potential data loss bug.

There are also some tools with algorithms that may apply well in this use case: syrep and fssync being the two most prominent examples, though rdedup (mentioned above) and the nascent asuran project may also be combinable with other tools to achieve this effect.

I should, of course, conclude this section by mentioning btrfs. Every time I've tried it, I've run into serious bugs, and its status page indicates that only some of them have been resolved. I would not consider using it for something as important as backups. However, if you are comfortable with it, it is likely to be able to run in more constrained environments than ZFS and could probably be processed in much the same way as zfs streams.

How & Why To Use Airgapped Backups

A good backup strategy needs to consider various threats to the integrity of data. For instance:

  • Building catches fire
  • Accidental deletion
  • Equipment failure
  • Security incident / malware / compromise

It’s that last one that is of particular interest today. A lot of backup strategies are such that if a user (or administrator) has their local account or network compromised, their backups could very well be destroyed as well. For instance, do you ssh from the account being backed up to the system holding the backups? Or rsync using a keypair stored on it? Or access S3 buckets, etc? In many of these schemes, cloud-based or otherwise, it is trivially easy to totally ruin the backups. rsync can be run with --delete (and often is, to prune remotes), S3 buckets can be deleted, etc. And even if you try to lock down an over-network backup to be append-only, there are still vectors for attack (ssh credentials, OpenSSL bugs, etc). In this post, I try to explore how we can protect against these threats and still retain some modern conveniences.

A backup scheme also needs to make a balance between:

  • Cost
  • Security
  • Accessibility
  • Efficiency (of time, bandwidth, storage, etc)

My story so far…

About 20 years ago, I had an Exabyte tape drive, with the amazing capacity of 7GB per tape! Eventually as disk prices fell, I had external disks plugged in to a server, and would periodically rotate them offsite. I’ve also had various combinations of partial or complete offsite copies over the Internet as well. I have around 6TB of data to back up (after compression), a figure that is growing somewhat rapidly as I digitize some old family recordings and videos.

Since I last wrote about backups 5 years ago, my scheme has been largely unchanged; at present I use ZFS for local and to-disk backups and borg for the copies over the Internet.

Let’s take a look at some options that could make this better.

Tape

The original airgapped backup. You back up to a tape, then you take the (fairly cheap) tape out of the drive and put in another one. In cost per GB, tape is probably the cheapest medium out there. But of course it has its drawbacks.

Let’s start with cost. To get a drive that can handle capacities of what I’d be needing, at least LTO-6 (2.5TB per tape) would be needed, if not LTO-7 (6TB). New, these drives cost several thousand dollars, plus they need LVD SCSI or Fibre Channel cards. You’re not going to be hanging one off a Raspberry Pi; these things need a real server with enterprise-style connectivity. If you’re particularly lucky, you might find an LTO-6 drive for as low as $500 on eBay. Then there are tapes. A 10-pack of LTO-6 tapes runs more than $200, and provides a total capacity of 25TB – sufficient for these needs (note that, of course, you need to have at least double the actual space of the data, to account for multiple full backups in a set). A 5-pack of LTO-7 tapes is a little more expensive, while providing more storage.

So all-in, this is going to be — in the best possible scenario — nearly $1000, and possibly a lot more. For a large company with many TB of storage, the initial costs can be defrayed due to the cheaper media, but for a home user, not so much.

Consider that 8TB hard drives can be found for $150 – $200. A pair of them (for redundancy) would run $300-400, and then you have all the other benefits of disk (quicker access, etc.) Plus they can be driven by something as cheap as a Raspberry Pi.

Fancier tape setups involve auto-changers, but then you’re not really airgapped, are you? (If you leave all your tapes in the changer, they can generally be selected and overwritten, barring things like hardware WORM).

As useful as tape is, for this project, it would simply be way more expensive than disk-based options.

Fundamentals of disk-based airgapping

The fundamental thing we need to address with disk-based airgapping is that the machines being backed up have no real-time contact with the backup storage system. This rules out most solutions out there, that want to sync by comparing local state with remote state. If one is willing to throw storage efficiency out the window — maybe practical for very small data sets — one could just send a full backup daily. But in reality, what is more likely needed is a way to store a local proxy for the remote state. Then a “runner” device (a USB stick, disk, etc) could be plugged into the network, filled with queued data, then plugged into the backup system to have the data dequeued and processed.

Some may be tempted to short-circuit this and just plug external disks into a backup system. I’ve done that for a long time. This is, however, a risk, because it makes those disks vulnerable to whatever may be attacking the local system (anything from lightning to ransomware).

ZFS

ZFS is, it should be no surprise, particularly well suited for this. zfs send/receive can send an incremental stream that represents a delta between two checkpoints (snapshots or bookmarks) on a filesystem. It can do this very efficiently, much more so than walking an entire filesystem tree.

Additionally, with the recent addition of ZFS crypto to ZFS on Linux, the replication stream can optionally reflect the encrypted data. Yes, as long as you don’t need to mount them, you can mostly work with ZFS datasets on an encrypted basis, and can directly tell zfs send to just send the encrypted data instead of the decrypted data.
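
A hedged sketch of what that looks like (pool, dataset, and snapshot names are examples); -w requests a raw send, so the encrypted blocks go out exactly as they are stored:

# On the machine being backed up: write the raw incremental to the runner device.
zfs send -w -i tank/home@2021-06-01 tank/home@2021-06-02 > /runner/home-incr.zfs

# Later, on the airgapped backup machine:
zfs receive -u backuppool/home < /runner/home-incr.zfs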

The downside of ZFS is the resource requirements at the destination, which in terms of RAM are higher than most of the older Raspberry Pi-style devices can provide. Still, one could perhaps just save off zfs send streams and restore them later if need be, but that implies a periodic resend of a full stream, an inefficient operation. Deduplicating software such as borg could be used on those streams (though with less effectiveness if they’re encrypted).

Tar

Perhaps surprisingly, tar in listed incremental mode can solve this problem for non-ZFS users. It will keep a local cache of the state of the filesystem as of the time of the last run of tar, and can generate new tarballs that reflect the changes since the previous run (even deletions). This can achieve a similar result to the ZFS send/receive, though in a much less elegant way.
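
A hedged sketch of the tar approach (paths are examples); the first run with a fresh snapshot file produces a full backup, and each later run produces an incremental relative to it:

# The .snar file is tar's local cache of filesystem state; keep it on the
# machine being backed up, not on the runner.
tar --create \
    --listed-incremental=/var/lib/backup/home.snar \
    --file=/runner/home-$(date +%Y%m%d).tar \
    /home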

Bacula / Bareos

Bacula (and its fork Bareos) both have support for a FIFO destination. Theoretically this could be used to queue up data for transfer to the airgapped machine. This support is very poorly documented in both and is rumored to have bitrotted, however.

rdiff and xdelta

rdiff and xdelta can be used as sort of a non-real-time rsync, at least on a per-file basis. Theoretically, one could generate a full backup (with tar, ZFS send, or whatever), take an rdiff signature, and send over the file while keeping the signature. On the next run, another full backup is piped into rdiff, and on the basis of the signature file of the old and the new data, it produces a binary patch that can be queued for the backup target to update its stored copy of the file.

This leaves history preservation as an exercise to be undertaken on the backup target. It may not necessarily be easy and may not be efficient.

rsync batches

rsync can be used to compute a delta between two directory trees and express this as a single-file batch that can be processed by a remote rsync. Unfortunately this implies the sender must always keep an old tree around (barring a solution such as ZFS snapshots) in order to compute the delta, and of course it still implies the need for history processing on the remote.
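
A hedged sketch of rsync's batch mode (paths are examples): the sender diffs the live tree against a retained copy, writes the changes as a batch file on the runner, and the backup target replays the batch against its own copy.

# On the machine being backed up (this also updates the retained copy):
rsync -a --write-batch=/runner/home.batch /home/ /srv/last-sync/home/

# On the backup target:
rsync -a --read-batch=/runner/home.batch /backups/home/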

Getting the Data There

OK, so you’ve got an airgapped system, some sort of “runner” device for your sneakernet (USB stick, hard drive, etc). Now what?

Obviously you could just copy data onto the runner and move it back off at the backup target. But a tool like NNCP (sort of a modernized UUCP) offers a lot of help in automating the process, returning error reports, etc. NNCP can be used online over TCP, over reliable serial links, over ssh, with offline onion routing via intermediaries or directly, etc.

Imagine having an airgapped machine at a different location you go to frequently (workplace, friend, etc). Before leaving, you put a USB stick in your pocket. When you get there, you pop it in. It’s despooled and processed while you wait, and return emails or whatever are queued up to be sent when you get back home. Not bad, eh?

Future installment…

I’m going to try some of these approaches and report back on my experiences in the next few weeks.

Backing up every few minutes with simplesnap

I’ve written a lot lately about ZFS, and one of its very nice features is the ability to make snapshots that are lightweight, space-efficient, and don’t hurt performance (unlike, say, LVM snapshots).

ZFS also has “zfs send” and “zfs receive” commands that can send the content of the snapshot, or a delta between two snapshots, as a data stream – similar in concept to an amped-up tar file. These can be used to, for instance, very efficiently send backups to another machine. Rather than having to stat() every single file on a filesystem as rsync has to, it sends effectively an intelligent binary delta — which is also intelligent about operations such as renames.

Since my last search for backup tools, I’d been using BackupPC for my personal systems. But since I switched them to ZFS on Linux, I’ve been wanting to try something better.

There are a lot of tools out there to take ZFS snapshots and send them to another machine, and I summarized them on my wiki. I found zfSnap to work well for taking and rotating snapshots, but I didn’t find anything that matched my criteria for sending them across the network. It seemed par for the course for these tools to think nothing of opening up full root access to a machine from others, whereas I would much rather lock it down with command= in authorized_keys.

So I wrote my own, called simplesnap. As usual, I wrote extensive documentation for it as well, even though it is very simple to set up and use.

So, with BackupPC, a backup of my workstation took almost 8 hours. (Its “incremental” might take as few as 3 hours) With ZFS snapshots and simplesnap, it takes 25 seconds. 25 seconds!

So right now, instead of backing up once a day, I back up once an hour. There’s no reason I couldn’t back up every 5 minutes, in fact. The data consumes less space, is far faster to manage, and doesn’t require a nightly hours-long cleanup process like BackupPC does — zfs destroy on a snapshot just takes a few seconds.

I use a pair of USB disks for backups, and rotate them to offsite storage periodically. They simply run ZFS atop dm-crypt (for security) and it works quite well even on those slow devices.

Although ZFS doesn’t do file-level dedup like BackupPC does, and the lz4 compression I’ve set ZFS to use is less efficient than the gzip-like compression BackupPC uses, still the backups are more space-efficient. I am not quite sure why, but I suspect it’s because there is a lot less metadata to keep track of, and perhaps also because BackupPC has to store a new copy of a file if even a byte changes, whereas ZFS can store just the changed blocks.

Incidentally, I’ve packaged both zfSnap and simplesnap for Debian and both are waiting in NEW.

rdiff-backup, ZFS, and rsync scripts

rdiff-backup vs. ZFS

As I’ve been writing about backups, I’ve gone ahead and run some tests with rdiff-backup. I have been using rdiff-backup personally for many years now — probably since 2002, when I packaged it up for Debian. It’s a nice, stable system, but I always like to look at other options for things every so often.

rdiff-backup stores an uncompressed current mirror of the filesystem, similar to rsync. History is achieved by the use of compressed backwards binary deltas generated by rdiff (using the rsync algorithm). So, you can restore the current copy very easily — a simple cp will do if you don’t need to preserve permissions. rdiff-backup restores previous copies by applying all necessary binary deltas to generate the previous version.

Things I like about rdiff-backup:

  1. Bandwidth-efficient
  2. Reasonably space-efficient, especially where history is concerned
  3. Easily scriptable and nice CLI
  4. Unlike tools such as duplicity, there is no need to periodically run full backups — old backups can be deleted without impacting the ability to restore more current backups

Things I don’t like about it:

  1. Speed. It can be really slow. Deleting 3 months’ worth of old history takes hours. It has to unlink vast numbers of files — and that’s pretty much it, but it does it really slowly. Restores, backups, etc. are all slow as well. Even just getting a list of your increment sizes so you’d know how much space would be saved can take a very long time.
  2. The current backup copy is stored without any kind of compression, which is not at all space-efficient
  3. It creates vast numbers of little files that take forever to delete or summarize

So I thought I would examine how efficient ZFS would be. I wrote a script that would replay the rdiff-backup history — first it would rsync the current copy onto the ZFS filesystem and make a ZFS snapshot. Then each previous version was processed by my script (rdiff-backup’s files are sufficiently standard that a shell script can process them), and a ZFS snapshot created after each. This lets me directly compare the space used by rdiff-backup to that used by ZFS using actual history.

I enabled gzip-3 compression and block dedup in ZFS.

My backups were nearly 1TB in size and the amount of space I had available for ZFS was roughly 600GB, so I couldn’t test all of them. As it happened, I tested the ones that were the worst-case scenario for ZFS: my photos, music collection, etc. These files had very little duplication and very little compressibility. Plus a backup of my regular server that was reasonably compressible.

The total size of the data backed up with rdiff-backup was 583 GB. With ZFS, this came to 498GB. My dedup ratio on this was only 1.05 (meaning 5% or 25GB saved). The compression ratio was 1.12 (60GB saved). The combined ratio was 1.17 (85GB saved). Interestingly 498 + 85 = 583.

Remember that the data under test here was mostly a worst-case scenario for ZFS. It would probably have done better had I had the time to throw the rest of my dataset at it (such as the 60GB backup of my iPod, which would have mostly deduplicated with the backup of my music server).

One problem with ZFS is that dedup is very memory-hungry. This is common knowledge and it is advertised that you need to use roughly 2GB of RAM per TB of disk when using dedup. I don’t have quite that much to dedicate to it, so ZFS got VERY slow and thrashed the disk a lot after the ARC grew to about 300MB. I found some tweakables in zfsrc and the zfs command that let me tweak the ARC cache to grow bigger. But the machine in question only has 2GB RAM, and is doing lots of other things as well, so this barely improved anything. Note that this dedup RAM requirement is not out of line with what is expected from these sorts of solutions.

Even if I got absolutely stellar dedup ratio of 2:1, that would get me at most 1TB. The cost of buying a 1TB disk is less than the cost of upgrading my system to 4GB RAM, so dedup isn’t worth it here.

I think the lesson is: think carefully about where dedup makes sense. If you’re storing a bunch of nearly-identical virtual machine images — the sort of canonical use case for this — go for it. A general fileserver — well, maybe you should just add more disk instead of more RAM.

Then that raises the question: if I don’t need dedup from ZFS, do I bother with it at all, or just use ext4 and LVM snapshots? I think ZFS still makes sense, given its built-in support for compression and very fast snapshots — LVM snapshots are known to cause serious degradation to write performance once enabled, which ZFS doesn’t.

So I plan to switch my backups to use ZFS. A few observations on this:

  1. Some testing suggests that the time to delete a few months of old snapshots will be a minute or two with ZFS compared to hours with rdiff-backup.
  2. ZFS has shown itself to be more space-efficient than rdiff-backup, even without dedup enabled.
  3. There are clear performance and convenience wins with ZFS.
Backup Scripts

So now comes the question of backup scripts. rsync is obviously a pretty nice choice here — and if used with --inplace perhaps even will play friendly with ZFS snapshots even if dedup is off. But let’s say I’m backing up a few machines at home, or perhaps dozens at work. There is a need to automate all of this. Specifically, there’s a need to:

  1. Provide scheduling, making sure that we don’t hammer the server with 30 clients all at once
  2. Provide for “run before” jobs to do things like snapshot databases
  3. Be silent on success and scream loudly via emails to administrators on any kind of error… and keep backing up other systems when there is an error
  4. Create snapshots and provide an automated way to remove old snapshots (or mount them for reading, as ZFS-fuse doesn’t support the .zfs snapshot directory yet)

To date I haven’t found anything that looks suitable. I found a shell script system called rsbackup that does a large part of this, but something about using a script whose homepage is a forum makes me less than 100% confident.

On the securing the backups front, rsync comes with a good-looking rrsync script (inexplicably installed under /usr/share/doc/rsync/scripts instead of /usr/bin on Debian) that can help secure the SSH authorization. GNU rush also looks like a useful restricted shell.
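
For example, here is a hedged sketch of an authorized_keys entry on a machine being backed up, restricting the backup server's key to read-only rsync access of one tree (the key and path are placeholders; on Debian you would first copy the rrsync script somewhere like /usr/local/bin):

command="/usr/local/bin/rrsync -ro /home",no-agent-forwarding,no-port-forwarding,no-pty,no-X11-forwarding ssh-ed25519 AAAA... backup-server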

Bacula

Lately we’ve been looking at backup solutions at work.

And I’ve got to say that Bacula is looking downright awesome. It’s GPL’d and it has just about every feature a person could ask for.

I am a complete Bacula newbie. Today, after using Bacula for a total of about 30-60 minutes, I added the first client machine to my Linux test box. The client machine was running the Windows bacula client. It took about 10 minutes to install and configure the client and the server. And both backup and restore worked perfectly the first time. Nice. Setting up a *nix client is even easier.

I’ve been using Amanda for many years at home and at various workplaces. Looks like we’re going to be switching.

We’ve also ordered an HP MSL4048, a 48-tape LTO3 library with barcode support. Each tape has a native storage capacity of 400GB. Should be nice when it arrives. With that library and Bacula, we should be able to back up all our servers using a single backup system. And both our Windows and Unix people can manage the system, including running restores to any machine, from any authorized console machine.