Thursday 02 Apr 2026


Document incremental backup and test restore (ZFS)

How I back up my documents dataset

I use B2 (Backblaze) to store my documents dataset. Every night an incremental daily send is generated from the previous night's snapshot, and every week an incremental is generated directly against my original full send. If I need to restore, I grab the full and the last weekly, then apply the dailies from there.

Here is how my backup appears on B2 using rclone:

2936020620 20260304_documents[full].zfs
  1546260 20260309_documents[weekly][inc].zfs
 16503316 20260320_documents[weekly][inc].zfs
  2368844 20260321_documents[daily][inc].zfs
   811956 20260322_documents[daily][inc].zfs
 16506732 20260322_documents[weekly][inc].zfs
   869204 20260323_documents[daily][inc].zfs
    38132 20260324_documents[daily][inc].zfs
  1059516 20260325_documents[daily][inc].zfs
   935004 20260326_documents[daily][inc].zfs
  1786492 20260327_documents[daily][inc].zfs
  1071804 20260328_documents[daily][inc].zfs
   991740 20260329_documents[daily][inc].zfs
 16459876 20260329_documents[weekly][inc].zfs
    99644 20260330_documents[daily][inc].zfs
  2205524 20260331_documents[daily][inc].zfs
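The listing above can be produced with rclone's ls command (RCLONE_REMOTE, NAME_OF_YOUR_BUCKET and the config path are the same placeholders used in my script):

```sh
# List the backup objects in the bucket with their sizes in bytes
rclone ls RCLONE_REMOTE:NAME_OF_YOUR_BUCKET --config=/path/rclone.conf
```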

Therefore, to get back to 20260331, I download only these backups and restore them in the following order:

  1. 20260304_documents[full].zfs
  2. 20260329_documents[weekly][inc].zfs
  3. 20260330_documents[daily][inc].zfs
  4. 20260331_documents[daily][inc].zfs
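Sketched as zfs receive commands, assuming a hypothetical target dataset name (the real target would be whatever I am restoring into):

```sh
# Receive the full stream first, then each increment in order.
# rust/restored_documents is an assumed dataset name for illustration.
zfs recv -u rust/restored_documents < "20260304_documents[full].zfs"
zfs recv -u rust/restored_documents < "20260329_documents[weekly][inc].zfs"
zfs recv -u rust/restored_documents < "20260330_documents[daily][inc].zfs"
zfs recv -u rust/restored_documents < "20260331_documents[daily][inc].zfs"
```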

Each night the daily version of this script runs, and once a week the weekly version runs.
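The daily script isn't shown in this entry; assuming each daily is an incremental send from the previous night's snapshot, it would be a close variant of the weekly script below:

```sh
#!/bin/sh
# Daily document increment -- a sketch, assuming the snapshot naming
# follows the Daily_YYYYMMDD pattern used by the weekly script.

DAILY=$(date +"%Y%m%d")
YESTERDAY=$(date -d "yesterday" +"%Y%m%d")   # GNU date syntax
RCLONE_REMOTE=RCLONE_REMOTE
BUCKET=NAME_OF_YOUR_BUCKET
RCLONE_CONFIG=/path/rclone.conf

# Snapshot tonight's state, then stream the delta since last night to B2
zfs snapshot rust/my_documents@Daily_${DAILY}
zfs send --raw -i rust/my_documents@Daily_${YESTERDAY} rust/my_documents@Daily_${DAILY} \
| mbuffer -m 256m \
| rclone rcat "${RCLONE_REMOTE}:${BUCKET}/${DAILY}_documents[daily][inc].zfs" \
  --config=${RCLONE_CONFIG}
```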

Weekly:

#!/bin/sh
# Weekly document increment to reduce restore time

DAILY=$(date +"%Y%m%d")
DRIVE=/drive
RCLONE_REMOTE=RCLONE_REMOTE
BUCKET=NAME_OF_YOUR_BUCKET
RCLONE_CONFIG=/path/rclone.conf

###########################################################################################################

# ZFS send of the documents dataset, incremental from the original full bookmark
zfs send --raw -i rust/my_documents#20260304_b2_full rust/my_documents@Daily_${DAILY} \
| mbuffer -m 256m \
| rclone rcat "${RCLONE_REMOTE}:${BUCKET}/${DAILY}_documents[weekly][inc].zfs" \
  --config=${RCLONE_CONFIG}

###########################################################################################################

I use mbuffer for a stable transfer, and rclone rcat to stream the increment straight to B2 without first writing it to disk.

I always specify the rclone config explicitly in scripts.

How to restore my document increment

A backup is no good if it cannot be restored, so I periodically test this.

Download the last [weekly] increment (remembering that the full must already have been received into the test dataset):

Then run the command:

sudo zfs recv -u rust/test_doc_restore < "20260329_documents[weekly][inc].zfs"

If automatic snapshots have been created on the test dataset since the full was received, the incremental receive fails, so I first roll the dataset back to the base snapshot:

sudo zfs rollback -r rust/test_doc_restore@Daily_20260304

To test how ZFS would react to a bit flip, I used a Python script to flip one bit in the downloaded increment. When I tried to restore it, I instantly got this error:

cannot receive incremental stream: checksum mismatch
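The flip itself only takes a few lines of Python; this is a sketch rather than my exact script, and the offsets in the usage comment are arbitrary:

```python
# Flip one bit of a file in place, to simulate silent corruption.

def flip_bit(path, byte_offset, bit):
    """XOR a single bit at byte_offset within the file at path."""
    with open(path, "r+b") as f:
        f.seek(byte_offset)
        original = f.read(1)[0]
        f.seek(byte_offset)
        f.write(bytes([original ^ (1 << bit)]))

# Usage against a downloaded increment (filename from the listing above):
# flip_bit("20260329_documents[weekly][inc].zfs", 1024, 3)
```

Because zfs send streams carry their own checksums, even a single flipped bit is enough for zfs recv to reject the whole stream.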

This is an excellent hands-off method to keep my faster-changing documents dataset backed up; the sends run every night and every week. I back up all my datasets to my other backup NAS weekly, so this B2 backup captures files changed since my last NAS backup occurred.


Backlinks:
index
Journal:Index
Journal:2026:04