This is part 2 of a tour of Bvckup 2's brand new copying engine, dubbed the ultra copier. Part 1 covered improvements to the engine's bulk copying, and in this post we will look at changes to its incremental file updater, a.k.a. the delta copier.

Delta copying is a way of updating files whereby only blocks that changed since the last update are copied over.
When files change, more often than not they change only here and there, in an incremental fashion. This makes delta copying a godsend for backing up all sorts of large, slowly changing files - from large RAW photos, to database files, to VM images and encrypted file containers.
If you are familiar with rsync, it is based on the same idea and derives the same speed benefits. However, where Bvckup 2 differs from rsync is in how it detects changes.
The recipe is simple - split the file into large-ish blocks, compute block hashes and then see if a hash changed from the last time we looked at the file. If it did, we've got a modified block, so we copy it over.
We still end up reading a file in full, but we save time by being selective with writes, which is usually an expensive thing to do.
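In code, the recipe might look like this - a minimal sketch in Python, where the block size and the hash choice (Blake2b here) are picked for illustration and need not match Bvckup 2's internals:

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # illustrative; the real engine's block size may differ

def block_hashes(path):
    """Read a file in fixed-size blocks and hash each one."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.blake2b(block, digest_size=16).digest())
    return hashes

def changed_blocks(path, old_hashes):
    """Return (indices of blocks whose hash differs from the previous
    run, fresh hashes). Blocks past the end of the old state count
    as changed."""
    new_hashes = block_hashes(path)
    changed = [i for i, h in enumerate(new_hashes)
               if i >= len(old_hashes) or h != old_hashes[i]]
    return changed, new_hashes
```

Only the blocks whose indices come back in `changed` would then be written to the backup copy.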
The delta copier works by saving block hashes between runs and then using these as a reference point on the next update.
This requires only basic IO access to the source and backup copies, plus some local space to store the block hashes. That is, delta copying works with any storage devices that are accessible for reading and writing.
* In comparison, rsync requires a copy of itself to be running on the receiving end (when copying over a network). It doesn't require any interim storage, but it ends up reading both source and target files in full. This makes rsync poorly suited for local copying, and it also explains the "r" in its name.
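Persisting the block hashes between runs can be as simple as writing the digests to a sidecar file. A sketch, assuming a trivial flat format chosen purely for illustration, not Bvckup 2's actual on-disk state:

```python
HASH_SIZE = 16  # bytes per block hash in this sketch

def save_state(path, hashes):
    """Persist block hashes as a flat concatenation of digests."""
    with open(path, "wb") as f:
        for h in hashes:
            f.write(h)

def load_state(path):
    """Load previously saved block hashes; empty list if none exist,
    which simply means every block looks 'changed' on the next run."""
    try:
        with open(path, "rb") as f:
            blob = f.read()
    except FileNotFoundError:
        return []
    return [blob[i:i + HASH_SIZE] for i in range(0, len(blob), HASH_SIZE)]
```

The state file is small relative to the source file - a fixed number of bytes per block - which is the "some local space" mentioned above.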
It is possible that two different blocks of data will have the same hash. This is a so-called hash collision - a false negative in our case: if we ever run into one, we will end up skipping a modified block ... which is something we most certainly don't want to be doing.
Avoiding false negatives comes down to reducing the chances of running into them on real-world data. This is done by using hashes with a strong avalanche effect - a property that causes a hash to change drastically in response to small changes in the input. Using larger hashes is another option. Combining several hashes computed with different algorithms is yet another one.
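The avalanche effect is easy to observe directly: flip a single bit of the input and roughly half of the digest bits change. A quick illustration, using Blake2b as an example of a hash with this property:

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

block   = b"\x00" * 4096
flipped = b"\x01" + b"\x00" * 4095   # same block with a single bit flipped

h1 = hashlib.blake2b(block, digest_size=16).digest()
h2 = hashlib.blake2b(flipped, digest_size=16).digest()

# With a strong avalanche effect, roughly half of the 128 digest bits
# flip in response to the one-bit change in the input.
```

This is exactly what makes it unlikely that a real-world edit to a block leaves its hash unchanged.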
Old delta setup
From its very first public release and up to Release 79 the program used the following delta copying setup:

MD5 + a variant of …

This yields 20 bytes of hash per block, meaning that the delta state of a file is about … of its size.
In addition to hashing individual blocks, the delta copier was also computing a SHA-1 hash of the whole file and storing it with the delta state. The purpose of this hash was to catch cases when no modified blocks were detected, but the full-file hash changed.
In retrospect, this wasn't a very good idea. It added very little in terms of false-negative protection, but it created a serious performance bottleneck.
See, SHA-1 can't be parallelized. In fact, very few cryptographic hash functions can be, and they all expect to be fed data in sequence, one block at a time. This means that the performance of the delta copier was effectively bounded by the speed of SHA-1 hashing on a single CPU core.
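The contrast is easy to see in code: per-block hashes are independent of each other, so they can be farmed out to multiple cores, while a chained full-file hash must consume the file strictly in order. A sketch in Python (where `hashlib` releases the GIL for large enough buffers, so threads genuinely hash in parallel):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def hash_block(block: bytes) -> bytes:
    """Hash one block; no dependency on any other block."""
    return hashlib.blake2b(block, digest_size=16).digest()

def hash_blocks_parallel(blocks, workers=4):
    """Per-block hashes computed concurrently; results keep input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(hash_block, blocks))

def hash_file_sequential(blocks) -> bytes:
    """A chained full-file hash: each update depends on the previous
    internal state, so this cannot be spread across cores."""
    h = hashlib.sha1()
    for block in blocks:
        h.update(block)
    return h.digest()
```

The second function is the shape of the old full-file SHA-1 - no amount of extra cores makes it go faster.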
For mechanical drives it's usually not an issue, because they are quite slow. But SSDs and NVMe drives can be read faster than we can hash what we get from them - in our tests, single-core SHA-1 rates on an Intel i3 @ 2.53GHz, a Xeon E5 v4 @ 2.10GHz and an Intel i5 @ 2.70GHz trailed behind the read rates of a Samsung T3 (USB 3.0) and a Samsung 860 Pro.

* These are sequential read rates for the drives and bulk rates for SHA-1 using a well-optimized implementation.
So the full-file hash was a bottleneck, and it had to be eliminated - which is exactly what the ultra copier in delta mode does.
New delta setup
Starting with Release 79 delta copying uses a larger block size, replaces MD5 with Blake2b and complements it with two weaker, but extremely fast hashes - xxHash and SpookyHash.

The per-block hash size is now 32 bytes - 16 from Blake2b, 8 from xxHash and 8 from Spooky - but the blocks are twice as large, so the delta-state to file-size ratio is actually lower, by about 20% compared to the old setup.
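Conceptually, the new per-block hash is just a concatenation of the three digests. A sketch of the idea - note that xxHash and SpookyHash are not in Python's standard library, so two independently keyed Blake2s digests stand in for them here:

```python
import hashlib

def composite_block_hash(block: bytes) -> bytes:
    """32-byte composite: 16 bytes of Blake2b plus two independent
    8-byte hashes. The keyed Blake2s calls below are stand-ins for
    xxHash and SpookyHash, which aren't in the standard library."""
    part1 = hashlib.blake2b(block, digest_size=16).digest()
    part2 = hashlib.blake2s(block, digest_size=8, key=b"xxhash-standin").digest()
    part3 = hashlib.blake2s(block, digest_size=8, key=b"spooky-standin").digest()
    return part1 + part2 + part3
```

A false negative now requires simultaneous collisions in three unrelated algorithms, which is what makes the combined 32-byte hash a stronger guard than any one of its parts.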
Impact on performance
Block hashing is now noticeably faster. The non-crypto hashes, xxHash and Spooky, run at 10.8 GB/s and 11.7 GB/s respectively. That's gigabytes per second, so these hashes effectively come free of charge. More importantly, we are no longer constricted by the full-file hash, so we can spread Blake2b hashing across several cores and scale its 762 MB/s single-core rate with the thread count.
* Numbers are obviously specific to our test machine, but they give a good idea of the relative performance of the algorithms.
The net effect of these changes is that the delta copier can now saturate the IO of a very fast drive while using just a small handful of hashing threads.
This in turn means that delta copying now comes with virtually no performance impact, even when it ends up copying files in full. It basically goes as fast as a regular bulk copier.
Here's copying a … file from an … drive using the old delta copier vs the new one:
Copying a file afresh now runs at the write speed of the target drive, and updating it afterwards (with little to no changes) runs at the read speed of the source drive.
Needless to say, that's a huge improvement.
Next up is part 3 of the ultra copier series. It covers resuming of interrupted copies and fast error recovery.