In action (.gif)
It is now possible to pre-compute the delta copying state for large files when taking over existing backups with Bvckup 2.
The way the delta copying works is that it relies on block hashes saved during the last file update to detect which parts of the file have changed since then, and then copies over only these modified blocks.
This means that the actual delta part of the copying kicks in only on the second update, because when we are copying a file for the first time, we don't have the block hashes yet. Once the file is copied, we have the hashes and are all set to do the delta updating.
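For illustration, the block-hash mechanism can be sketched in a few lines of Python. Everything here is an assumption made for the example — the block size, the SHA-256 hash, and the function names — and not how Bvckup 2 is actually implemented:

```python
import hashlib

BLOCK_SIZE = 32 * 1024  # hypothetical block size, not Bvckup's actual value


def block_hashes(path, block_size=BLOCK_SIZE):
    """Hash a file block by block; this list is the 'delta state'."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            hashes.append(hashlib.sha256(block).digest())
    return hashes


def delta_copy(src, dst, saved_hashes, block_size=BLOCK_SIZE):
    """Re-write only the blocks whose hash changed since the saved state.

    Returns the fresh hash list, i.e. the delta state for the next run.
    """
    new_hashes = []
    with open(src, "rb") as fs, open(dst, "r+b") as fd:
        index = 0
        while True:
            block = fs.read(block_size)
            if not block:
                break
            h = hashlib.sha256(block).digest()
            # Write the block only if it is new or its hash changed
            if index >= len(saved_hashes) or saved_hashes[index] != h:
                fd.seek(index * block_size)
                fd.write(block)
            new_hashes.append(h)
            index += 1
        fd.truncate(fs.tell())  # handle the source having shrunk
    return new_hashes
```

With the state from the previous run in hand, an update that touched one block writes one block, not the whole file.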
This inability to detect block changes on the first run is not a big deal. Most of the time we are adding new files to a backup, and these need to be copied in full anyway, so we do just that and initialize their delta state at the same time.
However, when taking over existing backups, it presents a slight problem.
Consider the case where you spent a day copying a very large file over a slow link to off-site storage. Then, you created a backup job to handle all further updates.
If you were to touch the file now and run the backup, there would be no block hashes yet, so the file would be re-copied in full. If only we could somehow initialize the delta copying state without copying the file...
Enter delta state pre-comp
Release 79.13 adds support for pre-computing block hashes for all files in the backup that qualify for delta copying but do not yet have their delta state.
That is, if you put an existing backup under Bvckup's control, you can now flip a switch, do a quick run and have all files ready for delta copying on their first update.
The pre-comp here assumes that your source is local, so reads are cheap and writes are expensive, which is how a setup should be arranged to make full use of delta copying benefits.
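Conceptually, the pre-comp pass is just the hashing half of a delta copy: read the source, save the block hashes, write nothing to the destination. A minimal sketch, with the block size, hash choice, JSON state file and all names being assumptions of this example rather than Bvckup 2's actual format:

```python
import hashlib
import json

BLOCK_SIZE = 32 * 1024  # hypothetical block size, not Bvckup's actual value


def precompute_delta_state(src_path, state_path, block_size=BLOCK_SIZE):
    """Read an already-backed-up file and save its block hashes, so the
    *next* update can be a delta copy. Only the source is read; not a
    single byte is written to the backup destination.

    Returns the number of blocks hashed.
    """
    hashes = []
    with open(src_path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    # Persist the state next to the job's own data (format is illustrative)
    with open(state_path, "w") as f:
        json.dump({"block_size": block_size, "hashes": hashes}, f)
    return len(hashes)
```

Since this is all reads on the source side, it runs at local disk speed regardless of how slow the link to the destination is.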
The animation above shows taking over a Debian VM backup that was initially created by simply copy-pasting the VM folder in Windows Explorer.
Running the backup executes delta pre-comp for the .vmdk files and patches up "created" times on backup items - all without writing a single byte to the destination.