The second improvement to the handling of cloud-stored files has to do with gracefully tolerating these files being offline.

Call it an opportunistic backup, if you will. We will try to back these files up if we can, but we won't make much fuss if we can't.
When run, every backup job goes through 3 main phases:

1. Scanning creates file indexes of the source and backup locations.
2. Planning compares the two resulting indexes, finds the differences and compiles a list of basic steps - the backup plan - that, when executed in order, bring the backup in sync with the source.
3. Execution merely goes through the plan and diligently carries out each step.
Having a formal planning phase comes with one important benefit - it allows extending the core backup logic with a minimum of changes, keeping it a very simple, fast and predictable process.
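The scan/plan/execute split described above can be sketched roughly as follows. This is an illustrative model only - the function names, the index shape (path to modification time) and the step tuples are all made up for the example:

```python
def scan(tree: dict) -> dict:
    """Scanning: build an index of path -> modification time."""
    return dict(tree)

def plan(source: dict, backup: dict) -> list:
    """Planning: diff the two indexes into an ordered list of steps."""
    steps = []
    for path, mtime in source.items():
        if path not in backup:
            steps.append(("copy", path))
        elif backup[path] != mtime:
            steps.append(("update", path))
    for path in backup:
        if path not in source:
            steps.append(("delete", path))
    return steps

def execute(steps: list, backup: dict, source: dict) -> None:
    """Execution: apply each step in order; no decisions are made here."""
    for op, path in steps:
        if op == "delete":
            backup.pop(path)
        else:
            backup[path] = source[path]
```

The point of the split is visible in the types: `plan` produces plain data, so extra rules can be bolted on by post-processing the step list without touching `scan` or `execute`.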
In the case of offline files, the opportunistic backup is implemented by simply disabling a scheduled file update when the source file has an offline attribute set.
Simple, which is why the change can be rolled out as part of a smaller update rather than having to wait for the next major release.
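A minimal sketch of that post-planning pass, with hypothetical names - in practice the offline check would query the file system for an offline/placeholder attribute rather than take a callback:

```python
def veto_offline_updates(steps: list, is_offline) -> list:
    # Keep every planned step except updates whose source file
    # is currently offline - those are quietly dropped.
    return [(op, path) for (op, path) in steps
            if not (op == "update" and is_offline(path))]
```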
* The same mechanism is already used by the option that disables file updates when the backup copy is newer than the source. It works the same way - an update is scheduled using the regular file comparison logic, and then an additional check compares file times and disables the planned action if needed.
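The same veto pattern, sketched for the newer-backup-copy case (again with illustrative names, using modification-time lookups in place of real file time queries):

```python
def veto_newer_backups(steps: list, src_mtime, bak_mtime) -> list:
    # Disable a planned update when the backup copy's timestamp
    # is newer than the source's.
    return [(op, path) for (op, path) in steps
            if not (op == "update" and bak_mtime(path) > src_mtime(path))]
```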