Run-time memory usage is now capped.

Prior to Release 74, the app's memory usage was proportional to the number of items in the backup. This was because both the source and destination file trees were kept in memory in their entirety, as was the backup plan. While not a big deal for smaller backups, this translates into quite a bit of RAM once the file count starts to climb into the millions.

Release 74 implements off-RAM storage for large data sets. In previous releases the run-time arrangement was simple: everything sat in the app's memory for the duration of the run.


With the new release, objects are kept in the app's memory only while they are in active use. The rest trickles down into a swap file for storage, through a couple of intermediate caches.
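To make the layered arrangement more concrete, here is a minimal sketch of its read side. This is not Bvckup 2's actual code - the Item and TieredStore names are made up for illustration - but it shows how a lookup can fall through the tiers: serve the object straight from memory if it is there, otherwise rebuild it from the swap file and promote it back into RAM.

    #include <cstdint>
    #include <memory>
    #include <optional>
    #include <string>
    #include <unordered_map>

    // Hypothetical application data object; stands in for a file-tree node, etc.
    struct Item {
        uint64_t    id;
        std::string payload;
    };

    class TieredStore {
    public:
        // Returns the object, pulling it back from the swap file if it was
        // previously evicted from memory.
        std::shared_ptr<Item> fetch(uint64_t id) {
            if (auto it = objects_.find(id); it != objects_.end())
                return it->second;                    // hot: already in RAM

            if (auto blob = read_from_swap(id)) {     // cold: rebuild from disk
                auto item = std::make_shared<Item>(Item{id, *blob});
                objects_[id] = item;                  // promote back into RAM
                return item;
            }
            return nullptr;                           // never stored at all
        }

    private:
        // Reads the serialized object back via the swap file / page cache.
        // Stubbed out here; the eviction sketch further down covers the write side.
        std::optional<std::string> read_from_swap(uint64_t) { return std::nullopt; }

        std::unordered_map<uint64_t, std::shared_ptr<Item>> objects_;
    };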


The object cache holds fully assembled instances of application data objects, and its size is kept at or below a preset item count.

When the object cache grows too big, the least recently used objects are written out to the swap file. The writes are passed through a typical page cache to coalesce the I/O into larger chunks.
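Here is a rough sketch of the eviction side under the same caveat - the ObjectCache and SwapWriter types are hypothetical, with plain strings standing in for serialized objects. It shows the two mechanisms just described: a cache capped at a preset item count that spills its least recently used entries, and a page-sized staging buffer that coalesces many small writes into fewer, larger ones. Reading evicted objects back would also require tracking their offsets in the swap file, which is omitted here for brevity.

    #include <cstdint>
    #include <fstream>
    #include <iterator>
    #include <list>
    #include <string>
    #include <unordered_map>

    // Stages evicted objects and flushes them in page-sized chunks, so many
    // small objects become one larger sequential write.
    class SwapWriter {
    public:
        explicit SwapWriter(const std::string& path, size_t page_size = 64 * 1024)
            : out_(path, std::ios::binary), page_size_(page_size) {}

        void append(const std::string& blob) {
            buffer_ += blob;
            if (buffer_.size() >= page_size_) flush();   // coalesced write
        }
        void flush() {
            out_.write(buffer_.data(), static_cast<std::streamsize>(buffer_.size()));
            buffer_.clear();
        }

    private:
        std::ofstream out_;
        size_t        page_size_;
        std::string   buffer_;
    };

    // Holds at most max_items objects; anything beyond that is pushed out to
    // the swap file, least recently used first.
    class ObjectCache {
    public:
        ObjectCache(size_t max_items, SwapWriter& swap)
            : max_items_(max_items), swap_(swap) {}

        void put(uint64_t id, std::string value) {
            touch(id);
            items_[id] = std::move(value);
            while (items_.size() > max_items_) evict_lru();
        }

    private:
        void touch(uint64_t id) {                 // move id to the MRU end
            if (auto it = pos_.find(id); it != pos_.end()) lru_.erase(it->second);
            lru_.push_back(id);
            pos_[id] = std::prev(lru_.end());
        }
        void evict_lru() {                        // spill the coldest object
            uint64_t victim = lru_.front();
            lru_.pop_front();
            pos_.erase(victim);
            swap_.append(items_[victim]);         // stage its bytes for disk
            items_.erase(victim);
        }

        size_t      max_items_;
        SwapWriter& swap_;
        std::unordered_map<uint64_t, std::string>                   items_;
        std::list<uint64_t>                                         lru_;
        std::unordered_map<uint64_t, std::list<uint64_t>::iterator> pos_;
    };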

This lunch is obviously not free.

There's a cost to even successful cache lookups and there's most certainly an expense to doing the disk I/O.

The app mitigates this by using aggressive cache optimization (buckets, MRU lookups, etc.), raw I/O caching, and by allowing swap files to be kept in a custom location, presumably on a different physical drive. But, again, all of this starts to really matter only when backups grow into the million-item range.
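As an illustration of the bucket/MRU-lookup idea - again a hypothetical sketch rather than the app's implementation - here is an index where every successful lookup moves the found entry to the front of its hash bucket, so repeated hits on hot items scan only a node or two.

    #include <cstdint>
    #include <list>
    #include <string>
    #include <vector>

    // Hash index with small per-bucket lists; hits are moved to the front of
    // their bucket so the hottest entries are found first.
    class BucketedMruIndex {
    public:
        explicit BucketedMruIndex(size_t bucket_count = 4096)
            : buckets_(bucket_count) {}

        void insert(uint64_t key, std::string value) {
            buckets_[key % buckets_.size()].push_front({key, std::move(value)});
        }

        const std::string* find(uint64_t key) {
            auto& bucket = buckets_[key % buckets_.size()];
            for (auto it = bucket.begin(); it != bucket.end(); ++it) {
                if (it->key == key) {
                    bucket.splice(bucket.begin(), bucket, it);  // move-to-front
                    return &bucket.front().value;
                }
            }
            return nullptr;
        }

    private:
        struct Entry { uint64_t key; std::string value; };
        std::vector<std::list<Entry>> buckets_;
    };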

While reasonable defaults are provided, the cache sizes, the trimming policy and other details are fully configurable through the .ini files. The app may also be set up to log cache/swap performance if required.

This is a big change. It took nearly two months to implement, and it means that Bvckup 2 can now handle backups of unlimited size. To say that it's a very big deal would be an understatement.