The support forum

Advanced bvckup2-engine.ini settings

adrian_green :

Aug 24, 2016

I'm real-time synchronizing about 3 TB of data, comprising up to 10 million files in around 100,000 folders.

Thinking about wear and tear on SSD drives, I note that there are
~scanner-dst and ~scanner-src swp
files written out constantly - clearly to save on RAM.
I see that the default total size of these is set in the ini to 8388608 bytes:
swap.max_pages    8
swap.page_size       1048576

Then there is the alluring swap.location key which has no value.
Can I specify an absolute or relative path with this?

Anyway, I'm going to try symlinking the entire config folder to a non-SSD location.

What do the trim* keys achieve?
eg:
trim_ws          0
trim_ws_cap  8388608
trim_ws_val   4:5

What does prep_net_backups indicate?

Is there a document that describes the keys both in the main config and the profile config? It would be super useful to access it.

For example this key:
conf.scanning.cap_memory_use 1
What happens if it is set to false?  If I have a machine with 32 GB of spare RAM, can I configure bvckup2 to use it?

Alex Pankratov :

Aug 24, 2016

Thinking about wear and tear on SSD drives, I note that there are ~scanner-dst and ~scanner-src swp files written out constantly - clearly to save on ram.


Indeed. Here's a bit of background - https://bvckup2.com/wip/27052015

swap.max_pages and swap.page_size shape the "bulk IO cache" layer.
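If it helps to put numbers on it, the two keys appear to simply multiply into a single in-memory budget - this is an inference from the figures quoted in this thread, not documented behavior:

```python
# Assumed relationship, inferred from this thread: the bulk IO cache
# budget is swap.max_pages * swap.page_size. With the defaults quoted
# above, that gives the 8388608-byte figure mentioned in the question.
max_pages = 8
page_size = 1048576  # 1 MiB

budget = max_pages * page_size
print(budget)  # 8388608 bytes = 8 MiB
```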

Can I specify an absolute or relative path with this?


It should be an absolute path. A relative path will technically work too, but bvckup2 doesn't set up its current directory when it starts up, so where it resolves to is not well-defined.

What do the trim* keys achieve?


This is a pure vanity feature. "ws" stands for "Working Set" [1]. When enabled, this feature forces bvckup2 to periodically trim its own working set. There is absolutely no practical reason to do that; the only effect is that the "Working Set" metric in Process Explorer goes down. Frankly, I don't know why we still haven't removed it from the app - we really should have.

[1] https://msdn.microsoft.com/en-us/library/windows/desktop/cc441804%28v=vs.85%29.aspx

What does prep_net_backups indicate?


This is an option used during at-launch initialization. When enabled (which is the default), it adds all network locations (including mapped drives) of all enabled real-time and periodic backups to the network monitor module.

In particular, this supplies the monitor with share access credentials (as configured in the backup settings), so it will connect shares if they aren't yet accessible.

The network monitor periodically checks all shares on its list, and when one becomes accessible, it pokes the respective backups, which then advance from the "expecting a device" state to the "ready" state.

If prep_net_backups is OFF, the backups are put into the "ready" state directly. Then, if they run while their remote shares are not accessible, the runs will fail.
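As a rough sketch of that polling behavior (the state names follow the terminology above, but the function, data shapes, and accessibility check are invented for illustration - this is not bvckup2's actual code):

```python
# Illustrative states, matching the terminology used above
EXPECTING_DEVICE = "expecting a device"
READY = "ready"

def monitor_pass(backups, is_accessible):
    """One polling pass of a hypothetical network monitor: promote any
    waiting backup whose share has become accessible."""
    for b in backups:
        if b["state"] == EXPECTING_DEVICE and is_accessible(b["share"]):
            b["state"] = READY  # "poke" the backup

backups = [{"share": r"\\nas\data", "state": EXPECTING_DEVICE}]
monitor_pass(backups, lambda share: True)  # the share came online
print(backups[0]["state"])  # -> ready
```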

Is there a document that describes the keys both in the main config and the profile config?


Not yet. Mostly due to not being ready to commit to the INI structure.

conf.scanning.cap_memory_use - what happens if it is set to false?


The swapping version of the tree scanner is disabled and a fully in-memory version is used instead. With 32 GB of RAM you will most certainly want to try this.

Just make sure to *exit the app* before editing any INI files. They are read just once, at launch, and overwritten on exit.

adrian_green :

Aug 25, 2016

Really useful info.
I have set conf.scanning.cap_memory_use to 0 and monitored the service (using Procexp) for a while.
This is a production machine so I'm unwilling to experiment any further, but I'm seeing a marked performance improvement overall.
Memory WS balloons to about 3 GB.  In this case there is only one job.

I feel that an autotuning system might be possible using a memory pressure reading (per job) to minimize swap.  Does that sound possible?

I likely don't know what I'm talking about, but why not speculate...  :-)

Alex Pankratov :

Aug 25, 2016

To be completely honest, I really don't like overly smart software.

It tends to sweep simpler problems under the carpet only to replace them with more contrived ones.

For every case when this sort of dynamic RAM capping will work, there will be a case when it will kick in exactly when one doesn't care about RAM usage, but just wants a backup to complete ASAP.

In your case you should just do something like

    swap.max_pages    64
    swap.page_size       8388608

to limit the in-memory swap cache to half a gig (64 pages × 8 MB = 512 MB), and then perhaps set

    trim.scanner.f.at     10000
    trim.scanner.f.to     9000

    trim.scanner.d.at     10000
    trim.scanner.d.to     9000

    trim.planner.at         10000
    trim.planner.to         9000

These (at, to) pairs control the size of the object caches for files (f), directories (d) and steps in a backup plan. Once a cache reaches [at] items, the least recently used ones are trimmed until the cache is reduced to [to] items.

When an item is trimmed from the object cache, its blob is pushed down to a page in a swap cache, and the page is flushed to disk once the total swap page count goes over swap.max_pages.

I mean... look at this ^ ... this is reasonably simple and predictable.
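For the curious, that two-tier arrangement can be sketched as a toy model (the class, its fields, and the notion of "flushing" a page - here just a counter - are all invented for illustration; page_size counts blobs rather than bytes to keep the sketch small):

```python
from collections import OrderedDict

# Toy model of the scheme described above, not bvckup2's actual code.

class ObjectCache:
    def __init__(self, trim_at, trim_to, page_size, max_pages):
        self.items = OrderedDict()   # object cache, oldest entries first
        self.trim_at, self.trim_to = trim_at, trim_to
        self.page_size = page_size   # blobs per swap page (bytes in reality)
        self.max_pages = max_pages
        self.pages = []              # in-memory swap pages
        self.flushed = 0             # pages pushed out "to disk"

    def touch(self, key, blob):
        self.items[key] = blob
        self.items.move_to_end(key)              # mark most recently used
        if len(self.items) >= self.trim_at:      # cache hit the [at] mark
            while len(self.items) > self.trim_to:
                _, old = self.items.popitem(last=False)  # evict LRU item
                self._swap(old)

    def _swap(self, blob):
        # Push the trimmed blob down into the current swap page
        if not self.pages or len(self.pages[-1]) >= self.page_size:
            self.pages.append([])
        self.pages[-1].append(blob)
        if len(self.pages) > self.max_pages:     # over swap.max_pages
            self.pages.pop(0)                    # "flush" the oldest page
            self.flushed += 1
```

With trim_at=10000 and trim_to=9000 as in the example above, each time a cache hits 10000 items, the 1000 least recently used ones get pushed down into swap pages.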

Now let's say these limits become dynamic, because we impose a limit on their cumulative size across all active jobs. Yes, this will keep overall RAM usage in check, but then the performance is likely to get all moody - both because of the change in the swapping behavior and because running several backups in parallel will saturate the hell out of the IO pipeline. If you have something else running in parallel (that needs the RAM), its performance is likely to suffer as well.

All in all, if you find yourself in need of dynamic RAM thresholds, then the underlying cause is likely that your backup jobs are simply not planned correctly. Fix that and the RAM will appear where needed.

adrian_green :

Aug 26, 2016

Yes, I've been the hapless victim of software dogpiling itself trying to stave off OOM events.
Your argument is convincing and your configuration example makes a lot of sense.  
It's like I'm extracting "The Manual" bit by bit! :-)
I'm going to leave it uncapped (for now) as I have plenty of headroom.
I hope this explanation you've provided is helpful for others.

Thank you!

Alex Pankratov :

Sep 01, 2016

Yeah. OOM... I know this abbreviation. It did some things to my production servers. Gruesome, unspeakable things. I don't like OOM.
