Alex Pankratov :
Jan 30, 2019
Is there any setting I can use, or process steps I can take, to avoid having that happen each time?
No, there's no setting to allow resuming a destination scan after the previous run is cancelled or aborted.
However, if a run goes through once, there is an option to cache the file index and reuse it on the next run without rescanning the destination anew. This is On by default and it's in the "Detecting Changes" section of the backup settings.
That said, your file count is quite modest and the scan should not be taking *hours*. Scanning a location merely builds a list of files; it doesn't actually access or look at the files themselves. Scanning, say, C:\Windows (which has a few hundred thousand files) usually takes a few seconds at most on a local SSD. A similar count on a slower NAS over a not-so-good network connection may be on the order of minutes.
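To check whether the slowness is in the file system itself rather than in the backup software, you could time a raw enumeration of the same location. Below is a rough sketch in Python; it only lists directory entries without reading file contents, which is the same kind of work a scan does. The `ROOT` path is a placeholder you'd point at your own destination.

```python
# Rough benchmark of raw directory enumeration speed.
# This lists entries only -- it does not open or read any files.
import os
import time

ROOT = "."  # placeholder: point this at the destination being scanned

def count_entries(path):
    """Walk the tree iteratively and count every file and directory entry."""
    total = 0
    stack = [path]
    while stack:
        current = stack.pop()
        try:
            with os.scandir(current) as it:
                for entry in it:
                    total += 1
                    if entry.is_dir(follow_symlinks=False):
                        stack.append(entry.path)
        except OSError:
            pass  # skip directories we can't read
    return total

start = time.monotonic()
n = count_entries(ROOT)
print(f"{n} entries in {time.monotonic() - start:.1f}s")
```

If this takes minutes for a modest file count, the bottleneck is below the application: the drive, the network, or something filtering file system requests.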
As usual, I'd first check that there's no antivirus interference, i.e. that no software that might be policing file system requests sits on the path between bvckup2 and the file system.
Second, look at this section of the backup log:
2019.01.30 02:50:00 Running the backup ...
2019.01.30 02:50:00 Preparing ...
2019.01.30 02:50:04 Scanning destination ...
2019.01.30 02:50:04 Setup
2019.01.30 02:50:04 Using 8 threads ...
Check that it is using several threads for scanning. You can also try varying the thread count, including dropping to just 1 thread, in case one of your virtualization layers does something ill-advised like using a single global lock to synchronize all access (not likely, though).
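For intuition on why the thread count matters, here is a sketch of scanning top-level subdirectories in parallel and timing the result at different worker counts. This is an illustration of the general technique, not Bvckup 2's internals; all names are made up for the example.

```python
# Illustrative parallel directory scan: one worker per top-level subdirectory.
import os
import time
from concurrent.futures import ThreadPoolExecutor

def scan_tree(path):
    """Count entries under path, single-threaded."""
    total = 0
    stack = [path]
    while stack:
        current = stack.pop()
        try:
            with os.scandir(current) as it:
                for entry in it:
                    total += 1
                    if entry.is_dir(follow_symlinks=False):
                        stack.append(entry.path)
        except OSError:
            pass  # skip inaccessible directories
    return total

def timed_scan(root, workers):
    """Scan root's top-level subdirectories with a pool of N workers."""
    tops, files = [], 0
    with os.scandir(root) as it:
        for entry in it:
            if entry.is_dir(follow_symlinks=False):
                tops.append(entry.path)
            else:
                files += 1
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        total = files + len(tops) + sum(pool.map(scan_tree, tops))
    return total, time.monotonic() - start

if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        count, secs = timed_scan(".", n)
        print(f"{n} thread(s): {count} entries in {secs:.2f}s")
```

On a high-latency NAS link, more threads usually hide per-request latency and speed the scan up; if a lower thread count is faster in your setup, that points at lock contention somewhere in the storage stack.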
All in all though, file index access in your setup is abnormally, *excruciatingly* slow. I haven't seen anything quite like this, and I've seen my share of weird things :)