Running an extended set of tests to fine-tune the performance of the Bvckup 2 bulk copier.

It's an inherently simple matter of determining an optimal size and count of read/write buffers used to ferry data from source to destination. There are however ... erm ... details.

A source can be local or it can be over the network. There are slow local devices (e.g. a USB2 drive) and there are fast local devices. There is a warm cache, there is a cold cache, and there are files that are cached only partially.
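
For benchmarking purposes, the cache state on the read side can be pinned down explicitly. Here is a minimal Win32 sketch of that, assuming a hypothetical test file and a generic methodology rather than the actual test code:

    #include <windows.h>

    int main(void)
    {
        const wchar_t * src = L"C:\\bench\\source.bin";   /* hypothetical test file */

        /* Cold cache - FILE_FLAG_NO_BUFFERING bypasses the system cache, so
           every read goes to the device; offsets, lengths and buffer addresses
           must then be sector-aligned */
        HANDLE cold = CreateFileW(src, GENERIC_READ, FILE_SHARE_READ, NULL,
                                  OPEN_EXISTING,
                                  FILE_FLAG_NO_BUFFERING | FILE_FLAG_SEQUENTIAL_SCAN,
                                  NULL);

        /* Warm cache - a regular cached open; reading the file once before
           the timed run leaves its contents sitting in the system cache */
        HANDLE warm = CreateFileW(src, GENERIC_READ, FILE_SHARE_READ, NULL,
                                  OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);

        if (cold != INVALID_HANDLE_VALUE) CloseHandle(cold);
        if (warm != INVALID_HANDLE_VALUE) CloseHandle(warm);

        return 0;
    }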

The writes can go out in lots of smaller chunks, all in parallel, or in a smaller number of large buffers. Again, they can go to a local device or over the network. The app can also explicitly flush all buffers after copying or leave flushing to the system.
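
To make the write-side options concrete, here is a minimal Win32 sketch (not the app's actual copying code) that keeps several overlapped writes in flight against the destination and optionally flushes at the end. Buffer size, buffer count and the flush flag are the knobs in question; for brevity, the source data is assumed to already be in memory:

    #include <windows.h>

    /* Writes 'total' bytes from 'data' to 'path' using up to 'nbufs'
       overlapped writes of 'bufsz' bytes each. Error handling trimmed. */
    static BOOL write_out(const wchar_t * path, const char * data,
                          ULONGLONG total,
                          DWORD bufsz, DWORD nbufs, BOOL flush)
    {
        OVERLAPPED ov[16] = { 0 };        /* one slot per in-flight write */
        ULONGLONG  pos = 0;
        DWORD      slot = 0, i;
        HANDLE     f;

        if (nbufs > 16) nbufs = 16;

        f = CreateFileW(path, GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                        FILE_FLAG_OVERLAPPED, NULL);
        if (f == INVALID_HANDLE_VALUE) return FALSE;

        /* manual-reset events, initially signaled, i.e. "slot is free" */
        for (i = 0; i < nbufs; i++)
            ov[i].hEvent = CreateEventW(NULL, TRUE, TRUE, NULL);

        while (pos < total)
        {
            DWORD chunk = (total - pos < bufsz) ? (DWORD)(total - pos) : bufsz;

            /* wait for this slot's previous write to complete */
            WaitForSingleObject(ov[slot].hEvent, INFINITE);

            ov[slot].Offset     = (DWORD)(pos & 0xFFFFFFFF);
            ov[slot].OffsetHigh = (DWORD)(pos >> 32);

            WriteFile(f, data + (size_t)pos, chunk, NULL, &ov[slot]);

            pos += chunk;
            slot = (slot + 1) % nbufs;
        }

        /* drain writes still in flight */
        for (i = 0; i < nbufs; i++)
        {
            WaitForSingleObject(ov[i].hEvent, INFINITE);
            CloseHandle(ov[i].hEvent);
        }

        /* explicit flush vs. leaving it to the system */
        if (flush)
            FlushFileBuffers(f);

        CloseHandle(f);
        return TRUE;
    }

Flushing explicitly trades some throughput for the certainty that the data has actually left the OS and device caches.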

The above graph shows the I/O throughput of a 0.5 GB file being repeatedly copied from a local HDD to a Synology Diskstation over a Gigabit wire. What changes between the runs is the buffering strategy, and, as you may notice, there is a difference.
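
For reference, the per-run numbers behind a graph like this boil down to timing each copy and dividing the file size by the elapsed time. A self-contained sketch of that measurement, using plain CopyFileW as a stand-in for the app's own copier and hypothetical paths:

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        /* hypothetical source and destination; the real tests copy from a
           local HDD to a NAS share and swap in the app's own copier */
        const wchar_t * src = L"C:\\bench\\source.bin";        /* ~0.5 GB file */
        const wchar_t * dst = L"\\\\nas\\share\\copy.bin";
        const double    mb  = 512.0;                           /* its size, MB */

        LARGE_INTEGER freq, t0, t1;
        int run;

        QueryPerformanceFrequency(&freq);

        for (run = 0; run < 10; run++)
        {
            double sec;

            QueryPerformanceCounter(&t0);
            if (! CopyFileW(src, dst, FALSE))
                return 1;
            QueryPerformanceCounter(&t1);

            sec = (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
            printf("run %2d - %6.1f MB/s\n", run + 1, mb / sec);
        }

        return 0;
    }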

Still need to run several over-nighters to gather more statistics, but there are already a couple of things that can be changed in the copying code to make it run faster.

Scheduled to appear in Release 72, coming up shortly.