Next maintenance release will improve how the program estimates and displays the completion time for longer file copies.
A brand new version of the copying speed and time estimator is in. It is far more responsive to changes in copying conditions while also producing estimates that are far less jittery.
The new version also has a visually distinct mode for when the program is correcting its estimate due to a change in copying speed.
Significant copying speed-ups are not uncommon, but slow-downs are even more routine. They typically happen a few seconds into the copying, when the write cache fills up and the writing speed drops to the device's native value.
When this happens, the completion time gets pushed back and the UI reacts by highlighting the estimate in a distinct color. It also increases its refresh rate to make the changes more obvious.
The GIF at the top captures this in action with a backup that goes from an NVMe drive to an HDD.
The copying speed first spikes to 1600 MB/s while the write cache is being filled, but then quickly drops to 100 MB/s, the native write speed of the HDD.
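To see why a speed drop like this pushes the completion time back so sharply, here is a hypothetical worked example: a naive ETA computed as remaining bytes divided by the instantaneous speed (the 50 GB figure is an assumption for illustration).

```python
# Naive ETA: remaining bytes / current measured speed.
# The sudden 1600 -> 100 MB/s drop multiplies the ETA by 16.

GB = 1024 ** 3
MB = 1024 ** 2

def naive_eta(bytes_remaining: float, speed_bps: float) -> float:
    """ETA in seconds, computed directly from the instantaneous speed."""
    return bytes_remaining / speed_bps

remaining = 50 * GB

# While the write cache absorbs data, the measured speed is ~1600 MB/s...
eta_cached = naive_eta(remaining, 1600 * MB)

# ...but once the cache fills up and the HDD's native ~100 MB/s takes
# over, the very same calculation pushes the ETA back dramatically.
eta_native = naive_eta(remaining, 100 * MB)

print(f"{eta_cached:.0f}s -> {eta_native:.0f}s")
```

The displayed estimate would jump from about half a minute to over eight minutes in the span of a few seconds, which is exactly the situation the correction mode is for.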
When a change in speed like this happens, the estimator enters a correcting state and starts to gradually adjust the estimate.
Once the copying speed stabilizes, the estimator switches to a locked state and sticks with its most recent prediction. That is, it fixes the ETA and keeps it constant until further notice.
As the copying speed fluctuates, the effective ETA may go a bit up or down, but the estimator will ignore this jitter in favour of keeping the countdown steady.
This ensures that the countdown behaves like a proper countdown, without sudden stalls and accelerations.
The estimator also keeps monitoring the effective ETA to make sure it doesn't drift too far from the fixed estimate. If it does, the estimator initiates a new correction and the process repeats.
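The correcting/locked scheme described above can be sketched as a small state machine. This is a minimal illustration under assumed thresholds; the class name, the 25% drift limit, and the blending rate are all hypothetical, not the program's actual values.

```python
# A two-state ETA estimator sketch: it converges toward the effective
# ETA while "correcting", then freezes the prediction while "locked",
# re-entering correction only if the effective ETA drifts too far.

class EtaEstimator:
    DRIFT_LIMIT = 0.25   # re-correct if effective ETA drifts >25% away
    BLEND = 0.1          # per-sample adjustment rate while correcting

    def __init__(self) -> None:
        self.state = "correcting"
        self.eta = None   # displayed ETA in seconds; the UI would
                          # subtract elapsed time to tick the countdown

    def update(self, effective_eta: float) -> float:
        """Feed the latest effective ETA (remaining bytes / speed)."""
        if self.eta is None:
            self.eta = effective_eta
        if self.state == "locked":
            drift = abs(effective_eta - self.eta) / self.eta
            if drift > self.DRIFT_LIMIT:
                self.state = "correcting"   # drifted too far, re-correct
        if self.state == "correcting":
            # gradually walk the displayed ETA toward the effective one
            self.eta += self.BLEND * (effective_eta - self.eta)
            if abs(effective_eta - self.eta) / max(effective_eta, 1e-9) < 0.02:
                self.state = "locked"       # close enough, fix the estimate
        return self.eta
```

Feeding it a stream of effective-ETA samples produces a value that chases real changes but ignores short-lived jitter while locked.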
So there you have it - our new and shiny ETA estimator. Coming up in Release 80.9.
A broader context here is that even the fastest storage devices complete their IO unevenly. As a result, the raw measurements that go into an estimator will always be noisy, and feeding them in as-is will produce noisy estimates.
For example, here is another (otherwise truly excellent) program running a bulk copy and using an overly simplistic estimator:
This is more distracting than it is helpful.
A good estimator needs to de-noise the data and smooth out the estimates. It also needs to make use of heuristics to further align the result with what a human would expect, e.g. to resist re-adjusting the ETA every chance it gets.
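One common way to de-noise raw speed samples before they reach the ETA calculation is an exponential moving average. The sketch below is a generic textbook filter, not the program's actual one, and the smoothing factor is an arbitrary pick.

```python
# Exponentially weighted moving average over raw speed measurements:
# each new sample nudges the running average rather than replacing it,
# so momentary spikes and dips get damped out.

def smooth(samples, alpha=0.2):
    """Return the EMA-smoothed series for a list of raw speed readings."""
    avg = None
    out = []
    for s in samples:
        avg = s if avg is None else alpha * s + (1 - alpha) * avg
        out.append(avg)
    return out

# Jittery raw readings (MB/s) settle into a far steadier series:
raw = [100, 140, 60, 120, 80, 110, 90]
print([round(v) for v in smooth(raw)])
```

A smaller alpha means heavier smoothing and a laggier response; tuning that trade-off is exactly where the heuristics come in.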
Long story short, calculating an ETA is one of those things that look trivial on the surface but hide quite a bit of depth and complexity in practice. Just something to keep in mind.