Finished a piece of code that determines the effective timestamp resolution of a file system.
Every file system keeps track of when a file was created and when it was last modified. These timestamps are natural properties of a file, and they are readily available for inspection in Windows Explorer or any file manager of your choice.
The less obvious aspect of timestamping is that timestamps have a limited resolution, and that this resolution varies between file systems. For example, FAT tracks file modification with a measly 2-second precision. In comparison, NTFS uses 100 ns (nanosecond) precision.
It is very typical for backup software to rely on timestamps to determine if a file has been modified and requires copying. But if the source file sits on an NTFS volume and the backup goes onto a FAT disk, then comparing timestamps directly simply won't work, because the FAT timestamp will be rounded up to a 2-second mark.
In other words, the timestamp granularity needs to be taken into account when comparing timestamps. The question is how to determine what it is exactly.
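The granularity-aware comparison can be sketched as follows. This is a minimal illustration, not the app's actual code; the function name and the hardcoded 2-second tolerance are assumptions for the FAT case:

```python
# Hypothetical sketch: compare two timestamps while allowing for the
# coarser volume's granularity (2 s for FAT modification times).
FAT_MTIME_GRANULARITY_NS = 2_000_000_000  # 2 seconds, in nanoseconds

def same_mtime(src_mtime_ns: int, dst_mtime_ns: int,
               granularity_ns: int = FAT_MTIME_GRANULARITY_NS) -> bool:
    """Treat timestamps as equal if they differ by less than the
    granularity of the coarser of the two volumes."""
    return abs(src_mtime_ns - dst_mtime_ns) < granularity_ns

# An NTFS timestamp and its copy rounded up to the next 2-second mark
# still compare as equal:
print(same_mtime(1_700_000_001_234_567_800, 1_700_000_002_000_000_000))
```

A naive `src_mtime_ns == dst_mtime_ns` check would flag this pair as "modified" on every run and re-copy the file needlessly.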
First of all, the granularity of creation and modification times can differ. FAT has them at 10 ms and 2 s respectively, while NTFS has both at 100 ns.
To complicate matters, there appear to be NAS devices that look like NTFS boxes, but with a granularity that is different from that of the real thing.
So what I did was add code to probe the file system and determine the effective resolution of both timestamps. This involves dropping a temporary file, trying to set its timestamps to this, that and a third value, and seeing what they end up at. From that it's possible to deduce the resolution.
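The probing idea can be sketched like this. It is a simplified, hedged version of the technique described above, not the app's actual implementation: the candidate list, the probe value, and the truncation assumption are all mine. It sets a deliberately "misaligned" timestamp on a temp file, reads back what the file system actually stored, and deduces the granularity from that:

```python
import os
import tempfile

# Assumed candidate resolutions, coarsest first: 2 s, 1 s, 1 ms, 1 us,
# 100 ns, 1 ns (all expressed in nanoseconds).
CANDIDATES_NS = [2_000_000_000, 1_000_000_000, 1_000_000, 1_000, 100, 1]

def probe_mtime_resolution(directory: str) -> int:
    """Drop a temp file in `directory`, stamp it with a value that has
    nonzero digits at every scale, and see what sticks."""
    # Not a multiple of any coarse granularity, so whatever the file
    # system stores reveals how much it truncated.
    probe_ns = 1_111_111_111_111_111_111
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        os.close(fd)
        os.utime(path, ns=(probe_ns, probe_ns))
        stored_ns = os.stat(path).st_mtime_ns
    finally:
        os.remove(path)
    # The stored value is the probe snapped to a multiple of the
    # volume's granularity, so the coarsest candidate that divides it
    # evenly is the effective resolution.
    for g in CANDIDATES_NS:
        if stored_ns % g == 0:
            return g
    return 1

print(probe_mtime_resolution(tempfile.gettempdir()))
```

On a volume with full nanosecond precision the probe round-trips intact and the function reports 1 ns; on a FAT-style volume the stored value lands on a 2-second mark and the function reports 2 s.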
In cases where such probing fails, the app falls back to guessing the resolution from the file system name. It is also possible to override the resolution values from the config, just in case.