Alex Pankratov :
Jun 14, 2013
A crash condition courtesy of @ckasprzak that was triggered by cancelling delta copy immediately after it was started.
An infinite loop condition whereby the backup would repeatedly try to copy the same file - this is courtesy of @pongo and it is related to the "use VSS as needed" option.
An issue with not being able to switch to the service mode.
An issue with not being able to update a running instance of Bvckup.
Modified app mode switching code to work significantly faster on Vista+. Switching involves moving the backup configuration file *and* accumulated delta state from one directory to another. The original version did it by copying the directory, and this could get slow if the backups involved had a lot of delta state. It could literally take well over a minute. In theory, the config directory could simply be _moved_, but Windows provides no guarantee that a move operation will either complete in full or not happen at all. It is not "atomic". However, Vista added a "transacted" variety of the move, and that's just it - an atomic move. So for Vista+ installs and NTFS disks Bvckup now uses this transacted move when switching the app mode. Otherwise it falls back to the copying method.
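For the curious, here is a rough sketch of how the transacted move can be driven. The Win32 functions named (CreateTransaction, MoveFileTransactedW, CommitTransaction) are the real KTM APIs, but the error handling here is heavily simplified and this is not Bvckup's actual code - just an illustration, written as a ctypes sketch that no-ops on non-Windows systems:

```python
import ctypes
import sys

def transacted_move(src, dst):
    # Atomic (transacted) move on Vista+/NTFS via the Kernel Transaction
    # Manager. Returns True on success, False on failure, and None when
    # KTM is unavailable - in which case the caller falls back to the
    # slower copy-then-delete method.
    if sys.platform != "win32":
        return None  # not Windows: fall back to the copying method

    ktmw32 = ctypes.windll.ktmw32
    kernel32 = ctypes.windll.kernel32

    # CreateTransaction(security, uow, options, isolation, flags,
    #                   timeout, description); -1 == INVALID_HANDLE_VALUE
    txn = ktmw32.CreateTransaction(None, None, 0, 0, 0, 0, None)
    if txn == -1:
        return None  # KTM not available, fall back

    try:
        # Move src -> dst within the transaction...
        if not kernel32.MoveFileTransactedW(src, dst, None, None, 0, txn):
            return False
        # ...and make it permanent. Until this commit succeeds, the move
        # is invisible - it either happens in full or not at all.
        return bool(ktmw32.CommitTransaction(txn))
    finally:
        kernel32.CloseHandle(txn)
```

Conceptually that's the whole trick - wrap the move in a kernel transaction, so a crash or a cancel mid-way leaves the source directory untouched.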
"VSS as needed" is now a default option.
"Update/Create" and "Cancel" were swapped in the Backup Job dialog.
The window caption is now appended with "(Administrator)" or "(Running as Service)" when running as an admin or a service respectively.
Reworded Copy What and Deletion sections of the backup configuration dialog. Again.
Added an option of archiving backup copies of deleted items. This is a big one, as I think this is how deletion should ideally be handled.
The two existing options - Keep and Delete - are too black and white. "Keep" has a problem of swelling the backup, especially if there are temporary files involved, like those in AppData\Temp or in a browser cache. It also creates problems when the name of a deleted file is recycled as a directory name and vice versa. "Delete" is obviously just a bit too drastic for real-time backups, though it's a sensible default for daily and weekly backups.
In any case, the third new option (and a new default) is to move deleted files into a directory called "$Archive of Deleted Items (Bvckup)" at the top of the destination folder. The directory name is configurable and I am open to a better default.
When a file or a folder is archived, its name is appended with a timestamp in the form of " (deleted on yyyy-mm-dd at hh-mm)", where y/m/d/h/m are year/month/day/hour/minute respectively. The tag is configurable; I will document its format later.
Once the item's name is tagged, it is moved into the exact same directory under $Archive as it occupied at the destination. For example, if the backup is from C:\Temp to X:\Backup and we are deleting C:\Temp\Foo\bar.txt, then it ends up being
X:\Backup\$Archive\Foo\bar (deleted on ...).txt
Note how Foo's name is *not* timestamped, only bar's is.
To continue the example, let's say C:\Temp\Foo is deleted next. This will cause Foo's backup copy to be moved to
X:\Backup\$Archive\Foo (deleted on...)
So, just to reiterate - only the deleted item's name is time-stamped, not the names of its parent directories. This approach helps avoid name conflicts (e.g. when the same item is deleted more than once) *and* it helps keep the $Archive sensible in the common case when we are routinely deleting files, but keeping the folder structure intact.
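The naming rules above boil down to a small path transformation. Here's a minimal sketch of it - the function name and parameters are mine, not Bvckup's, and it assumes forward-slash relative paths for brevity:

```python
import os
from datetime import datetime

# Default archive directory at the top of the destination (configurable).
ARCHIVE_DIR = "$Archive of Deleted Items (Bvckup)"

def archive_path(dest_root, rel_path, is_dir=False, now=None):
    # Build the archive location for a deleted item. Only the item's own
    # name gets the timestamp tag; its parent directories keep their
    # names. For files, the tag goes in front of the extension.
    now = now or datetime.now()
    tag = now.strftime(" (deleted on %Y-%m-%d at %H-%M)")
    parent, name = os.path.split(rel_path)
    if is_dir:
        tagged = name + tag
    else:
        base, ext = os.path.splitext(name)
        tagged = base + tag + ext
    return os.path.join(dest_root, ARCHIVE_DIR, parent, tagged)
```

So deleting Foo/bar.txt at, say, 10:30 on June 14 yields:

```python
archive_path("X:/Backup", "Foo/bar.txt", now=datetime(2013, 6, 14, 10, 30))
# -> "X:/Backup/$Archive of Deleted Items (Bvckup)/Foo/bar (deleted on 2013-06-14 at 10-30).txt"
```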
A closing note: $Archive is not automatically trimmed or supervised in any other way. I think adding an auto-trimming option is a good idea, but I need some input as to what the criteria should be (age-based? space used going over a threshold?).
And, that's it. Thanks for reading. Have a good weekend, everyone!
Alex Pankratov :
Jun 14, 2013
Oh, and one other thing.
I looked at the memory use issue with (very) large backups and tracked it down to the logging module. The logs themselves aren't that big, certainly not worth hundreds of megs. It's how they are generated.
What happens is that logs are created as a backup progresses, so the memory blocks used for storing individual log messages end up interleaved with the memory used for the backup's needs. When the backup finishes, all backup-related blocks are freed, so the formal memory usage drops down to something like 20-30 megs, with only several MB used by the logs. However, the program's "working set" remains large, because nearly every allocated memory page is sprinkled with a bit of logging data and so it is still pinned in the process memory.
If at this point I clear all in-memory logs, the memory usage reported by Task Manager also drops to the 20-30 MB range.
This is good news. It means that (a) the memory consumption *can* be brought down to a very reasonable range (b) it should be easy to do by simply clearing and rebuilding in-memory logs every now and then. I will try and get this into the next release and we'll go from there.
Jun 16, 2013
I tried to update Server 2011 (from 15 to 16) but it failed - it gave an error, and I think it sent you the error report in the background; if not, let me know what files you need.
I ran it again a second time and it worked OK, with no errors, and it restarted the service fine.
Alex Pankratov :
Jun 17, 2013
@MrG - Yep, I got the error report, thanks.