The support forum

Max number of jobs that can/should be configured?

martbasi :

Jun 07, 2019


I was just curious: what is (or what should be considered) the maximum number of backup jobs that can be configured? I know this is likely hardware-dependent, but I'd still like to get some thoughts on the topic. Is there perhaps a way to estimate how jobs scale in terms of resource (CPU/RAM) demands?

Alex Pankratov :

Jun 07, 2019

If they are going to be run one at a time, there's really no limit, except perhaps things getting hard to manage in the UI because the list grows too long and such.

If they are to be run concurrently, then the rule of thumb is that it's best to arrange them so that no two jobs are using the same device at the same time. This can be done with custom queues (see FAQ). That said, NVMe devices can generally be shared because they _are_ designed for massive parallelism.
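The per-device grouping idea can be sketched roughly like this. This is a hypothetical illustration in Python, not how the app itself works — the job names, drive letters, and the `assign_queues` helper are all invented; in practice the queues are set up in the program's configuration, as described in the FAQ. The sketch greedily puts any two jobs that touch the same device into the same queue, so jobs within a queue run one at a time while separate queues can run concurrently:

```python
def assign_queues(jobs):
    """Greedily group jobs so that any two jobs sharing a device
    land in the same queue (and thus run sequentially), while jobs
    with disjoint device sets end up in separate, concurrent queues."""
    queues = []  # each queue: {"jobs": [names], "devices": set of devices}
    for job in jobs:
        devs = {job["src"], job["dst"]}
        for q in queues:
            if q["devices"] & devs:          # shares a device -> serialize here
                q["jobs"].append(job["name"])
                q["devices"] |= devs
                break
        else:                                 # no conflicts -> new concurrent queue
            queues.append({"jobs": [job["name"]], "devices": set(devs)})
    return queues

# Made-up example jobs: source and destination devices per job.
jobs = [
    {"name": "docs",   "src": "C:", "dst": "D:"},
    {"name": "photos", "src": "C:", "dst": "E:"},   # shares C: with "docs"
    {"name": "sql",    "src": "F:", "dst": "G:"},   # touches neither
]

queues = assign_queues(jobs)
# "docs" and "photos" end up serialized in one queue (both read C:),
# while "sql" gets its own queue and can run alongside them.
```

Note that this greedy pass is deliberately simplistic: a job that conflicts with two existing queues is only merged into the first one, which is good enough for a back-of-the-envelope grouping but not a general graph-coloring solution.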

Once jobs start sharing devices, the performance of each job will degrade, and I suspect that you will see a significant _cumulative_ performance drop with as few as 3-4 competing jobs for HDDs, and perhaps a bit more than that for SSDs.

The only way to tell for sure is to test.

martbasi :

Jun 07, 2019

Thanks, the queuing idea is definitely something to explore. Nice feature.
For anyone searching in the future:
"Grouping jobs into scheduling queues"

In my scenario I have VMs connected to SANs via 10G (hybrid storage: HDD arrays, but with SSD caches), so hopefully there's plenty of performance for my purposes ... onwards to the testing then :)

