Consider the following factors when building a backup repository:
- Write performance
- Read performance
- Data density
- Backup file utilization
As a basic guideline, a repository should be highly resilient, since it hosts customer data. It also needs to be scalable, allowing backups to grow as needed.
Organization policies may require different storage types for backups with different retention. In such scenarios, you may configure two backup repositories:
- A high-performance repository hosting several recent retention points for instant restores and other quick operations
- A repository with more capacity, but using a cheaper and slower storage, storing long-term retention points
You can combine both tiers by setting up a backup copy job from the first repository to the second, or by leveraging a Scale-Out Backup Repository.
It is possible to write one backup file chain per VM on a repository, instead of a regular chain holding data for all the VMs of a given job. This option greatly eases job management, allowing you to create jobs containing many more VMs than single-chain jobs would allow, and it also improves performance thanks to more simultaneous write streams to the repository, even when running a single job.
In addition to optimizing write performance with additional streams to multiple files, there are other positive side effects. When using the forward incremental forever backup mode, you may see improved merge performance. When backup file compacting is enabled, per-machine backup files require less free space: instead of needing enough space to temporarily hold an additional entire full backup file, only free space equal to the largest backup file in the job is required. Parallel processing to tape also performs better, as multiple files can be written to separate tape devices simultaneously.
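The free-space difference for compacting can be illustrated with a quick calculation. The VM sizes below are hypothetical, chosen only to show the contrast between a single per-job chain and per-machine files:

```python
# Hypothetical full backup sizes (TB) for a job protecting four VMs.
# With a single per-job chain, compacting temporarily needs free space
# for one additional copy of the entire job-wide full backup file;
# with per-machine files, only the largest single file must fit.
vm_fulls_tb = [4.0, 2.5, 1.5, 0.5]

single_chain_free = sum(vm_fulls_tb)  # whole job-wide full backup file
per_machine_free = max(vm_fulls_tb)   # largest per-VM backup file only

print(f"Single chain: {single_chain_free} TB free needed for compacting")
print(f"Per-machine:  {per_machine_free} TB free needed for compacting")
```

For this example job, per-machine files cut the temporary free-space requirement from 8.5 TB to 4.0 TB.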
Per-machine backup files are an advanced option for backup repositories, disabled by default for new simple backup repositories. When you enable the option, existing jobs require an active full backup to apply the setting.
Using per-machine backup files slightly increases disk space usage, since with this setting Veeam deduplication operates within each machine's backup files only.
** NOTE: In Scale-Out Backup Repositories, the per-VM backup files option is ENABLED by default. **
Start by configuring three concurrent tasks per CPU core, then adjust based on the load of the server, storage and network.
Depending on the storage used, too many write threads can be counterproductive. For example, a low-end NAS will likely react badly to the high number of parallel write processes created by per-machine backup files; to mitigate this, limit the concurrent tasks on such repositories.
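The sizing rule above can be sketched as a small helper. The per-core multiplier comes from the guideline; the cap value is purely illustrative for a slower storage target:

```python
# Starting-point sizing: three concurrent repository tasks per CPU core,
# optionally capped for slower storage (the cap value is illustrative).
def concurrent_tasks(cpu_cores: int, per_core: int = 3, cap: int = None) -> int:
    tasks = cpu_cores * per_core
    return min(tasks, cap) if cap is not None else tasks

print(concurrent_tasks(8))          # 24 tasks on an 8-core repository server
print(concurrent_tasks(8, cap=12))  # capped at 12 for a low-end NAS
```

Treat the result as a starting point and adjust after observing real load.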
** Note: Remember to account for tasks generated by read operations on backup repositories (such as backup copy jobs). **
Best practice is to keep the backup chain size (the sum of a single full backup and its linked incrementals) under 10 TB (roughly 20 TB of source data).
Remember that very large backup files can become hard to manage. Since Veeam allows a backup chain to be moved from one repository to another with nothing more than a copy/paste of the files themselves, staying within the recommended file sizes enables smooth, simple repository storage migration and better storage-use distribution in Scale-Out Backup Repositories.
Per-machine backup files help here, as the size of each chain depends only on the size of a single VM.
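The guideline above can be turned into a quick pre-check. This sketch assumes the ~2:1 data reduction implied by the "10 TB chain ≈ 20 TB source" rule of thumb and approximates chain size by the compressed full backup; incrementals add on top, so treat results near the limit with care:

```python
# Guideline limit for a single backup chain (full + linked incrementals).
CHAIN_LIMIT_TB = 10.0

def fits_guideline(source_tb: float, reduction: float = 0.5) -> bool:
    # Approximate the chain as the compressed full backup
    # (assumed ~2:1 reduction); incrementals are not modeled here.
    return source_tb * reduction <= CHAIN_LIMIT_TB

print(fits_guideline(20.0))  # True: right at the edge of the guideline
print(fits_guideline(36.0))  # False: split across jobs or use per-machine files
```

A job that fails this check is a candidate for splitting into multiple jobs or for per-machine backup files, which keep each chain bounded by a single VM's size.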
Table of contents
- Block Repositories
- Scale-Out Repositories
- Object Repositories
- NAS Repositories
- Dedup Appliances
- Data Domain
- Veeam Ready