Veeam Backup Repository Design

Best Practice

  • Ensure you comply with the 3-2-1 rule
  • Physical Repositories are recommended where possible (ideally combined with proxy role using backup from storage snapshots).
  • Calculate 1 repository core per 3 proxy cores
  • Calculate 4 GB RAM per repository CPU core
  • The recommended minimum for a repository is 2 cores and 8 GB RAM

Repository Server Placement

The Veeam Backup Repository can be located wherever the environment allows it. The most common design has a primary backup on-site and a backup copy off-site.

The key recommendation is to follow the 3-2-1 rule.

  • 3 copies of the data (Production, Backup & Backup-Copy)
  • 2 different media
  • 1 copy off-site
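The rule above is simple enough to encode as a sanity check when reviewing a design. A minimal sketch in Python; the function name and parameters are illustrative, not part of any Veeam tooling:

```python
def complies_with_321(copies: int, media_types: int, offsite_copies: int) -> bool:
    """Check a backup design against the 3-2-1 rule:
    at least 3 copies of the data, on at least 2 different media,
    with at least 1 copy off-site."""
    return copies >= 3 and media_types >= 2 and offsite_copies >= 1

# Production + on-site backup + off-site backup copy, on disk and object storage:
print(complies_with_321(copies=3, media_types=2, offsite_copies=1))  # True
# No off-site copy:
print(complies_with_321(copies=2, media_types=2, offsite_copies=0))  # False
```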

Repository Server Sizing

At the repository, a task slot corresponds to a concurrently processed disk, the number of which was calculated as part of the proxy sizing. See Backup Proxy Design for details.

The quantity of cores required can be calculated by dividing the proxy core count by 3. For example:

Proxy: 20 cores / 3 = 7 cores (rounded up)

The calculation applies to both Per-Job and Per-Machine backup files. The divisor of 3 reflects the fact that CPU-intensive compression is performed on the proxy side, which allows the repository to handle more than one task per core.

To calculate the required RAM, multiply the repository core count by 4 GB. For example:

7 cores * 4 GB = 28 GB RAM

Always use a 64 KB block allocation size. For ReFS-based file systems, the recommendation is to add 0.5 GB RAM per TB of ReFS storage. However, you don't have to scale this indefinitely: 128 GB of RAM is often sufficient for task, OS and ReFS requirements if the total ReFS volume size on the server is below ~200 TB. Depending on the ReFS size or task requirements you may want to add more memory, but there should be no need to go over 256 GB.
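The sizing rules above can be combined into a small calculator. A minimal sketch; the function and its defaults are illustrative, not part of any Veeam tooling:

```python
import math

def repository_sizing(proxy_cores: int, refs_tb: float = 0.0) -> dict:
    """Size a Veeam backup repository from the proxy core count.

    Rules from the text: 1 repository core per 3 proxy cores
    (minimum 2 cores), 4 GB RAM per repository core (minimum 8 GB),
    plus 0.5 GB RAM per TB of ReFS storage, with 256 GB as a
    practical upper bound.
    """
    cores = max(2, math.ceil(proxy_cores / 3))
    ram_gb = max(8, cores * 4) + 0.5 * refs_tb  # ReFS metadata overhead
    return {"cores": cores, "ram_gb": min(ram_gb, 256)}

print(repository_sizing(20))                # 7 cores, 28 GB RAM
print(repository_sizing(20, refs_tb=100))   # 7 cores, 78 GB RAM
```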

A word of caution about ReFS file systems and thin-provisioned LUNs: ReFS does not support TRIM (except on Storage Spaces), so space reclamation will not take place. For this reason it is recommended to use thick-provisioned LUNs with ReFS.

Please see the full list of operations that consume task slots at the bottom of this page.

XFS Considerations

XFS Data Block Sharing (Reflink) provides the same benefits as ReFS in terms of speed and space consumption. Veeam leverages it to implement the Fast Clone functionality. Since all transformation tasks are done via metadata operations, synthetic full backups get a huge performance boost and they don’t take up any additional capacity.

The Linux implementation of XFS is limited to a maximum block size of 4 KB; this shouldn't be an issue, as the filesystem can grow to 1 PB and performance is not affected by the small block size. It also has no impact on Veeam's RAID stripe size recommendation, because the filesystem block size only determines how granularly the filesystem tracks block allocation.

Using LVM with XFS is fine if you need more flexibility for volume extension.

You can also consider XFS/Fast Clone if your data is encrypted, because Veeam itself tracks which metadata/data blocks inside the encrypted backup files correspond to which source data blocks.

One known issue that requires attention is fragmentation of the XFS filesystem, which is inevitable as clone operations are executed regularly. Fragmentation negatively affects read performance: even reads that would normally be sequential effectively become random operations. XFS has its own ways to mitigate fragmentation, such as creating speculative preallocations on copy-on-write, but to keep performance from degrading it is important to use an array with good random-read performance, or better yet SSDs. Tools to defragment XFS exist, but they should be used with caution: the procedure takes a long time to complete and generates a heavy performance hit.

Physical or Virtual?

In general, we recommend using physical machines as repositories whenever possible, in order to maximize performance and maintain a clear separation between the production environment that needs to be protected and the backup storage. It is also recommended to combine this with the proxy role if backup from storage snapshots is possible. This keeps overhead on the virtual environment and network to a minimum.

Virtual Repository Considerations

If you choose to use a virtual machine as a repository or gateway server, keep in mind that the storage, the associated transport media, and the network will be heavily occupied.

Best practice is to avoid using the same storage for backups and production: the loss of that single system would mean losing both copies of the data, production and backups alike.

It is also recommended not to use VMFS-based disks, because if the underlying disk is lost or corrupted, all the backup data is lost as well. Veeam recommends in this case using direct-attached iSCSI, as it can be reattached to another VM, or to a physical system if required. FC via NPIV is not recommended.

The available network and storage bandwidth of the hypervisor needs to be considered when using a virtual repository, because the traffic from every proxy converges on the ESXi host(s) where the repository runs.

Also take into account that, because of DRS (Distributed Resource Scheduler), multiple virtual repositories may end up running on the same hypervisor host. Consider using anti-affinity rules to keep repositories on different hosts and avoid network bottlenecks.

Repository Task slot use reference


  Backup Type               Tasks
  VMware/Hyper-V            One per disk
  Nutanix                   One per VM
  Veeam Agent for Windows   One per disk
  Veeam Agent for Linux     One per machine

Merges and synthetic operations

  Hypervisor       Repository Setting   Tasks
  VMware/Hyper-V   Per-Job              One per job
  VMware/Hyper-V   Per-Machine          One per VM
  Nutanix AHV      Per-Job              One per VM
  Nutanix AHV      Per-Machine          One per VM

Backup Copies

  Hypervisor           Tasks
  VMware/Hyper-V/AHV   One per VM

Capacity-Tier Offload

  Type                     Tasks
  Machines                 One task per backup chain being offloaded
  Database plugins (v11)   One task per plug-in job being offloaded

Enterprise Application Plugins

  Type       Tasks
  SAP HANA   Depends on the services
  RMAN       One per RMAN channel

At restore, plug-ins do not consume task slots, as Veeam cannot control how many restore streams are started; this is by design of the backup applications.

Log backups

  Type                    Tasks
  Log shipping (backup)   One task per machine
  Backup copy job         One task per machine
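For capacity planning, the primary-backup table above can be condensed into a lookup. A minimal sketch; the platform keys and function name are illustrative, and this is not a model of the actual Veeam scheduler:

```python
# Slot unit per platform for primary backups, taken from the reference table.
BACKUP_SLOT_UNIT = {
    "vmware": "disk",
    "hyper-v": "disk",
    "nutanix-ahv": "vm",
    "agent-windows": "disk",
    "agent-linux": "machine",
}

def backup_task_slots(platform: str, vms: int, disks_per_vm: int = 1) -> int:
    """Repository task slots consumed by one backup run."""
    unit = BACKUP_SLOT_UNIT[platform]
    if unit == "disk":
        return vms * disks_per_vm
    return vms  # one slot per VM or machine

print(backup_task_slots("vmware", vms=10, disks_per_vm=2))  # 20
print(backup_task_slots("agent-linux", vms=5))              # 5
```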



Copyright © 2019-2021 Solutions Architects, Veeam Software.