Choosing the right Veeam proxy server design for your environment gives you control over the impact on the vSphere infrastructure and over the backup traffic flow. Proxies are the workhorses of the backup infrastructure and are critical to achieving good backup and restore speeds.
When thinking about proxy design, you have to be familiar with the different Transport Modes to understand their limitations and requirements for proxy placement and design.
Based on your chosen transport mode, you might require virtual proxies (Hot-Add, also known as Virtual Appliance Mode) or physical proxies (Direct SAN Access via iSCSI or FC, or Backup from Storage Snapshots).
It is recommended to place the proxy server as close to the source data as possible, with a high-bandwidth connection. The traffic from the source to the proxy is not yet optimized (no compression or deduplication whatsoever), meaning that 100% of the backup data is transferred over this link.
Also, ensure a good connection between proxy and repository: optimized data (normally ~50% of the source data size after compression and deduplication) is transferred over this link.
We recommend the latest supported version of Windows Server OS or supported Linux OS for all proxies.
We see almost no performance difference between Windows and Linux proxies, so the OS decision can be based on other design criteria, for example licensing.
Getting the right amount of processing power (compute resources) is essential to achieving the RTO/RPO defined by the business. In this section, we will outline the recommendations to follow for appropriate sizing.
Proxies have multiple task slots to process VM source data. It is best practice to plan for one physical core (or one vCPU) and 2 GB of RAM for each configured proxy task.
A task processes one VM disk at a time; the CPU/RAM resources are used for inline data reduction and encryption.
Please consider the requirements in the User Guide as minimum requirements. Following the above recommendations allows for growth, and for additional inline processing features or other special job settings that increase RAM consumption.
If the proxy server is used for other roles, such as Gateway Server for SMB shares, EMC DataDomain DDBoost or HPE StoreOnce Catalyst, or if you run the backup repository on the same server, remember to stack the system requirements for all the different components. This means you should provide enough resources for every role assigned to this server, in addition to the proxy role's resources. Please see the related chapter for each component for further details.
Tip: Doubling the proxy server task count will, in general, halve the backup window.
Depending on the infrastructure and source storage performance, these numbers may turn out to be too conservative. We recommend performing a POC to determine the specific numbers for your environment.
- D = Source data in MB
- W = Backup window in seconds
- T = Throughput in MB/s = D/W
- CR = Change rate
- CF = Cores required for full backup = T/100
- CI = Cores required for incremental backup = (T * CR)/25
Our sample infrastructure has the following characteristics:
- 1,000 VMs
- 100TB of consumed storage
- Eight hours backup window
- 10% change rate
By inserting these numbers into the equations above, we get the following results.
- D = 100 TB * 1024 * 1024 = 104,857,600 MB
- W = 8 hours * 3600 seconds = 28,800 seconds
- T = 104,857,600/28,800 = 3,641 MB/s
We use the average throughput to predict how many cores are required to meet the defined SLA when we run a full backup:
CF = T/100 ~ 36 cores
The equation is modified to account for decreased performance for incremental backups with the following result:
CI = (T * CR)/25 ~ 14 cores
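The sizing math above can be sketched as a short calculation. The variable names follow the definitions given earlier; the per-core throughput constants (100 MB/s for full backups, 25 MB/s of changed data for incrementals) are the rule-of-thumb divisors from the formulas:

```python
# Rule-of-thumb proxy core sizing, following the formulas above.
# Assumed constants: one core sustains ~100 MB/s during a full backup
# and ~25 MB/s of changed data during an incremental backup.
source_tb = 100        # D: source data, in TB
window_hours = 8       # W: backup window, in hours
change_rate = 0.10     # CR: daily change rate

d = source_tb * 1024 * 1024      # D in MB -> 104,857,600
w = window_hours * 3600          # W in seconds -> 28,800
t = d / w                        # T: required throughput -> ~3,641 MB/s

cf = t / 100                     # CF: cores for full backups -> ~36.4
ci = t * change_rate / 25        # CI: cores for incrementals -> ~14.6

print(f"T = {t:.0f} MB/s, CF = {cf:.1f} cores, CI = {ci:.1f} cores")
```

Fractional results are rounded to whole cores when choosing hardware, which is where the ~36 and ~14 figures in the text come from.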
As seen above, incremental backups typically have lower compute requirements on the proxy servers compared with full backups.
Considering that each task consumes up to 2 GB of RAM, we get the following result:
36 cores and 72 GB RAM for full backups
- For a physical server, it is recommended to provision at least 18 CPU cores per proxy server. Two physical servers are recommended in this example to avoid a single point of failure.
- For virtual proxy servers, it is recommended to configure multiple proxies with a maximum of 8 vCPUs each to avoid co-stop scheduling issues. Five virtual proxy servers are required in this example.
If we instead size the proxy servers only for incremental backups rather than full backups, we can predict the time required to complete a full backup with the reduced compute resources:
- WS = D/(CI * 100)
- W = WS/3600
Following the previous example, we can calculate how much time is required to complete a full backup using proxy servers sized for incremental backups:
- WS = 104,857,600/(14 * 100) = 74,898 seconds
- W = WS/3600 ~ 21 hours
As seen above, if we size our proxy servers only for incremental backups, every full backup job will exceed the defined backup window; in this case, the full backup takes 21 hours instead of the eight hours defined in the SLA.
If the business can accept this increased backup window for periodical full backups, it is possible to lower the compute requirement by more than 2x and get the following result:
14 cores and 28 GB RAM for incremental backups
If you need to achieve a 2x smaller backup window (four hours), double the resources: 2x the amount of compute power, split across multiple servers. The same rule applies if the change rate is 2x higher (a 20% change rate): to process twice the amount of changed data, the proxy resources must also be doubled.
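Because the core count scales linearly with required throughput, this scaling rule falls directly out of the formula. A minimal illustration (the function name and defaults are ours, not from the product):

```python
def cores_for_full(source_tb, window_hours, mb_per_core=100):
    """Rule-of-thumb core count for a full backup: T / 100."""
    t = source_tb * 1024 * 1024 / (window_hours * 3600)  # throughput, MB/s
    return t / mb_per_core

base = cores_for_full(100, 8)   # ~36.4 cores for the 8-hour window
fast = cores_for_full(100, 4)   # ~72.8 cores for a 4-hour window
print(f"A 4-hour window needs {fast / base:.0f}x the cores")  # 2x
```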
Note: Performance largely depends on the underlying storage and network infrastructure.
Overall, the CPU and RAM resources utilized by backup and replication jobs are typically below 5% (and in many cases below 3%) of all virtualization resources.
Typically, in a virtual environment, proxy servers use four, six or eight vCPUs. Physical proxies can be configured based on price/value or (Windows) licensing requirements. That means up to 8 tasks per virtual proxy and many more tasks per physical proxy (32, 48 or even more than 64).
Note: Parallel processing may also be limited by max concurrent tasks at the repository level.
So, in a virtual-only environment you will have somewhat more proxies, each with a smaller task slot count, while in a physical infrastructure with a good storage connection you will have a very high parallel task count per proxy.
The "sweet spot" in a physical environment is about 20 processing tasks on a proxy server with 20 CPU cores, 48 GB RAM, two 16/32G FC cards for reading data (Direct SAN Access), plus one or two 10 GbE network cards.
Depending on the primary storage system and backup target storage system, any of the following methods can be recommended to reach the best backup performance:
- Running fewer proxy tasks with a higher throughput per task
- Running a higher proxy task count with less throughput per task
As performance depends on multiple factors, such as storage load, connection, firmware level, RAID configuration and access methods, it is recommended to perform a proof of concept to define the optimal configuration and the best possible processing mode.
We have already discussed how to size the resources required for proxy servers, and how many tasks each proxy should have (which largely depends on the choice between virtual and physical proxy servers). It is also important to decide how many proxy servers to deploy, considering additional criteria.
It is not enough to know how many resources we need for the proxy role. Availability is also important, as the failure of a proxy server could prevent jobs from running properly or, even worse, could prevent us from restoring our data. Because of this, it is not recommended to concentrate all the required resources in a single proxy server: a single proxy server is a Single Point of Failure (SPOF).
It’s recommended to deploy at least two proxy servers per site, in order to provide a minimum of availability for this role. Of course, the number of proxy servers also depends on the processing resources and tasks per proxy, as discussed above. Following the previous example, we need 36 CPU cores and 72 GB of RAM to complete a full backup in eight hours. Using that information, we can decide between physical and virtual proxies:
Physical proxy servers
Physical proxy servers are recommended when we want to use Direct SAN Access via iSCSI or FC, or when we want to use Backup from Storage Snapshots. In this case, we would need:
Two physical proxy servers, each with
- Two CPU sockets with 10 cores each (20 cores in total)
- 48 GB of RAM
Virtual proxy servers
Virtual proxy servers are recommended when we have VSAN datastores, or when we have iSCSI/FC storage but don’t have the resources/budget to deploy a physical proxy with Direct SAN Access. A virtual proxy server will use Hot-Add mode, also known as Virtual Appliance Mode.
Note: Hot-Add mode isn’t recommended for NFS Datastores. In that scenario, we recommend using Direct NFS Access.
For the same scenario discussed above, we would need:
Five virtual proxy servers, each with
- Eight vCPUs
- 16 GB of RAM
Note: Please consider that these proxy design recommendations are just that: recommendations. You can use different CPU/memory configurations, as long as you follow the recommendations for proxy sizing, tasks per proxy and availability.
It’s also recommended to consider the resources you will need for restore operations, as some of them also require proxy resources to run. When all your jobs are running and all proxy resources are busy processing them, you won’t have resources left to run, for instance, a Full VM Restore, unless you reserve spare resources for these operations.
Please remember that replication jobs also require proxy resources at both the source and target sites, in addition to the resources required for backup jobs. Consider the resources required for both backup and replication jobs if you want to run both kinds of jobs within the same backup window.
- Transport Modes
- Veeam Backup Proxy
- Limitation of Concurrent Tasks
- Proxy Requirements
- Build - vSphere Proxy