The Object Storage Repository cannot be used on its own; it must be configured as a Capacity Tier in a Scale-out Backup Repository.
Do not configure any tiering or lifecycle rules on object storage buckets used for Veeam Object Storage Repositories; they are unsupported. Here is why:
- Tiering and lifecycle rules in object storage are based on object age. With Veeam's implementation, however, even a very old object (which holds a Veeam block) can still be relevant for the latest offloaded backup file if the block has not changed between restore points. The object storage vendor cannot know which blocks are still relevant and which are not, and therefore cannot make proper tiering decisions.
- The vendor APIs for the different storage products are not transparent. For example, accessing Amazon S3 and Amazon Glacier requires different APIs. When tiering/lifecycle management is done on the object storage side, the Veeam services are most likely not aware of it and cannot determine how to access the tiered objects.
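The first point can be illustrated with a minimal sketch. The block ids, dates, and data structures below are hypothetical, not Veeam internals; they only show that an object's age says nothing about whether the latest restore point still depends on it:

```python
# Hypothetical model: each restore point lists the block objects it references.
restore_points = {
    "2023-01-01": ["blk-a", "blk-b"],
    "2023-06-01": ["blk-a", "blk-c"],   # blk-a unchanged, still referenced
    "2023-12-01": ["blk-a", "blk-d"],   # blk-a is old but still required
}

# Date each object was first written (what an age-based rule would look at).
first_written = {"blk-a": "2023-01-01", "blk-b": "2023-01-01",
                 "blk-c": "2023-06-01", "blk-d": "2023-12-01"}

# An age-based lifecycle rule tiering objects older than this cutoff
# would move blk-a away, even though the newest backup still needs it.
latest = restore_points["2023-12-01"]
old_but_needed = [b for b in latest if first_written[b] < "2023-06-01"]
print(old_but_needed)  # blk-a would be tiered, breaking the latest backup
```

Only the backup application knows the reference graph between restore points and blocks; the storage layer sees nothing but object ages.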
When using public cloud object storage, always consider all associated costs:
- Putting and modifying data in object storage requires API calls such as PUT, COPY or LIST. These calls are typically priced per thousand or ten thousand requests.
- Data at rest is normally priced per GB per month.
- Never forget the cost of restores. It includes API requests (GET) as well as egress traffic from the cloud datacenter, which can be immense depending on how much data must be pulled from the cloud. Veeam tries to leverage the data blocks available in the Performance Tier to reduce costs, but blocks that are not present there must be downloaded by requesting the related objects.
In general, API costs are only a fraction of the cost of storing and retrieving data during daily operations, so not too much effort should be put into estimating them.