Object Storage Repository
Object Storage Best Practices
Create a dedicated bucket/container for each repository with its own user and a restrictive ACL
General security best practices apply: create a separate bucket/container and a dedicated set of credentials for each repository.
Separate buckets/containers also keep failure domains smaller and make it easier to apply different retention settings per repository.
Refer to your object storage provider's documentation for security best practices on setting up buckets/containers.
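On Amazon S3, for example, a dedicated bucket and user could be created along the following lines. This is a minimal boto3 sketch: the bucket, region, user, and policy names are placeholders, and the broad `s3:*` permission scoped to the single bucket is an assumption; consult the Veeam documentation for the exact minimal permission set required by VB365.

```python
import json
import boto3

# Placeholder names: adjust to your environment.
BUCKET = "vb365-repo-01"
REGION = "eu-central-1"
USER = "vb365-repo-01-user"

s3 = boto3.client("s3", region_name=REGION)
iam = boto3.client("iam")

# Dedicated bucket used by this one repository only.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Dedicated user whose permissions are restricted to that single bucket.
iam.create_user(UserName=USER)
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Assumption: scope this down further according to Veeam's permission list.
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
        }
    ],
}
iam.put_user_policy(
    UserName=USER,
    PolicyName="vb365-repo-01-access",
    PolicyDocument=json.dumps(policy),
)

# The access keys for this user are then entered in the VB365 repository wizard.
keys = iam.create_access_key(UserName=USER)["AccessKey"]
print(keys["AccessKeyId"])
```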
Cache Repository
When you use an object storage repository, you need to configure a local cache folder. This cache folder stores temporary metadata to reduce the number of storage transactions targeting your object storage. Provide disk space of roughly 1% of your source data size as local cache, with a maximum of 100 GB. During restores, this locally cached metadata eliminates around 90% of the API calls that would otherwise go to the object storage (API calls might increase cost on public cloud). Read more detailed information in the sizing section for Object Storage.
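As a quick illustration of that sizing rule (1% of source data, capped at 100 GB, as stated above), a minimal sketch; the function name is only for illustration:

```python
def recommended_cache_gb(source_data_gb: float) -> float:
    """Suggested local cache size: ~1% of the protected source data, capped at 100 GB."""
    return min(source_data_gb * 0.01, 100.0)

print(recommended_cache_gb(4_000))   # 4 TB of source data  -> 40.0 GB cache
print(recommended_cache_gb(15_000))  # 15 TB of source data -> 100.0 GB (cap applies)
```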
Increase cores for Archiver Appliance for multiple Backup Copy Jobs.
When you run more than one Backup Copy Job at the same time, change the Archiver Appliance VM size from the default to one with more cores to get better performance.
Do not set up tiering policies on the object storage side. This is not supported and will break backup consistency.
Many object storage providers offer tiering policies that move older objects from a high-cost/low-latency storage tier to a low-cost/high-latency tier, e.g. Amazon Glacier or Microsoft Azure Cool Blob storage.
These native tiering policies must not be configured because:
- Archive tiers use different APIs than the “normal” object storage. Data that has been moved is no longer accessible to VB365.
- Even old objects can still be part of the latest restore point. Think of a one-year-old email that is still in your inbox because it contains evergreen information. If items older than a year are tiered to an archive tier, this email becomes inaccessible to Veeam even though it is still expected to be part of yesterday’s backup when a restore is started.
Use Backup Copy Jobs instead to leverage lower-cost object storage tiers for longer retention; a quick way to verify that no lifecycle rules are configured on a bucket is sketched below.
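For S3-compatible storage, one way to double-check that no lifecycle (tiering) rules are active on a repository bucket is to query its lifecycle configuration. A minimal boto3 sketch, with the bucket name as a placeholder:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    cfg = s3.get_bucket_lifecycle_configuration(Bucket="vb365-repo-01")
    # Any rule returned here would tier or expire objects behind VB365's back.
    print("WARNING: lifecycle rules found:", cfg["Rules"])
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
        print("OK: no lifecycle/tiering rules configured on this bucket.")
    else:
        raise
```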
Do not use third-party tools for object storage bucket replication, migration, backup, or restore.
Veeam Backup for Microsoft 365 relies heavily on the metadata of objects stored in object storage. Using external tooling to move or migrate data between different object storage providers or solutions is not supported. Using Veeam Backup & Replication to back up and restore the bucket/container used by Veeam Backup for Microsoft 365 is also not supported.
Do not manually delete anything from the object storage bucket/container, unless you no longer want to use the repository at all (in that case you can delete everything).
Interfering with the Veeam-managed retention of objects in the repository can break the consistency of backups and must therefore be avoided.
External Resources