
Backup strategies for Ceph

While Ceph has a built-in replication mechanism, duplicating your entire infrastructure may not be possible – in that case, backup is essential

Ceph is gaining more and more popularity on the market. In highly scalable OpenStack environments it has become a standard to provide Cinder volumes from Ceph RBD. Ceph has also gradually made its way into regular virtualization platforms – RHV/oVirt can use Ceph volumes through Cinder integration, and it is available in other solutions such as Proxmox VE or plain libvirt KVM. Ceph even has support for Windows environments (through iSCSI or CIFS gateways).

Storware Backup and Recovery (formerly vProtect) supports Ceph for libvirt environments (KVM), OpenStack (in both the Disk-attachment and SSH Transfer methods) and Kubernetes/OpenShift (when used as Persistent Volumes). In all of these cases the export process can be described as one of these two approaches:

  • Direct disk attachment of the virtual disk via hypervisor abstraction layer + RBD snap-diff
  • RBD export + RBD-NBD

The latter is also used when you treat Ceph RBD as an independent Storage Provider (separately from the virtual environments).

Backup strategies for Ceph

1. Disk attachment

When Ceph RBD is used as a backend in a virtual environment (such as OpenStack), the disk attachment method means that you use a Proxy VM which requests several operations from the virtualization platform: snapshot creation, volume creation and attachment of each volume to the proxy. In this scenario you can read data directly from the volume without any direct interaction with Ceph itself.

Things become a bit more complex, however, when you need to do an incremental backup. If the hypervisor has no API that tracks changed blocks, you will not be able to export data incrementally without re-reading the whole block device.

In such a case you need to keep the last snapshot in Ceph and access the Ceph RBD API to extract the snapshot difference (between the previous snapshot and the one you created during the next backup session). Note that you still only need to list the changed blocks via the API – the data itself can then be read (now that you know which blocks you need) directly from the volume that has been attached by the hypervisor.
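To make this concrete, here is a minimal Python sketch of that idea using the official python-rbd bindings – the pool, image and snapshot names as well as the attached device path are placeholders, not anything mandated by a particular product:

  # Sketch: list changed extents between two RBD snapshots via librbd,
  # then read only those extents from the volume the hypervisor attached
  # to the proxy VM (device path is a placeholder).
  import rados
  import rbd

  POOL = "volumes"            # placeholder pool name
  IMAGE = "volume-1234"       # placeholder RBD image (e.g. a Cinder volume)
  PREV_SNAP = "backup-prev"   # snapshot left from the previous run
  CURR_SNAP = "backup-curr"   # snapshot created for this run
  ATTACHED_DEV = "/dev/vdb"   # volume attached to the proxy VM

  changed = []                # list of (offset, length) tuples

  def collect(offset, length, exists):
      # exists=False means the extent was discarded/zeroed since PREV_SNAP
      if exists:
          changed.append((offset, length))

  cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
  cluster.connect()
  try:
      ioctx = cluster.open_ioctx(POOL)
      try:
          with rbd.Image(ioctx, IMAGE, snapshot=CURR_SNAP, read_only=True) as img:
              # Ask librbd only for the *list* of changed blocks ...
              img.diff_iterate(0, img.size(), PREV_SNAP, collect)
      finally:
          ioctx.close()
  finally:
      cluster.shutdown()

  # ... and read the actual data from the locally attached block device.
  with open(ATTACHED_DEV, "rb") as dev:
      for offset, length in changed:
          dev.seek(offset)
          data = dev.read(length)
          # hand 'data' over to the backup destination here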

It is worth mentioning that in this scenario the Proxy VM needs network access both to the management network where the hypervisor APIs are exposed and (if incremental backup is needed) to the Ceph cluster.

The disk attachment recovery process starts with a plain volume attached to the proxy VM; once the data has been copied onto it, the volume is reattached to the target VM – there is no need to call Ceph directly in this case.

2. RBD export

Ceph also has a built-in mechanism to export RBD volumes directly. In cases where backups are done from a remote server (without installing any Proxy VM inside the environment being protected), you need a similarly direct approach to the Ceph RBD storage.

In this case you can use the built-in RBD export mechanism to export data directly from the Ceph cluster. Separately from API access to the hypervisors, you need access to the Ceph cluster and export the data from there. Snapshot handling may also require calling Ceph directly in some cases (if the virtualization platform doesn't support snapshots on Ceph volumes).
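As an illustration, a minimal sketch of such a full export using the rbd command line tool – pool, image, snapshot and destination path are placeholders:

  # Sketch: full backup taken directly from the Ceph cluster with the
  # built-in export mechanism (rbd CLI).
  import subprocess

  POOL = "volumes"
  IMAGE = "volume-1234"
  SNAP = "backup-full"
  DEST = "/backup/volume-1234.img"

  # Create a snapshot on the Ceph side (needed when the virtualization
  # platform cannot snapshot the Ceph-backed volume itself).
  subprocess.run(["rbd", "snap", "create", f"{POOL}/{IMAGE}@{SNAP}"], check=True)

  # Export the snapshot contents to a regular file on the backup host.
  subprocess.run(["rbd", "export", f"{POOL}/{IMAGE}@{SNAP}", DEST], check=True)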

RBD exports require network access to the Ceph cluster – it is needed for both full and incremental backups.

And since I mentioned incremental backups – you have two options: you can either use Ceph's export-diff feature, or you can mount the snapshot over NBD (to have a local block device to read from) and use the snap-diff feature to extract the list of blocks you need to read from the NBD device.
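A rough sketch of both options, again with placeholder pool, image and snapshot names:

  import subprocess

  POOL, IMAGE = "volumes", "volume-1234"
  PREV, CURR = "backup-prev", "backup-curr"

  # Option 1: let Ceph produce the incremental stream itself (export-diff).
  subprocess.run(
      ["rbd", "export-diff", "--from-snap", PREV,
       f"{POOL}/{IMAGE}@{CURR}", "/backup/volume-1234.diff"],
      check=True,
  )

  # Option 2: map the snapshot as a local block device over NBD and read
  # only the extents reported as changed (extent list obtained with
  # snap-diff / diff_iterate, as in the earlier sketch).
  dev = subprocess.run(
      ["rbd-nbd", "map", f"{POOL}/{IMAGE}@{CURR}"],
      check=True, capture_output=True, text=True,
  ).stdout.strip()                  # e.g. "/dev/nbd0"
  try:
      with open(dev, "rb") as f:
          pass  # seek()/read() only the changed extents here
  finally:
      subprocess.run(["rbd-nbd", "unmap", dev], check=True)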

Recovery works exactly the opposite way – there is a built-in import mechanism for RBD volumes. In this case you also need direct access to the Ceph cluster.
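A corresponding restore sketch (image names and file paths are again placeholders):

  import subprocess

  POOL, IMAGE = "volumes", "volume-restored"

  # Recreate the image from the full export ...
  subprocess.run(
      ["rbd", "import", "/backup/volume-1234.img", f"{POOL}/{IMAGE}"],
      check=True,
  )

  # ... then replay incremental diffs on top of it, oldest first.
  # Note: import-diff expects the start snapshot referenced in each diff
  # to already exist on the target image.
  subprocess.run(
      ["rbd", "import-diff", "/backup/volume-1234.diff", f"{POOL}/{IMAGE}"],
      check=True,
  )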

3. Independent Ceph RBD Storage Provider

So far we have discussed Ceph in the context of a virtualization platform – which, on top of the data itself, also requires VM metadata to be protected. However, there are cases where Ceph combined with a specific virtualization platform is not supported by the data protection software, or where Ceph is used for completely different purposes – with no virtualization layer at all.

In such cases you can use the Storware (vProtect) Storage Provider feature, which allows you to protect RBD volumes in a similar fashion to VMs or applications – just assign policies to the volumes (which can also be automated) and you can back them up to virtually any storage (file system, object storage or enterprise-grade backup platforms).

The recovery process is also simpler, as Storware imports just the individual volumes (regardless of the Ceph use case).

4. Ceph FS or RGW

And what about Ceph FS or RGW? Storware supports a generic file-system storage provider, which means that you can mount Ceph FS or an RGW bucket (as a file system over S3) on the Storware (vProtect) Node and define protection policies for such storage instances. Both full and incremental backups will work.
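For illustration, a rough sketch of how such mounts could look on the node – the monitor address, client name, bucket, credentials and mount points are all assumptions:

  import subprocess

  # Ceph FS via the kernel client (monitor address, client name and
  # secret file are placeholders).
  subprocess.run(
      ["mount", "-t", "ceph", "mon1.example.com:6789:/", "/mnt/cephfs",
       "-o", "name=backup,secretfile=/etc/ceph/backup.secret"],
      check=True,
  )

  # An RGW bucket mounted as a file system over S3, here with s3fs-fuse
  # (credentials are read from the s3fs password file).
  subprocess.run(
      ["s3fs", "backup-bucket", "/mnt/rgw",
       "-o", "url=https://rgw.example.com", "-o", "use_path_request_style"],
      check=True,
  )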

One important aspect worth commenting on is the way Storware stores the data – in this case it creates an image of the file system being protected, so in the backup destination you will see large image files rather than the individual files that have been backed up.

Wrap up

In the past one had to decide which strategy to choose, but now SSH transfer offers most of that flexibility and efficiency without the need to install anything on the hypervisor.



text written by:

Marcin Kubacki, CSA at Storware