
How to Back Up KVM Virtual Machines

If you run a plain KVM hypervisor managed by libvirt, you may still need to implement an agentless backup solution to protect your data. In this article, we briefly describe a few possible scenarios.

Libvirt deployments differ a lot. It is hard to define a “typical” KVM setup, so we describe a few basic scenarios for how data protection can be approached on this platform.

The critical difference between deployments is the virtual machine storage. Not only is a wide range of storage options supported, but the VM metadata can point libvirt at virtually any file or device to use as storage.

While there is a concept of storage pools, which standardizes storage provisioning, we have noticed that in general KVM is either used as part of a larger solution, such as OpenStack (a completely different topic), or the setup is left entirely to the administrator and their imagination.

Let’s focus on several scenarios:

  • file-based storage using RAW/QCOW2 files
  • Ceph RBD-based storage
  • LVM-based storage

All of them can be protected without the need to install anything on the KVM host itself.

RAW/QCOW2 files

File-based storage is used quite extensively in KVM deployments. It is simple and offers several advantages, such as thin provisioning and external snapshots.

The latter can actually be used for backups: when you create an external snapshot with libvirt, regardless of whether the disk is a RAW or QCOW2 file, the result is a new QCOW2 delta file to which the current state of the VM is written (the “active” snapshot).

Now, if you want to create an incremental backup, you just create a new snapshot and copy the delta files produced since the previous “active” snapshot, excluding the new “active” one (which the VM is still writing to). The end result of the backup operation is a chain of QCOW2 files (a RAW file can appear only as the base of such a chain, since every external snapshot, even of a RAW disk, is a QCOW2 file). Such a chain can be consolidated if necessary.
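Below is a minimal sketch of this flow, driving virsh from Python. The domain name (vm1), disk target (vda), snapshot names, and all paths are illustrative assumptions:

```python
import shutil
import subprocess

DOMAIN = "vm1"          # hypothetical domain name
BACKUP_DIR = "/backup"  # hypothetical backup repository

def external_snapshot(name: str, delta_path: str) -> None:
    """Create a disk-only external snapshot; the VM starts writing to delta_path."""
    subprocess.run([
        "virsh", "snapshot-create-as", "--domain", DOMAIN,
        "--name", name,
        "--diskspec", f"vda,file={delta_path}",
        "--disk-only", "--atomic",
    ], check=True)

# Take a new snapshot so writes are redirected to a fresh delta file...
external_snapshot("snap2", "/var/lib/libvirt/images/vm1.snap2.qcow2")
# ...which makes the previous delta stable and safe to copy.
shutil.copy("/var/lib/libvirt/images/vm1.snap1.qcow2", BACKUP_DIR)
```

When the chain grows too long, it can be consolidated with virsh blockcommit (merging deltas down towards the base) or virsh blockpull.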

In newer libvirt versions, you can also use dirty bitmaps (dirty-block maps). This concept can significantly reduce the snapshot burden, as you no longer need to rotate snapshots (which can be quite problematic if you need to migrate the VM, or simply want to remove a snapshot, which requires the data to be merged).

You can create and store checkpoints in the QCOW2 files and use them to list the changes since the last checkpoint. So, instead of fetching delta files, you access the current state of the QCOW2 file and read only the blocks that the dirty bitmap reports as changed.
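A rough sketch of this workflow, assuming libvirt 6.0+ with the QEMU driver and an illustrative domain name:

```python
import subprocess

DOMAIN = "vm1"  # hypothetical domain name

# Create a named checkpoint; libvirt starts tracking changed blocks
# in a dirty bitmap stored inside the QCOW2 file.
subprocess.run(["virsh", "checkpoint-create-as", DOMAIN, "--name", "ckpt1"],
               check=True)

# Later, start a backup job that contains only the blocks changed since
# "ckpt1". The details live in a backup XML whose <incremental> element
# names the checkpoint to diff against.
subprocess.run(["virsh", "backup-begin", DOMAIN, "--backupxml", "backup.xml"],
               check=True)
```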

This mechanism is actually used by oVirt, where it is wrapped in the appropriate services. On a regular stand-alone host, you also need to transfer the data to your backup repository yourself, via SSH or other means.

Ceph RBD or other SDS

Ceph RBD is quite a popular Software Defined Storage solution and can easily be used even with a stand-alone KVM host. When using RBD volumes, you gain two advantages.

First, you no longer need to worry about snapshot chains and merging data on the hypervisor side, as you create snapshots directly in Ceph, and they don’t depend on each other.

Secondly, the transfer can occur outside of the hypervisor: Ceph can export blocks very efficiently straight from the cluster to any client that can reach it, so you won’t affect the hypervisor’s performance.

“Changed Block Tracking” can also be implemented easily: instead of using dirty bitmaps, you can leave the snapshot in Ceph and request a snapshot diff when you need to list the changes between two points in time.
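As an illustration (the pool, image, and snapshot names are made up), the rbd CLI covers both the full and the incremental case:

```python
import subprocess

IMAGE = "rbd/vm1-disk0"  # hypothetical pool/image

def run(*args: str) -> None:
    subprocess.run(args, check=True)

# Full backup: snapshot the image and export it, entirely outside the hypervisor.
run("rbd", "snap", "create", f"{IMAGE}@backup1")
run("rbd", "export", f"{IMAGE}@backup1", "/backup/vm1-disk0.full")

# Incremental backup: keep "backup1" in Ceph and export only the blocks
# that changed between it and a newer snapshot.
run("rbd", "snap", "create", f"{IMAGE}@backup2")
run("rbd", "export-diff", "--from-snap", "backup1",
    f"{IMAGE}@backup2", "/backup/vm1-disk0.diff")
```

The resulting diff file can later be replayed onto a restored image with rbd import-diff.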

This approach should work with other SDS solutions as well; you just need to verify whether the corresponding features are supported.

LVM and disk arrays

LVM is generally not common in the KVM world, but you may want to implement this strategy with a disk array or a similar backend. The idea is that you create a snapshot directly on the storage backend, which doesn’t change any reference to the current state in the libvirt configuration.

Once that is done, you can read the data directly from the snapshot. With disk arrays, you would need to expose the snapshot LUN somehow and read the data over the SAN.

One thing that may need your attention is space allocation: classic (non-thin) LVM requires you to specify a snapshot size so that the copied-over state can be written. The snapshot can potentially grow to the size of the volume itself, so make sure you have enough free space available.
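A minimal sketch of a crash-consistent LVM backup; the volume group, logical volume, and snapshot size are illustrative assumptions:

```python
import shutil
import subprocess

LV = "/dev/vg0/vm1-disk0"           # hypothetical LV backing the VM disk
SNAP = "/dev/vg0/vm1-disk0-backup"  # device node of the snapshot created below

def run(*args: str) -> None:
    subprocess.run(args, check=True)

# Create a 10 GiB copy-on-write snapshot; the VM keeps writing to LV,
# while the snapshot presents a frozen point-in-time view.
run("lvcreate", "--snapshot", "--size", "10G",
    "--name", "vm1-disk0-backup", LV)
try:
    # Stream the frozen state into the backup repository.
    with open(SNAP, "rb") as src, open("/backup/vm1-disk0.raw", "wb") as dst:
        shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)
finally:
    run("lvremove", "--yes", SNAP)
```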

Incremental backups could also be implemented using a rolling snapshot mechanism, but you should not expect ready-made mechanisms for CBT or snapshot diffing. If your backend offers one, use it; if not, you would need to scan for changed blocks on every backup, which saves space but is a time-consuming process.
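For completeness, here is what such a naive scan could look like (every name and the chunk size are assumptions): it hashes the snapshot in fixed-size chunks, compares the hashes against those recorded during the previous backup, and yields only the chunks that differ:

```python
import hashlib
import json
from typing import Iterator

CHUNK = 4 * 1024 * 1024  # scan granularity: 4 MiB

def changed_chunks(device: str, old_hashes: list[str],
                   hash_file: str) -> Iterator[tuple[int, bytes]]:
    """Yield (chunk_index, data) for every chunk whose hash has changed."""
    new_hashes = []
    with open(device, "rb") as src:
        index = 0
        while chunk := src.read(CHUNK):
            digest = hashlib.sha256(chunk).hexdigest()
            new_hashes.append(digest)
            if index >= len(old_hashes) or old_hashes[index] != digest:
                yield index, chunk
            index += 1
    # Persist the new hash list for the next incremental run.
    with open(hash_file, "w") as f:
        json.dump(new_hashes, f)
```

Note that this still has to read the entire device, which is exactly the time cost mentioned above.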

Other aspects

It probably goes without saying that you also need the VM metadata: libvirt provides everything as XML, so you can easily dump it and later define a new VM based on its contents.
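In practice this is a single virsh call in each direction (the domain name and paths are illustrative):

```python
import subprocess

# Dump the domain definition next to the disk backups...
xml = subprocess.run(["virsh", "dumpxml", "vm1"],
                     check=True, capture_output=True, text=True).stdout
with open("/backup/vm1.xml", "w") as f:
    f.write(xml)

# ...and on restore, register the VM again from the saved definition.
subprocess.run(["virsh", "define", "/backup/vm1.xml"], check=True)
```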

KVM also supports freezing guest file systems during snapshot creation. It requires the QEMU guest agent to be installed in the guest VM, but it can be an important ingredient of application-consistent backups.
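With the guest agent running, the freeze can be driven explicitly around a snapshot, as in this sketch (domain and snapshot names are illustrative); alternatively, passing --quiesce to virsh snapshot-create-as performs the freeze and thaw automatically:

```python
import subprocess

def run(*args: str) -> None:
    subprocess.run(args, check=True)

# Flush and freeze guest file systems via the QEMU guest agent,
# snapshot while frozen, and always thaw afterwards.
run("virsh", "domfsfreeze", "vm1")
try:
    run("virsh", "snapshot-create-as", "vm1", "--name", "app-consistent",
        "--disk-only", "--atomic")
finally:
    run("virsh", "domfsthaw", "vm1")
```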

Wrap up

Protection of KVM VMs can be implemented in many different ways. In this article, we have covered several options depending on the storage type used by your VMs. Storware Backup & Recovery has supported backups of stand-alone KVM hypervisors almost since its beginning; this includes VMs running on QCOW2/RAW files, Ceph RBD volumes, and LVM volumes. It also allows you to execute custom pre/post-snapshot commands, so you can quiesce your DB over SSH if necessary.

text written by:

Marcin Kubacki, CSA at Storware