OpenStack is more than virtualization, so what?

There is a lot of misunderstanding around OpenStack. Some people equate it with a virtualization management platform, when in fact its capabilities go far beyond that.

OpenStack enables the creation of public and private clouds for both small businesses and large corporations. It powers more than 100 public cloud data centers and thousands of private clouds running over 25 million compute cores. OpenStack is used to deploy virtual machines and other instances that handle various tasks related to managing a cloud environment. It is worth noting that both OpenStack and virtualization management platforms are built on virtualized resources and can discover, report on, and automate processes across multi-vendor environments.

However, while virtualization management platforms make it easy to manipulate the functions of virtual resources, OpenStack uses virtual resources to run combinations of tools. These tools create a cloud environment that meets the National Institute of Standards and Technology's five essential characteristics of cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.

The cloud and virtualization

It is not uncommon to confuse the cloud with virtualization. The two appear similar at first glance, and their roles overlap: both create useful environments from dedicated resources. But, as is usually the case, the devil is in the detail. Virtualization enables the creation of multiple simulated environments or dedicated resources from a single physical hardware system. Clouds, on the other hand, are IT environments that isolate, connect, and share scalable resources across a network. To put it simply: virtualization is a technology, whereas the cloud is an environment.

Cloud infrastructure refers to the hardware and software components, such as servers, storage, networks, virtualization software, services, and management tools, that support the computing requirements of the cloud computing model. It also includes an abstraction layer that virtualizes resources and presents them logically to users through application programming interfaces (APIs), command-line interfaces, or API-enabled graphical interfaces.

If an organization has access to an intranet, the internet, or both, virtualization can then be used to create clouds, although this is not the only option. In virtualization, software called a hypervisor sits on top of the physical hardware and carves out resources that are then made available to virtual machines. These resources can include computing power, storage, or applications, together with the runtime code and resources required to execute them.

If the process ends at this stage, we are dealing only with virtualization. Virtual resources must be allocated to centralized pools before they can be called clouds. Adding a management software layer provides administrative control over the infrastructure, platforms, applications, and data that will be used in the cloud. An automation layer is then added to replace or reduce human interaction in repetitive instructions and processes.

Comparison table:

The role of OpenStack in the environment

Virtualization provides redundancy and high availability built into the infrastructure, but increasing capacity to improve performance is a time-consuming process. Achieving higher performance means scaling up by adding more memory and processors, and the possibilities are limited by the hardware: you can only add as much as the maximum capacity of the physical server allows.

Cloud computing, on the other hand, shifts the focus from owning hardware to consuming shared resources as a service. OpenStack is used to create private and public clouds, i.e., to deliver services to consumers rather than the hardware itself.

Virtualization has been around for many years and offers detailed reference architectures and well-established practices. OpenStack, by contrast, provides a great deal of flexibility, but this comes at a price: responsibility for specific design and operational decisions rests on the shoulders of the person designing the environment.

In addition, supporting OpenStack also requires a different IT infrastructure philosophy involving DevOps: a team of operations engineers and developers working together from design through the development process to production support. The goal of DevOps is to create a culture and environment where software development, testing, and release are faster and more reliable.

It is worth mentioning at this point that OpenStack has become one of the most actively developed open-source projects in the world, with around 130 changes being made daily.

OpenStack and choice of virtualizer

People new to OpenStack can easily get lost in the multitude of components and their variety of settings, partly because updates are issued with great frequency. Although the platform is just over eleven years old, it has already seen 24 releases; the latest, called Xena, saw the light of day at the end of October 2021. OpenStack consists of nine key components: Nova, Swift, Cinder, Neutron, Horizon, Keystone, Heat, Telemetry, and Glance.

OpenStack is a single infrastructure platform for deploying various architectures: bare metal, virtual machines (VMs), graphics processing units (GPUs), or containers. An important role here is played by the Nova (Compute) component, which manages virtual machines from their creation on the hypervisor, through their lifecycle, to their deletion. Through the Ironic service it can also manage physical (bare metal) machines and, to some extent, containers.

Nova can use several hypervisor technologies, although most installations use only one. Using the scheduler's ComputeFilter and ImagePropertiesFilter, however, you can schedule instances onto different hypervisors within the same installation.
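To illustrate the idea, the filtering step can be sketched in a few lines of Python. This is a simplified stand-in for Nova's actual ImagePropertiesFilter, not its real code: the Host class, function name, and example values are made up for the illustration; the principle is that an image can request a hypervisor type and only matching hosts remain candidates for scheduling.

```python
# Simplified sketch of ImagePropertiesFilter-style scheduling (not Nova's
# actual implementation): each host advertises the hypervisor it runs, an
# instance request carries the image's requested hypervisor type, and only
# matching hosts pass the filter.

class Host:
    def __init__(self, name, hypervisor):
        self.name = name
        self.hypervisor = hypervisor  # e.g. "kvm", "qemu", "vmware"

def image_properties_filter(hosts, requested_hv_type):
    """Keep only hosts whose hypervisor matches the image's request.

    If the image sets no hypervisor requirement, all hosts pass,
    mirroring the filter's permissive default behavior.
    """
    if requested_hv_type is None:
        return list(hosts)
    return [h for h in hosts if h.hypervisor == requested_hv_type]

hosts = [Host("node1", "kvm"), Host("node2", "kvm"), Host("node3", "vmware")]
candidates = image_properties_filter(hosts, "vmware")
print([h.name for h in candidates])  # -> ['node3']
```

In a real deployment the equivalent effect comes from enabling the filters in the Nova scheduler configuration and tagging images with a hypervisor-type property, so mixed-hypervisor clouds can route each image to a compatible host.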

Which hypervisor to use with OpenStack?

According to last year's OpenStack User Survey, as many as 94 percent of the platform's users run KVM. This is most often because KVM is configured as the default hypervisor for Nova (Compute).

Second place is held by QEMU, mentioned by 25 percent of respondents. From the Compute service's perspective, the QEMU hypervisor is very similar to KVM: both are controlled through libvirt, support the same feature set, and all KVM-compatible VM images are also compatible with QEMU. The main difference is that QEMU does not use hardware-assisted virtualization and instead emulates the CPU in software. Consequently, QEMU performs worse than KVM and is a poor choice for production deployments.
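In a libvirt-based deployment, the choice between KVM and QEMU typically comes down to a single setting in nova.conf, sketched below. Treat this as an illustrative fragment rather than a complete configuration; exact option placement can vary between OpenStack releases, so consult the documentation for your version.

```ini
# /etc/nova/nova.conf (illustrative fragment)
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
# "kvm" uses hardware-assisted virtualization; "qemu" falls back to
# software emulation, which is noticeably slower.
virt_type = kvm
```

Switching `virt_type` to `qemu` is mainly useful where hardware virtualization extensions are unavailable, such as nested test environments.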

Next in the OpenStack User Survey came bare metal (11 percent), which, unlike QEMU, works very well in production environments. The list of hypervisors used in OpenStack deployments is rounded off by LXC (5 percent), VMware ESXi (5 percent), OpenVZ (1 percent), and Xen (1 percent). In addition to these, OpenStack is also compatible with UML and Virtuozzo.

As you can see, OpenStack users have considerable room for maneuver in their choice of hypervisor. An important factor in choosing the right product is existing use of, or experience with, it in the organization. The hypervisor's functionality, documentation, and the level of community experience around it also play an important role. For protecting data across the multiple hypervisors deployed in an OpenStack infrastructure, however, we recommend taking a look at Storware's solution.


text written by:

Pawel Maczka, CTO at Storware