Lead Image © Sternenhimmel, fotolia.com

OpenNebula – Open source data center virtualization

Reaching for the Stars

Article from ADMIN 17/2013
The OpenNebula enterprise cloud management platform emerged in 2005, so it has been on the market longer than many comparable products. The current version 4.2 (code-named Flame) presents it in a new guise.

OpenNebula [1] separates existing cloud solutions into two application categories – infrastructure provisioning and data center virtualization [2] – and places itself in the latter group. This classification allows for clear positioning compared with other solutions – a topic I will return to later in this article.

What Is OpenNebula?

OpenNebula relies on various established subsystems to provide resources in the areas of virtualization, networking, and storage. This demonstrates a significant difference from alternative solutions like OpenStack and Eucalyptus, both of which favor their own concepts – as exemplified in storage by OpenStack via Swift.

In OpenNebula, these subsystems (Figure 1) are linked by a central daemon (oned). In combination with a user and role concept, the components are exposed through a command-line interface and a web interface. This approach makes host and VM operations independent of the subsystem and allows for transparent control of Xen, KVM, and VMware. Mixed operation of these hypervisors is also possible: OpenNebula hides the specifics of the underlying components behind a uniform interface. This transparent integration of different components is the great strength of OpenNebula.
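This abstraction is easy to see on the command line. The following calls are a minimal sketch (the host and template file names are invented for illustration) and behave the same no matter which hypervisor sits underneath:

  onehost list               # show the registered hypervisor hosts and their state
  onevm list                 # show all VMs, across all hypervisors
  onevm create my-vm.tpl     # instantiate a VM from a template file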

Figure 1: OpenNebula uses existing network virtualization and storage solutions integrated by a central daemon.

Structure

An important feature of OpenNebula is the focus on data center virtualization with an existing infrastructure. The most important requirement is to support a variety of infrastructure components and their dynamic use.

This approach is easy to see in the datastores. The basic idea is simple: A test system, for example, can be copied from the central image repository to a hypervisor at any time, whereas a database server must come back up with its last run-time state intact. Because an OpenNebula installation can define multiple datastores, each with its own configuration, it can adapt to these different life cycles: A persistent image can reside on an NFS volume, while a volatile image is copied to the hypervisor at start time.
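The following sketch shows what such definitions can look like; the attribute values follow the OpenNebula 4.x documentation, while the file and image names are invented:

  # nfs_images.ds: images kept persistently on a shared NFS volume
  NAME   = nfs_images
  DS_MAD = fs
  TM_MAD = shared

  # local_copy.ds: images copied to the hypervisor over SSH at start time
  NAME   = local_copy
  DS_MAD = fs
  TM_MAD = ssh

A datastore defined this way is registered with onedatastore create nfs_images.ds; marking an image as persistent, so that its state survives a VM shutdown, is then a matter of oneimage persistent db-server-disk.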

The configuration and monitoring stacks are completely separate in OpenNebula. A clear-cut workflow provides computing resources and then monitors their availability. Failure of the OpenNebula core has no effect on the run-time status of the instances, because commands are issued only when necessary.

Monitoring itself is handled by hypervisor-specific commands run locally on the hosts: The core regularly polls all active hypervisors and checks whether the configured systems are still running. If they are not, they are restarted.

By monitoring hypervisor resources like memory and CPU, OpenNebula can rapidly redistribute and restart a wide range of systems in case of failure. Typically, the affected systems are redistributed so quickly after a hypervisor failure that Nagios or Icinga will not even alert you within the standard check interval. Of course, you do still need to notice the hypervisor failure itself.
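Automatic restarts of this kind are typically wired up through the hook system mentioned in the next paragraph. The following excerpt is a sketch along the lines of the example shipped in oned.conf with OpenNebula 4.x; the host_error.rb script is bundled with OpenNebula, but its arguments can differ between releases, so check the comments in your own oned.conf:

  # Resubmit the VMs of a host that enters the ERROR state
  HOST_HOOK = [
      name      = "error",
      on        = "ERROR",
      command   = "ft/host_error.rb",
      arguments = "$ID -r",
      remote    = "no" ]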

Self-management and monitoring of resources are an important part of OpenNebula and – compared with other products – already very detailed and versatile. A hook system additionally lets you run custom scripts at all kinds of points. As of version 4.2, the OneFlow auto-scaling implementation lets you define and monitor dependencies across system boundaries. I will talk more about this later.
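As a taste of what such a definition looks like, here is a minimal OneFlow service template sketch. The role names, template IDs, and thresholds are placeholders: The worker role only starts once the frontend role is up, and OneFlow adds two more workers whenever the average CPU load of the role stays above 80 percent for three consecutive 10-second periods:

  {
    "name": "web-service",
    "deployment": "straight",
    "roles": [
      { "name": "frontend", "vm_template": 0, "cardinality": 1 },
      { "name": "worker",   "vm_template": 1, "cardinality": 2,
        "parents": [ "frontend" ],
        "min_vms": 2, "max_vms": 10,
        "elasticity_policies": [
          { "type": "CHANGE", "adjust": 2,
            "expression": "CPU > 80",
            "period_number": 3, "period": 10 } ] } ]
  }

Registered with oneflow-template create and then instantiated, a service like this leaves the scaling decisions to OneFlow.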

Installation

The installation of OpenNebula is highly dependent on the details of the components, such as virtualization, storage, and network providers. For all current providers, however, the design guide [3] provides detailed descriptions and instructions that help avoid classic misconfigurations. The basis is an installation composed of four components:

  • Core and interfaces
  • Hosts
  • Image repository and storage
  • Networking

The management core (oned), in combination with the corresponding APIs and the web interface (Sunstone), forms the actual control unit of the cloud installation. The virtualization hosts need no specific software apart from Ruby, but SSH access to all participating hosts must be possible so that OpenNebula can retrieve status data later and, if necessary, transfer images.
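Registering a host afterward is then a one-liner. The following sketch uses the 4.x CLI flags for the information, virtualization, and network drivers; the hostname is an example, and the flag names can differ in other releases:

  onehost create kvm-node01 --im kvm --vm kvm --net dummy
  onehost list    # the new host should appear and switch to the "on" state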

Different approaches use shared or non-shared filesystems for setting up image repositories. Which storage infrastructure is appropriate is probably the most important decision to make at this point, because switching later is a huge hassle.

A scenario without a shared filesystem is conceivable, but this does mean sacrificing live migration capability. If a host were to fail, you would need to deploy the image again, and the volatile data changes would be lost.

Installation of the components can be handled using the appropriate distribution packages [4] or from the sources [5] for all major platforms, and is described in great detail on the project page. After installing the necessary packages and creating an OpenNebula user, oneadmin, you then need to generate a matching SSH key. Next, you distribute the key to the host systems – all done! If everything went according to plan, the one start command should start OpenNebula, and you should be able to access the daemon without any problems using the command line.
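In condensed form, and run as the oneadmin user on the front end, these final steps look something like this (the hostname is again an example):

  ssh-keygen -t rsa                  # generate the key pair for oneadmin
  ssh-copy-id oneadmin@kvm-node01    # repeat for every virtualization host
  one start                          # starts oned and the scheduler
  onehost list                       # the CLI should now reach the daemon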
