OpenStack: Shooting star in the cloud

Assembly Kit

Network Management: Neutron

If you plumb the depths of OpenStack, sooner or later you will end up on Neutron's doorstep (Figure 2). Neutron does not enjoy the best reputation and is regarded as overly complicated; however, this has much less to do with the component itself than with its extensive tasks. It makes sure that the network works as it should in an OpenStack environment.

Figure 2: Neutron lets you choose the virtual network to which the VM connects.

The issue of network virtualization is often underestimated. Typical network topologies are fairly static and usually have a star-shaped structure. Certain customers are assigned specific ports, and customers are separated from each other by VLANs. In cloud environments, this system no longer works. On one hand, it is not easy to predict which compute node a customer's VMs will be launched on; on the other hand, this kind of solution does not scale well.

The answer to this problem is Software Defined Networking (SDN), which basically has one simple goal: Switches are just metal, VLANs and similar tools are no longer used, and everything that has to do with the network is controlled by software within the environment.

The best-known SDN solution is OpenFlow [4] (see the article on Floodlight elsewhere in this issue) with its corresponding front end, Open vSwitch [5]. Neutron is the matching part in OpenStack, that is, the part that directly influences the configuration of Open vSwitch (or another SDN stack) from within OpenStack (Figure 3).

Figure 3: In the background, Neutron relies on both OpenFlow and Open vSwitch.

In practical terms, you can use Neutron to virtualize a complete network without touching the configuration of the individual switches; the various plugins also make it possible to modify switch configurations directly from within OpenStack.

Just like OpenStack, Neutron [6] is also modular: The API is extended by a plugin for a specific SDN technology (e.g., the previously mentioned Open vSwitch). Each plugin has a corresponding agent on the computing node side to translate the plugin's SDN commands.
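To give a rough impression of this division of labor, the plugin is selected in Neutron's central configuration file, and the agents run as separate services on the network and compute nodes. The following excerpt is only a sketch for the Open vSwitch case; file paths, option names, and the plugin class have changed between OpenStack releases and may differ on your distribution.

    # /etc/neutron/neutron.conf (excerpt) – sketch for the Open vSwitch plugin;
    # the exact class path varies between releases
    core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

    # Matching services on the network/compute nodes:
    #   neutron-openvswitch-agent – applies the plugin's SDN commands to Open vSwitch
    #   neutron-dhcp-agent        – hands out IP addresses to tenant VMs
    #   neutron-l3-agent          – provides routing and Internet access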

The generic agents for DHCP and L3 each perform specific tasks. The DHCP agent ensures that a tenant's VMs are assigned IP addresses via DHCP when they start; the L3 agent establishes a connection to the Internet for the active VMs. Taken to an extreme, it is possible to use Neutron to allow every customer to build their own network topology within their cloud.

Customer networks can also use overlapping IP ranges; there are virtually no limits to what you could do. The disadvantage of this enormous feature richness, however, is that it takes some knowledge of topics such as Software Defined Networking to understand what Neutron actually does – and to troubleshoot if something fails to work as it should.
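To show what this looks like in everyday use, the following commands build a simple per-tenant topology with the neutron command-line client: a private network with a subnet, plus a router that connects it to the outside world. The names are made up for this example, and ext-net is assumed to be an external network that the cloud operator has already created.

    $ neutron net-create demo-net
    $ neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet
    $ neutron router-create demo-router
    $ neutron router-interface-add demo-router demo-subnet
    $ neutron router-gateway-set demo-router ext-net

Because several tenants can run these commands with the same 10.0.0.0/24 range, the overlapping IP ranges mentioned above are not a problem in practice.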

Incidentally, if you have worked with OpenStack in the past, you might be familiar with Neutron under its old name, Quantum (Figure 4). (A dispute over naming rights in the United States led to the new name Neutron.)

Figure 4: In addition to the dashboard, the individual client tools at the command line give you the option of controlling the individual components.

VM Management: Nova

The components I have looked at thus far lay the important groundwork for running virtual machines in the cloud. Nova (Figure 5) [7] now sits on top of them as the executive of the OpenStack cloud. Nova is responsible for starting and stopping virtual machines, as well as for managing the available hypervisor nodes.

Figure 5: Nova consists of many components, including the scheduler or nova-compute.

When a user tells the OpenStack cloud to start a virtual machine, Nova does the majority of the work. It checks with Keystone to see whether the user is allowed to start a VM, tells Glance to make a copy of the image on the hypervisor, and forces Neutron to hand out an IP for the new VM. Once all of that has happened, Nova then starts the VM on the hypervisor nodes and also helps shut down or delete the virtual machine – or move it to a different host.
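From the user's point of view, this whole chain is hidden behind a single command. The following call is a sketch using the nova command-line client; the flavor and image names are examples, and NET_UUID stands for the ID of a Neutron network like the one created above.

    $ nova boot --flavor m1.small --image cirros-0.3.1 \
        --nic net-id=NET_UUID demo-vm
    $ nova list    # shows the new VM and the IP address Neutron assigned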

Nova comprises several parts. In addition to an API by the name of Nova API, Nova provides a compute component, which does the work on the hypervisor nodes. Other components meet specific requirements: nova-scheduler, for example, references the configuration and information about the existing hypervisor nodes to discover the hypervisor on which to start the new VM.
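nova-scheduler's placement decisions are largely driven by configuration. The following nova.conf excerpt is only a sketch: it shows the filter scheduler with a typical chain of filters, but the option names and the set of available filters have changed between OpenStack releases.

    # /etc/nova/nova.conf (excerpt) – scheduling sketch, release-dependent
    scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter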

OpenStack has no intention of reinventing the wheel; Nova makes use of existing technologies when they are available. It is deployed, along with libvirt and KVM, on Linux servers, where it relies on the functions of libvirt and thus on a proven technology, instead of implementing its own methods for starting and stopping VMs. The same applies to other hypervisor implementations, of which Nova now supports several. In addition to KVM, the target platforms include Xen, Microsoft Hyper-V, and VMware.
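On a plain Linux/KVM hypervisor, this reuse of libvirt boils down to two settings in nova.conf. Again, this is only a sketch; newer releases move the libvirt options into their own [libvirt] section (virt_type instead of libvirt_type), and other hypervisors require their respective drivers.

    # /etc/nova/nova.conf (excerpt) – KVM via libvirt, sketch only
    compute_driver = libvirt.LibvirtDriver
    libvirt_type = kvm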

Block Storage for VMs: Cinder

Finally, there is Cinder [8]; although its function is not as obvious at first glance, you will understand its purpose if you consider the problem. OpenStack generally assumes that virtual machines are not designed to run continuously. This radical idea results from the approach, mainly found in the US, that a cloud environment should be able to quickly launch many VMs from the same image – this principle basically dismisses the idea that data created on a virtual machine will be permanently stored. In line with this, VMs initially exist only as local copies on the filesystems of their respective hypervisor nodes. If the virtual machine crashes, or the customer deletes it, the data is gone. The principle is known as ephemeral storage, but it is no secret that reality is more complicated than this.

OpenStack definitely allows users to collect data on VMs and store the data persistently beyond a restart of the virtual machine. This is where Cinder enters the game (Figure 6): Cinder equips virtual machines with persistent block storage. The Cinder component supports a variety of different storage back ends, including LVM and Ceph, but also hardware SANs, such as IBM's Storwize and HP's 3PAR storage.

Figure 6: The individual VMs on the computing nodes simply reside on a local filesystem – Cinder is necessary if you want to make them persistent.

Depending on the implementation, the technical details in the background can vary. Users can create storage and assign it to their virtual machines, which then access the storage much as they would an ordinary hard drive. On customer request, VMs can also boot from block devices, so that the entire VM is stored persistently and can be started, for example, on different nodes.
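In day-to-day use, this again comes down to a few client commands. The following sketch assumes the classic per-project clients and the demo-vm instance from the Nova example; volume names, sizes, and device paths are placeholders.

    $ cinder create --display-name data-vol 10          # 10GB volume on the configured back end
    $ nova volume-attach demo-vm VOLUME_UUID /dev/vdb   # appears in the VM as a normal disk
    # Boot from a volume instead of an ephemeral disk:
    $ nova boot --flavor m1.small --block-device-mapping vda=VOLUME_UUID:::0 persistent-vm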
