Lead Image © Dmitriy Melnikov, Fotolia.com

Combining containers and OpenStack

Interaction

Article from ADMIN 41/2017
The OpenStack cosmos cannot ignore the trend toward containers. If you want to combine both technologies, projects like Magnum, Kolla, and Zun come into play. Which one?

Most IT users rely on containers à la Docker [1], rkt [2], and LXD [3] as platforms for processing data. Sooner or later, cloud solutions such as OpenStack [4] also have to contend with containers, and currently several possibilities exist. The most obvious solution is to anchor the container technology within OpenStack with the use of services such as Magnum [5] and Zun [6]. The less obvious alternative turns this around: The containers run OpenStack itself and therefore sit underneath, rather than inside, the cloud. The Kolla project [7] takes this approach.

Incidentally: The idea of using containers as a basic infrastructure can also be found in Google's Infrastructure for Everyone Else (GIFEE) project [8]. This community has been the impetus for recent developments in the field of OpenStack in containers. Depending on your preference, you can operate containers above OpenStack, below it, or even in combinations of the two. In this article, I briefly introduce container-related OpenStack projects and explain their goals and interactions.

Zun

The Zun project has only been around for about a year. Its goal is to provide an interface for managing containers as native OpenStack components (e.g., Cinder, Neutron, or Nova). Thus, it is the successor to the now discontinued Nova Docker [9].

Nova Docker was a simple way to integrate containers in OpenStack. Managing containers followed the same principles as managing virtual machines (VMs), but this integration shortcut meant that the benefits of container technology were largely lost. At the end of the day, Docker and the like are not VMs.

Zun now fills the gap, supporting container management while still part of the OpenStack universe. As a result, you get seamless integration with other components such as Keystone and Glance and abstraction from the underlying container technology.

You do not have to deal with the complexity of Docker versus rkt versus LXD. This abstraction is undoubtedly ambitious. Zun thus starts with the basics – the simple acts of creating, updating, and deleting containers. In this context, the documentation occasionally references CRUD (Create, Read, Update, Delete).

Figure 1 shows the Zun architecture and its integration with OpenStack. Zun comprises two components. The Zun API handles communication and interaction with the user. In the background, Zun Compute interacts with the other OpenStack components via drivers and manages the resources for containers – for example, it talks to Glance to obtain the necessary images and to Neutron for the network. Another project plays an important role: Kuryr [10] forms the bridge between the two network worlds, with OpenStack on one side and containers on the other.

Figure 1: Zun architecture and integration with OpenStack.

Zun's target group is OpenStack users who want to use containers in addition to bare metal and VMs. The requirements are quite low: You do not need any special management software for the containers themselves or for the underlying hosts. The aforementioned CRUD approach is sufficient (Listing 1).

Listing 1

Creating and Deleting a Container

$ zun run --name pingtest alpine ping -c 4 8.8.8.8
$ zun list
 +--------------------------------------+----------+--------+---------+------------+------------+-------+
 | uuid                                 | name     | image  | status  | task_state | address    | ports |
 +--------------------------------------+----------+--------+---------+------------+------------+-------+
 | 36adtb1a-6371-521a-0fa4-a8c204a9e7df | pingtest | alpine | Stopped | None       | 172.17.5.8 | []    |
 +--------------------------------------+----------+--------+---------+------------+------------+-------+
$ zun logs pingtest
 PING 8.8.8.8 (8.8.8.8): 56 data bytes
 64 bytes from 8.8.8.8: seq=0 ttl=40 time=31.113 ms
 64 bytes from 8.8.8.8: seq=1 ttl=40 time=31.584 ms
[...]
$
$ zun delete pingtest

Magnum

According to the Git repository, the roots of Magnum go back to 2014, with the first release in 2015. Magnum's original mission was split between providing Container as a Service (CaaS) and Container Orchestration as a Service (COaaS), with its focus on COaaS.

Magnum's objective now is to provide a management platform for containers with the help of OpenStack [11]. Magnum aims to make Container Orchestration Engines (COEs; e.g., Kubernetes [12], Docker Swarm [13], and Apache Mesos [14]) available as resources in OpenStack.

Compared with older versions of the Magnum architecture, many components have been dropped. The container, pod, and service constructs are no longer of any interest. Now, clusters (formerly bays) and cluster templates (formerly bay models) are the central components. Under the hood, Magnum relies on OpenStack's orchestration service, Heat, to roll out the COEs; the available Heat templates can be listed with magnum-template-manage (Listing 2).

Listing 2

Integrating Heat Templates

$ magnum-template-manage list-templates --details
+------------------------+---------+-------------+---------------+------------+
| Name                   | Enabled | Server_Type | OS            | COE        |
+------------------------+---------+-------------+---------------+------------+
| magnum_vm_atomic_k8s   | True    | vm          | fedora-atomic | kubernetes |
| magnum_vm_atomic_swarm | True    | vm          | fedora-atomic | swarm      |
| magnum_vm_coreos_k8s   | True    | vm          | coreos        | kubernetes |
| magnum_vm_ubuntu_mesos | True    | vm          | ubuntu        | mesos      |
+------------------------+---------+-------------+---------------+------------+
$

If you are starting from scratch, the first step is to create a cluster template. The important pieces of information are the orchestration software to be used and the image from which the cluster servers are built. However, the devil is in the details: If required, Magnum can also use a private registry to store the container images, in which case you have to provide additional information, such as the size of the data storage and the driver needed for access.

Other settings specify the address of the DNS server to use and how container data can be stored persistently. For a first attempt, it is worth starting from the templates provided.
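
What such a call could look like is sketched below with the classic Magnum client. The template name k8s-template, the Fedora Atomic image, the keypair testkey, and the external network public are placeholders for resources that must already exist in your cloud, and the exact flag names differ between Magnum client releases (newer setups use openstack coe cluster template create instead):

$ magnum cluster-template-create --name k8s-template \
      --coe kubernetes \
      --image fedora-atomic-latest \
      --keypair testkey \
      --external-network public \
      --dns-nameserver 8.8.8.8 \
      --flavor m1.small \
      --docker-volume-size 5 \
      --network-driver flannel

The --docker-volume-size and --network-driver values are merely examples; which storage and network drivers make sense depends on the chosen COE.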

The template then lets you create the cluster on the assigned infrastructure using OpenStack tools. Two programs work behind the scenes. The first is the Magnum API server, which organizes the external interface by accepting requests and providing the corresponding information. For reliability, or simply scaling, it is possible to run multiple API servers simultaneously.
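
A cluster creation request against this API could look roughly as follows; this is only a sketch that reuses the hypothetical k8s-template from above, and the flag names again vary between client versions:

$ magnum cluster-create --name k8s-cluster \
      --cluster-template k8s-template \
      --node-count 2
$ magnum cluster-list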

The API server forwards requests to the second Magnum component, the Conductor, which interacts with the corresponding orchestration instances (Listing 3).

Listing 3

Magnum API and Conductor

$ ps auxww | grep -i magnu
stack    17987  0.1  1.2 224984 49332 pts/35  S+   15:45   0:19 /usr/bin/python /usr/bin/magnum-api
stack    18984  0.0  1.4 228088 57308 pts/36  S+   15:45   0:06 /usr/bin/python /usr/bin/magnum-conductor
$

Kubernetes, Mesos, and Docker Swarm can be operated without OpenStack, so what value is gained from Magnum? Magnum is aimed at those who want to manage their containers within OpenStack.

OpenStack can provide genuine added value beyond pure infrastructure: Multitenancy and other security mechanisms result quite naturally from Keystone integration. By design, containers belonging to different OpenStack tenants never end up on the same host. Discussions about whether the hypervisor or the Docker host can be trusted are thus largely moot.

When it comes to Zun and Magnum, containers can be managed individually or as part of a whole with standard OpenStack tools. However, the question remains as to whether it all can be combined: Yes, but ….

Interim Conclusions

Strictly speaking, Magnum and Zun have no direct connection. The intermediaries between the two OpenStack projects are the COEs (i.e., the container management layers). Besides Docker, the COEs can also address the API provided by Zun. The possible operations are, of course, limited to the lowest common denominator of the container implementations, but containers as an OpenStack service cannot deliver more, anyway.

Mixing Zun and Magnum is not recommended. The two projects serve different target groups, and combining them is of more interest academically than practically. When it comes to the use of individual containers as native OpenStack resources, Zun is the right choice (e.g., to conduct isolated tests or for installations that need no more than a handful of Docker instances). However, if you want OpenStack to serve as the basis for more complex applications, including life cycle and infrastructure management for containers, then Magnum is the better choice.
