Lead Image © cepixx, 123RF.com

Safeguard and scale containers

Herding Containers

Article from ADMIN 36/2016
Security, deployment, and updates for thousands of nodes prove challenging in practice, but with CoreOS and Kubernetes, you can orchestrate container-based web applications in large landscapes.

Since the release of Docker [1] three years ago, containers have not only been a perennial favorite in the Linux universe; the native ports for Windows and OS X are also attracting great interest. Whereas developers initially just wanted to test their applications in containers as microservices [2], market players now have their first production experience with containers in large setups, beyond Google and the other major portals.

In this article, I look at how containers behave in large herds, what advantages arise from this, and what you need to watch out for.

Herd Animals

Admins clearly need to orchestrate the operation of Docker containers in bulk, and Kubernetes [3] (Figure 1) is a two-year-old system that does just that. As part of Google Infrastructure for Everyone Else (GIFEE), Kubernetes is written in Go and available under the Apache 2.0 license; the stable version when this issue was written was 1.3.

Figure 1: Kubernetes comes with its own web interface for managing pods, nodes, and containers.

The source code is available on GitHub [4]; git clone delivers the current master branch. It is advisable to use git checkout v1.3.0 to retrieve the latest stable release (change v1.3.0 to the latest stable version). If you have the experience or enjoy a challenge, you can try a beta or alpha version.
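
For example (substitute a newer tag if one is available):

git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
git checkout v1.3.0    # switch from master to the stable tag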

Typing make quick-release builds a release quickly, assuming that both Docker and Go are running on the host. I was able to install Kubernetes within a few minutes with Go 1.6 and Docker 1.11 in the lab. However, since version 1.2.4, you have to work around a minor niggle by deleting the $CDPATH environment variable with unset CDPATH to avoid seeing error messages.
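
On such a host, the build boils down to two commands:

unset CDPATH           # works around the error messages seen since v1.2.4
make quick-release     # builds Kubernetes using Docker build containers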

What is more serious from an open source perspective is that parts of the build depend on external containers. Although you can assume that all downloaded containers come from secure sources – if you count Google's registry as such – the sheer number of containers leaves you with mixed feelings, especially in high-security environments.

A build without preinstalled containers shows that it is possible to install all the components without a network connection, but the make process fails when packaging the components for kube-apiserver [5] and kubelet [6]. For a secure environment, you might prefer a release that uses only an auditable, Dockerfile-based repository. (See also the "Runtimes and Images" box.)

Runtimes and Images

Docker is just one of several container runtimes; some differ considerably, others in important details, such as the process layout below the Docker daemon.

With regard to separation of concerns, Systemd [8] acts as a centralized resource management instance, a role that the rkt project [9] has also discussed. Which system features container runtimes should be allowed to take over is a matter of controversy. If you ask Lennart Pöttering, container runtimes duplicate generic Systemd tasks: Since version 230 [10], Systemd itself moves processes into namespaces, thanks to Nspawn, and limits their resources. Systemd could thus replace the runtimes; what is missing are push and pull functions for picking up and dropping off images in registries.
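
To illustrate, a minimal sketch of what this looks like with Systemd's own tooling, assuming a Fedora directory tree has already been placed under /var/lib/machines (path and limits are only examples):

# boot the tree as a container and cap its resources via unit properties
systemd-nspawn -D /var/lib/machines/fedora -b \
  --property=CPUQuota=50% --property=MemoryLimit=512M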

Cluster To Go

After the install, you can set up a test environment in the blink of an eye: (1) select a Kubernetes provider and (2) fire up the cluster:

export KUBERNETES_PROVIDER=libvirt-coreos
cluster/kube-up.sh

After a few minutes, you should have a running cluster consisting of a master and three worker nodes. Alternatively, the vagrant provider supports an installation under VirtualBox on Macs [7].
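
A quick sanity check with kubectl shows whether the nodes have registered (output details depend on the provider):

kubectl cluster-info   # API server and add-on endpoints
kubectl get nodes      # the worker nodes should report Ready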

All Aboard

Kubernetes aims to provide, out of the box, all the components required to create your own PaaS infrastructure. It automatically sets up containers, scales them, heals itself, and manages automatic rollouts and even rollbacks.
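
What that means in day-to-day use is best shown on the command line; the following sketch assumes a working kubectl and uses nginx purely as an example:

kubectl run nginx --image=nginx:1.10 --replicas=3     # create a deployment with three replicas
kubectl scale deployment nginx --replicas=5           # scale it out
kubectl set image deployment/nginx nginx=nginx:1.11   # trigger a rolling update
kubectl rollout undo deployment/nginx                 # roll back if the update misbehaves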

To orchestrate storage and the network, Kubernetes relies on storage, network, and firewall providers, which you first need to set up for your home-built cloud. If you want to build deployment pipelines, Kubernetes helps with a management interface for configurations and secrets, which supports DevOps in compliance-driven environments: Passwords no longer need to sit next to the configuration in a repository.
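
For example, settings and credentials can be stored in Kubernetes itself rather than in the repository; the object names and values below are purely illustrative:

kubectl create configmap app-config --from-file=app.properties        # non-secret settings
kubectl create secret generic db-pass --from-literal=password=S3cr3t  # credentials stay out of Git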

Kubernetes promises – and it is by this that it must be judged – that you will no longer need to worry about the infrastructure, only about the applications. There is even talk of ZeroOps, an advancement on DevOps. Of course, that will still take a long time. Ultimately, it is just like any other technology: For things to look easy and simple, someone needs to invest time and money.
