Safeguard and scale containers

Herding Containers

Resources and Network

Docker supports almost all of the resource management mechanisms available under Linux. In practice, Kubernetes does not pass this control through completely [16]; version 1.3 only exposes CPU and memory resources. If you want to use this to tame your Hadoop distributions, for example, you will be disappointed by the lack of control over network and block I/O performance.
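As a sketch, requests and limits for exactly these two resources appear in the pod spec; the names and values here are placeholders, not taken from the article:

```yaml
# Hypothetical pod spec; only CPU and memory can be managed here.
apiVersion: v1
kind: Pod
metadata:
  name: hadoop-worker                    # example name
spec:
  containers:
  - name: worker
    image: example/hadoop-worker:latest  # placeholder image
    resources:
      requests:
        cpu: "500m"       # half a core guaranteed by the scheduler
        memory: "512Mi"
      limits:
        cpu: "1"          # hard ceiling of one core
        memory: "1Gi"
      # No fields exist here for network or block I/O bandwidth.
```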

Various network models are available. In your own data center, it certainly makes sense to separate the management network from the network to which the Kubernetes containers connect. If you start a cluster with the libvirt-coreos provider on the virtual network, you can isolate it quite easily.

A plugin that supports the Container Network Interface (CNI) [17] lets you attach containers to networks and detach them again, independent of the network provider. Implementations that you can integrate into your networks are available in the form of Flannel [18], Calico [19], Canal [20], and Open vSwitch [21].

The API server lets you read the details of the services with the kubectl get service command; a load balancer then connects to these services [22]. The Kubernetes Ingress resource also provides a proxy that lets you directly configure the common paths for the web user view in Kubernetes [23].
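An Ingress resource that routes common paths to different services might look like this sketch; the host, paths, and service names are invented for illustration:

```yaml
# Hypothetical Ingress (API version of the Kubernetes 1.3 era).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-frontend
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /api              # route API calls to the backend service
        backend:
          serviceName: api-backend
          servicePort: 8080
      - path: /                 # everything else goes to the web layer
        backend:
          serviceName: web-frontend
          servicePort: 80
```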


Beyond these test examples, Kubernetes secures all communication processes using server and client certificates, including traffic between kubectl or kubelet and the API server, between the etcd instances, and with registries such as the Docker Hub.

For an application to run cleanly as a microservice, it must meet some conditions so that Kubernetes can recognize that it is still alive. Logging to stdout is required for kubectl logs to display the data correctly.
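On top of stdout logging, a liveness probe tells Kubernetes whether the container is still healthy. A minimal sketch, assuming the application answers HTTP requests on a /healthz endpoint (the name and port are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-service                     # placeholder name
spec:
  containers:
  - name: app
    image: example/my-service:1.0      # placeholder image
    # The application writes its log to stdout, so
    # `kubectl logs my-service` shows it.
    livenessProbe:
      httpGet:
        path: /healthz                 # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5                 # container is restarted if this fails
```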

If you want to design your own microservice, take another look at the architecture of Kubernetes [24]. From the load balancer, through the web layer and possibly a cache, up to the business logic, all applications are stateless.

Admins can standardize architecture thanks to the container design pattern by Brendan Burns [25], Kubernetes' lead developer. For example, if you operate a legacy application, you can let logging, monitoring, and even messaging ride along as a sidecar in a second container of the pod, which allows communication with the outside world.
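A sketch of the sidecar pattern: the legacy application writes its log to a shared volume, and a second container in the same pod ships it to the outside world. All image and volume names here are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: app
    image: example/legacy-app:1.0      # the unmodified legacy application
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper                  # sidecar handling communication
    image: example/log-shipper:1.0     # with the outside world
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}                       # shared between both containers
```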

Also, only one main process can run in any container of a pod. A common mistake is pushing the application into the background and running tail -f against its logfile to keep stdout busy. Both practices prevent Kubernetes from detecting that the actual process has died; as a result, it cannot clear away and replace the container, and thus the pod.

All cattle applications must be genuinely stateless, replicate their databases in their own way, and resynchronize automatically after interruptions. You need to test failure scenarios, such as the loss of a node or of the network, including the recovery procedures, just as thoroughly as you test the applications themselves. This is a challenge especially for classic SQL databases, to which NoSQL databases such as MongoDB are a response.

Language Skills

Image sizes also affect the deployment process. There are very lean programming languages such as Go, and there are languages that require a 150MB image for a minimal version. Cleaning up the container caches is an easy way to reduce sizes significantly. Building containers FROM scratch is also recommended for size and security reasons. Container operators will find a statically linked Go web server with an image size of just 6.7MB.
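A FROM scratch image for such a statically linked Go binary could look like this sketch (the binary name and port are assumptions):

```dockerfile
# Hypothetical minimal image: only the static binary, nothing else.
FROM scratch
# Binary built statically beforehand, e.g. with CGO_ENABLED=0 go build
COPY webserver /webserver
EXPOSE 8080
ENTRYPOINT ["/webserver"]
```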

If you need to compile Ruby Gems or Python Eggs, it is advisable to dump the compiler and all the unused files into a black hole when done. For Ruby, Traveling Ruby [26] presents an attractive alternative: it links all Gems and the Ruby executables statically into one file, letting you set up minimal containers with Ruby.
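Because every Dockerfile instruction creates a new image layer, the compiler has to be removed in the same RUN step that used it, or it stays in the image anyway. A hedged sketch for a Ruby image (base image and package choices are assumptions):

```dockerfile
FROM ruby:2.3-slim
COPY Gemfile Gemfile.lock ./
# Install build tools, compile the gems, then purge the tools again,
# all within a single layer so nothing lingers in the final image.
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential \
 && bundle install \
 && apt-get purge -y build-essential \
 && apt-get autoremove -y \
 && rm -rf /var/lib/apt/lists/*
```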

For Java, rather than J2EE, use Spring Boot [27], which lets you configure applications at the command line and start a web server from within the application. With its own package system and sprawling dependencies, Java is unfortunately very difficult to slim down.

Otherwise, it makes sense to create debug-enabled as well as trimmed-down, hardened containers for each programming language, and likewise for the development, test, and production environments. Additionally, you will want to limit your own applications to a single distribution, so that as many containers as possible can reuse the corresponding base image.
