Lead Image © Michal Bednarek, 123RF.com

Keeping the software in Docker containers up to date

Inner Renewal

Article from ADMIN 46/2018
Docker has radically changed the way admins roll out their software in many companies. However, regular updates are still needed. How does this work most effectively with containers?

For a few months, the hype around Docker has sort of died down, but not because fewer people are using the product – quite the opposite: The calm has arrived because Docker is now part of the IT mainstream. Production setups rely as a matter of course on the software that has breathed new life into containers under Linux and spurred other products, such as Kubernetes.

From an admin's point of view, Docker containers have much to recommend them: They can be operated with reduced permissions and thus provide a barrier between a service and the underlying operating system. They are easy to handle and easy to replace. They save work when it comes to maintaining the system on which they run: Software vendors provide finished containers that are ready to run immediately. If you want to roll out new software, you download the corresponding container and start it – all done. Long installation and configuration steps with RPM, dpkg, Ansible, and other tools are no longer necessary. All in all, containers offer noticeable added value from the sys admin's perspective.
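
For example, deploying a vendor-supplied web server can be a matter of two commands. This is a minimal sketch; the nginx image and the container name web are illustrative stand-ins for whatever software you actually roll out:

docker pull nginx
docker run -d --name web -p 80:80 nginx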

Not Everything Is Better

Where there is light, there is shadow, and Docker is no exception. When new software versions are released, administrators often have a vested interest in using the new versions. Security updates are an obvious problem: You will want to repair a critical vulnerability in Apache or Nginx, regardless of whether the service is running on bare metal or inside a Docker container.

New features are also important. When the software you use finally gains the one feature you have been waiting on for months or years, you'll want to use it as quickly as possible. The question then arises of how to implement an update when the program in question is part of a Docker container.

Docker-based setups do allow you to react quickly to the release of new programs. Anyone used to working with long-term support (LTS) distributions knows that they essentially run a software museum and have sacrificed the innovative power of their setup on the altar of supposed stability. They are used to new major releases of important programs not being available until the next update to a new distribution version, and by then, they will already be quite mature.

If you use Docker instead, you can still run the basic system on an LTS distribution, but further up in the application stack you have far more freedom and can simply replace individual components of your environment.

No matter what the reasons for updating applications running in Docker containers, as with normal systems, you need a working strategy for the update. Because running software in Docker differs fundamentally from operations without a container layer, the way updates are handled also differs.

Several options are available with containers, which I present in this article, but first, you must understand the structure of Docker containers, so you can make the right decisions when updating.

How Docker Containers Are Built

Most admins will realize that Docker containers – unlike KVM-based virtual machines – do not have a virtual hard drive and do not work like complete systems because the Docker world is based on kernel functions, such as cgroups or namespaces.

What looks like a standalone filesystem within a Docker container is actually just a folder residing in the filesystem of the Docker host. Similar to chroot, Docker uses kernel functions to ensure that a program running in a container only has access to the files within that container.
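
You can see this for yourself by asking Docker where a container's root filesystem lives on the host. This sketch assumes the common overlay2 storage driver and a running container named web:

docker inspect --format '{{ .GraphDriver.Data.MergedDir }}' web

The command prints a path below /var/lib/docker/overlay2/, an ordinary directory that root can list with ls on the host.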

However, Docker containers comprise several layers, two of which need to be distinguished: the image layer and the container layer. When you download a prebuilt Docker image, you get the image layer. It contains all the parts that belong to the Docker image and is used in read-only mode. If you start a container with this image, a read-write layer, the container layer, is added.
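
Both layers can be examined with standard Docker commands: docker history lists the read-only layers of an image, and docker diff shows what a running container has added or changed in its writable layer (the image and container names are again only examples):

docker history nginx
docker diff web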

If you use docker exec to open a shell in a running container and use touch to create a file, you will not see a "permission denied" error, precisely because the container layer allows write access. The catch, though, is that when the modified Docker container is deleted, the changes made in it are irrevocably lost. If a container later starts from the same base image, it will look exactly as it did after the first download.
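
A short experiment demonstrates the effect; all names here are hypothetical:

docker run -d --name demo nginx
docker exec demo touch /tmp/marker
docker rm -f demo
docker run -d --name demo nginx
docker exec demo ls /tmp/marker   # fails: the file is gone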

From the point of view of container evangelists, this principle makes sense: The idea of a container is that it is quickly replaceable by another, so if something needs to be changed in a container, it makes sense simply to replace the container.

Far too easily, container advocates forget that not every application on the planet follows the microservice mantra. Nginx and MariaDB are not divided into many services that arbitrarily scale horizontally and are natively cloud-capable. The approach of replacing a running container also works in this case, but requires more preparation and entails downtime.

Option 1: Fast Shot

The list of update variants for Docker containers starts with option 1: the simple approach of installing an update directly in the running container. If you use a container based on Ubuntu, CentOS, or another common Linux distribution, you typically have the usual set of tools, including rpm or dpkg, which the major distributors use to generate their container images.

That's why it's easy to install updates: Using docker exec, you start Bash in the running container and then run

apt-get update && apt-get upgrade

(in this example) to load updates into the container. However, you will soon be confronted with the problem I referred to earlier in this article: The changes are not permanent. They also bloat the container's writable layer on the host system.
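
Run from the host, the complete quick fix might look like the following sketch, which assumes a Debian or Ubuntu base image and a container named web:

docker exec web apt-get update
docker exec web apt-get -y upgrade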

However, such a process can make sense: For example, if a zero-day vulnerability appears in a central service and exploits are already out in the wild, updating the running application in the container gives you a short breather in which to schedule the correct update. Either way, one rule holds: If you import updates for distribution packages within a container, you should also update the base images on your own system in parallel.
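
Refreshing the local copy of a base image is a single pull; the image name here is only an example, so substitute whatever your containers are built on:

docker pull ubuntu:22.04
docker images --digests ubuntu

Comparing the digests before and after the pull shows whether you actually received a newer build.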

In some cases, the option of a service update in the container is not technically available: Anyone who has built a Docker container so that it defines the service (e.g., MariaDB) as an ENTRYPOINT or as a CMD in the Dockerfile has a problem. If you were to kill -9 the service process in this kind of container, the entire container would terminate immediately: From Docker's point of view, the program in question is the init process, without which the container cannot run.
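
The following Dockerfile fragment illustrates the pattern; the package and binary names are illustrative and vary between distributions, so treat this as a sketch rather than a production-ready image:

FROM ubuntu:22.04
RUN apt-get update && apt-get install -y mariadb-server
# mariadbd becomes PID 1 of the container; if it dies, the container dies
ENTRYPOINT ["mariadbd", "--user=mysql"]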

For all other scenarios except serious security bugs that immediately force an update, the urgent advice is to keep your hands off this option. It contradicts the design principles of containers and ultimately does not lead to the desired sustainable results.
