Photo by nikko osaka on Unsplash

Keeping container updates under control

Hazardous Goods

Article from ADMIN 66/2021
Some application developers try to handle containerized applications as if they were conventional monoliths, but managing updates and security patches in containers needs a totally different approach.

A common fiction is that containers with arbitrary applications appear practically out of nowhere, and magical tools roll them out in a fully automated process and with the correct configuration. The developer's apps then integrate seamlessly into Platform as a Service (PaaS) environments in the style of microservice applications on the basis of OpenShift, Rancher, and the like and simply work because mesh solutions such as Istio ensure communication. Some would have you believe that you no longer need to worry about updates, because the fully automated continuous integration/continuous deployment (CI/CD) pipeline ensures that new versions of services mysteriously find their way into the production setup of an enterprise.

Reality, needless to say, looks far less rosy. Despite all the promises made by the manufacturers, it takes some effort to get an application running on the PaaS model in any one of the countless Kubernetes distributions. And once an application is up and running, there is no guarantee it will stay that way: The CI/CD build pipeline, out of which updated images drop (e.g., when someone upstream releases a new version of a library you use), requires serious ongoing work.

This reality of containers translates to stress every time an upstream project introduces updates for a component you use. The situation becomes especially hairy if it's not a functional update, but a security fix.

In this article, I look in detail at how you can keep workloads in containers secure and up to date, with processes that are tightly interwoven with the platform in container-based environments. More particularly: How do you make best use of the capabilities offered by Rancher and its ilk to arrive at a secure platform with reliable tools that the common PaaS stacks bring to the table?

I can already reveal this much: The matter is certainly not as simple as the manufacturers would have you believe.

Secure from the Factory

The providers of container orchestrators always suggest in their offerings that containers are implicitly more secure than conventional environments. After all, Podman and its ilk no longer need the rights of the root system administrator; moreover, each container is completely isolated and only communicates with the other containers through the network.

However, this consideration criminally ignores a few very pertinent facts. An attacker who has successfully broken into a container remains locked in there and therefore cannot escalate to root privileges on the host – which is good, as long as the goal of the attack was to misuse the entire machine. However, if the goal was to spy on the application, protection against attacks on the host system is of no help at all.

As usual in the security context, it turns out that the statement "X is more secure than Y" primarily depends on the threat scenario an installation is facing, which also means that you should never assume that containers offer enhanced security. Like their conventional predecessors, containers require regular security fixes and audit procedures. The way you handle security updates therefore also plays a significant role in the container context.

Other Updates

Keep in mind that in a container-based environment you are dealing with a completely different deployment scenario than in conventional environments. Where containers do not play a role, package management is usually the linchpin when supplying software to systems. The large distributors see a part of their business as bundling countless components of free (and partly proprietary) software into a complete package so you can use them without further ado.

The container use case, however, turns this concept on its head. That careful coordination of components hardly matters at all from your point of view: Almost every application simply brings along the complete userspace it needs with its container. Hard drives and even fast flash drives no longer cost an arm and a leg, so the overhead of a few hundred megabytes that occurs when you install, for example, a prebuilt MariaDB container, complete with dependencies, is of little consequence. Most admins are willing to make this additional investment if they can avoid for good the notorious dependency hell of packaged systems.

However, the fact that applications in container environments always ship their own complete userspace changes the way you need to approach updates. Even if you ignore the security implications for a moment, it is no longer sufficient to update a particular package to the latest version on all hosts in the installation. The same component can also exist inside the containers, in versions that differ from those on the hosts, and the more widespread the component, the more likely that is. A bug in the library that handles TLS connections, for example, would be a major disaster and would probably force updates across the entire installation: on the physical systems and in the containers alike.
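To get an overview of which containers bundle their own copy of an affected component, you can at least ask every running container what version it ships. The following is a minimal sketch that assumes the Docker CLI and that the binary in question (e.g., `openssl`) prints its version when called with the `version` subcommand; with Podman, substitute `podman` for `docker`:

```shell
# report_lib_versions: print the version string of a given binary as bundled
# inside every running container, to spot containers that still ship a
# vulnerable build after the hosts have been patched.
report_lib_versions() {
    bin=$1
    for id in $(docker ps -q); do
        name=$(docker inspect --format '{{.Name}}' "$id")
        # Not every image ships the binary; report that case explicitly.
        ver=$(docker exec "$id" "$bin" version 2>/dev/null) || ver="$bin not found"
        printf '%s\t%s\n' "$name" "$ver"
    done
}
```

A call such as `report_lib_versions openssl` then lists each running container next to the OpenSSL build it carries. This only covers binaries that report their version this way; for shared libraries you would instead query the container's package manager, which varies per base image.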

No Simple Updates

On the desktop, the familiar package manager works reasonably well for updates, but it is not quite that trivial in containers. Docker and Podman at least let you execute arbitrary commands in a container from the shell, which exists in almost every container. Resourceful admins will sometimes come up with the idea of simply installing an update within the container. Most of the official container images are based on some distribution and have their own package management on board.

However, the idea of simply updating the containers in place falls short for several reasons. The first and most obvious reason is that, with containers, the entry point is usually the program that the container is supposed to run. A MariaDB container, for example, uses either MariaDB or a wrapper that somehow starts the database as its entry point. If you use an update in the container to replace a library that the database needs, you then need to restart the database so that it can use the new library. However, this kills the main process running in the container and therefore the container itself.
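You can see this coupling of service and container directly: in a typical application container, the service is PID 1. A small sketch, assuming the Docker CLI and that the image ships a `ps` binary (slim images often do not; the container name is a parameter):

```shell
# container_pid1: show which command runs as PID 1 inside a container.
# In a stock MariaDB container this is the database process itself, which
# is why "restart the service" amounts to killing the container.
container_pid1() {
    docker exec "$1" ps -o comm= -p 1
}
```

Calling `container_pid1 db` against a MariaDB container would typically print the database daemon; kill that process, and the container terminates with it.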

The second problem goes hand in hand with this challenge: Simply restarting the terminated container does not bring the update back. At runtime, most containers create an OverlayFS (union mount filesystem) on top of the finished, read-only image, and that writable layer disappears along with the container. In other words, if you restart the MariaDB container, it starts again from the original image, because the update only ever touched the overlay layer; the update you installed is simply gone. Whatever the situation, this is not the path to eternal happiness.
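You can make this writable layer visible with `docker diff`, which reports exactly what a running container has changed relative to its image; in other words, exactly the data that is thrown away when the container is recreated. A sketch, again assuming the Docker CLI, with the container name as a parameter:

```shell
# overlay_drift: list everything a running container has changed compared
# with its image. docker diff reports the contents of the writable overlay
# layer, i.e., the layer that is discarded when the container is recreated,
# so an in-place package update shows up here and nowhere else.
overlay_drift() {
    docker diff "$1"
}
```

After an in-place package upgrade, `overlay_drift mariadb` would list the replaced library files (`A` for added, `C` for changed paths); recreate the container from its image, and the list is empty again, along with your fix.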
