Lead Image © Ian Holland, 123RF.com

Container microdistributions k3OS and Flatcar

Micro Cures

Article from ADMIN 60/2020
In contrast to legacy systems, container-only distributions make do with an absolute minimum of software, which radically simplifies maintenance.

As anyone who has spent a couple of years as a Linux admin will tell you: IT has changed more in the past decade than ever before. The cloud has been shaking up the industry since 2010, and containers have increasingly been the focus of interest since 2015, at the latest. Container automation is being taken one step further in the form of micro operating systems (OSs), which have nothing to do with microkernels; rather, the term refers to the minimal software set that comes with the product. Today, the first premise of container automation is to reduce the software needed on systems to an absolute minimum. The intent is to avoid maintenance overhead, because where nothing is installed, nothing needs updating and maintaining.

In this article, I explain the basic principle of the approach and then introduce k3OS [1] and Flatcar [2], two microdistributions that are seen as alternatives to systems by the established vendors.


The major vendors have consistently trained administrators over decades to cope with increasingly complex systems. A distribution is nothing more than a compiled Linux kernel, which – together with a huge bunch of administrative tools – finds its way onto a server hard drive.

Current systems in particular come with considerable complexity on board. One typical example is the package manager: To avoid admins having to compile every program themselves, the distributors deliver many programs in packaged form. To be able to update them, package managers use a lot of tricks to avoid trouble (Figure 1). Debian, for example, makes many compromises in Dpkg to prevent packages from overwriting changes in /etc.
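You can observe this conffile bookkeeping directly on a Debian-based system: dpkg records a checksum for every configuration file a package ships, which is how it detects local changes in /etc before an upgrade. A minimal sketch (the package name is just an example; the script does nothing on non-Debian systems):

```shell
# List the conffiles dpkg tracks for a package, with the recorded
# checksums dpkg compares against at upgrade time to spot local edits.
# Harmless no-op where dpkg-query is not available.
if command -v dpkg-query >/dev/null 2>&1; then
  # ${Conffiles} expands to "path md5sum" pairs for each tracked file
  dpkg-query --showformat='${Conffiles}\n' --show bash 2>/dev/null
  STATUS=ok
else
  STATUS=skipped
fi
```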

Figure 1: RPM and Dpkg dominate the classic distributions, sharing very little common ground with microdistributions. More than 1,000 packages per system are the rule rather than the exception.

However, every admin knows from experience that, whatever magic the package manager has up its sleeve, it will not help you when worst comes to worst. Updates from one major version to another remain a challenge. If something goes wrong, the system can be offline – for minutes, hours, or sometimes even days. Having high availability (HA) as part of the standard scope of common distributions is of little consolation, because the complexity introduced by HA software is itself massive.

The problems do not stop with the package manager. If you compare, for example, the way Linux systems were configured a few years ago with today's NetworkManager and similar tools, you can't help noticing that the vendors have paid for the ever-increasing range of functions with complexity that the admin has to handle at the end of the day.

New Possibilities

Part of the complexity is difficult to eliminate. If systems have several bonding interfaces (e.g., on which VLANs are then located that are controlled over a network bridge), configuring the network on the server is inevitably going to be complex. However, in other areas you can start taking things off the list – and microdistributions show how this can be done.

When Docker finally made container virtualization under Linux socially acceptable, the distributors switched to gold rush mode. Docker offered a realistic perspective at the time for saying goodbye to parts of package management. Instead of loading software in the form of RPM packages onto the system, admins would simply load and start the appropriate container. Because each container is a self-contained system, all dependencies of the respective application are included. Actual user data is transferred to the container at runtime by mount. If an update is pending, the user loads the new container with the new program version, detaches the volume from the old container, attaches it to the new one, and launches – all done.
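The update workflow described above can be sketched with a few Docker commands. The image name and volume are placeholders, and the commands only execute where a Docker daemon is actually reachable; this is an illustration of the pattern, not a production procedure:

```shell
# Sketch of a volume-based container update. "myapp" and "appdata"
# are hypothetical names. Skipped entirely when no Docker daemon
# is available, so the script is safe to run anywhere.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker volume create appdata                       # persistent user data
  docker run -d --name app -v appdata:/var/lib/app myapp:1.0
  docker stop app && docker rm app                   # old version goes away ...
  docker run -d --name app -v appdata:/var/lib/app myapp:2.0
  STATUS=ran                                         # ... the data volume survives
else
  STATUS=skipped
fi
```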

In this workflow, however, the rules of the game change dramatically: A system running on Linux mutates from a complex environment to a simple tool for operating containers. You don't need much in the way of software on the system side: the kernel, a few basic components like systemd or a cluster consensus tool like etcd or Consul, and the runtime environment for the containers themselves (nowadays either Docker or Podman) – that's all. That the distributors are following exactly this path can be seen in Red Hat and Canonical both now offering package managers based on container principles. Snaps in Ubuntu (Figure 2) and Flatpaks in Fedora are nothing more than tools to load and manage container images like traditional packages.
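On such a minimal host, systemd itself is enough to supervise a containerized service; no package beyond the runtime is involved. A hypothetical unit file shows the idea (paths, ports, and the image are assumptions for the sake of the example):

```ini
# /etc/systemd/system/webapp.service -- illustrative example only.
# systemd supervises the container directly; the application and all
# of its dependencies live inside the image, not on the host.
[Unit]
Description=Web app run as a container
After=network-online.target
Wants=network-online.target

[Service]
ExecStartPre=-/usr/bin/podman pull docker.io/library/nginx:stable
ExecStart=/usr/bin/podman run --rm --name webapp -p 8080:80 docker.io/library/nginx:stable
ExecStop=/usr/bin/podman stop webapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```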

Figure 2: With Canonical's Snaps, the basic system can be limited to a skeleton installation.

Container Orchestration Help

Admittedly, these solutions also support operating concepts in which the admin manually runs individual applications in container form on systems. However, the distributors' focus is clearly on container orchestrators and, above all, Kubernetes.

The ideal scenario then looks like this: A basic installation of the OS only includes the runtime environment for containers and the kubelet (i.e., the control agent in Kubernetes). As soon as a node goes online, it automatically becomes part of the fleet that Kubernetes manages. All changes to the system, as well as starting and stopping services and containers, are handled through Kubernetes, so administrators do not have to do anything manually on the system.
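In k3OS, this automatic joining is expressed declaratively. A sketch following the k3OS config.yaml format illustrates it; the server URL, token, and key are placeholders, and the exact set of supported keys should be checked against the k3OS documentation:

```yaml
# k3OS config.yaml (sketch; all values are placeholders).
# On boot, k3OS starts the k3s agent with these settings and the
# node registers itself with the cluster -- no manual steps on the host.
ssh_authorized_keys:
  - ssh-ed25519 AAAA... admin@example.com
k3os:
  server_url: https://kube-master.example.com:6443
  token: myclustersecret
  labels:
    role: worker
```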

This process again reduces the infrastructure overhead per node for the admin considerably, because the container orchestrator is running anyway. Without Kubernetes or a corresponding alternative, operating a container is not only inefficient, but ultimately futile.
