Lead Image © grafner, 123RF.com

Simple, small-scale Kubernetes distributions for the edge


Article from ADMIN 80/2024
We look at three scaled-down, compact Kubernetes distributions for operation on edge devices or in small branch office environments.

Production Kubernetes clusters use several physical servers, whether the Kubernetes nodes run directly on the hardware or on top of a virtualization layer – the latter is no longer strictly necessary today. Many articles published about Kubernetes rely on single-node setups for practical examples because that is what application developers primarily use. At the same time, the number of scenarios in which single-node setups also make sense in production is increasing: More and more users are equipping edge devices with a simple Kubernetes environment.

On one hand, you can use a central cluster management system such as Open Cluster Manager [1] to manage these devices. On the other hand, you only need to develop and test your applications for a single platform: Kubernetes. One practical example is a merchandise management system: The components for warehousing, invoicing, and ordering run on central Kubernetes clusters in the data center, whereas small edge servers with the point-of-sale (POS) application in a Kubernetes container are fine for the in-store POS systems.

More powerful Kubernetes platforms (e.g., Rancher (SUSE) or OpenShift (Red Hat)) are not easy to set up on a single node. Although technically feasible, it makes little sense because full-fledged platforms run several dozen containers themselves. For this reason, various manufacturers offer lightweight Kubernetes distributions that only need a few containers for edge operation.

In this article, I look at three of these distributions: K3s (SUSE), MicroShift (Red Hat), and MicroK8s (Canonical). Of course, you will find other lean distributions, such as Kubernetes in Docker (KinD), Minikube, and k3d, but I do not look at those options here because they are primarily intended for use on developer desktops.


K3s (SUSE)

K3s [2] is the smaller sibling of the Rancher Kubernetes Engine (RKE), which became part of SUSE in December 2020. The distribution has been around since 2019 and can be operated with minimal resources. According to the manufacturer, K3s even runs on machines with only one CPU core and 512MB of RAM; the minimalist K3s setup itself only uses 250MB. As one of its radical cost-cutting measures, K3s dispenses with the I/O-intensive etcd database for configuration storage and uses SQLite instead, which is fine for single-node operation on an edge device.

K3s also supports multinode operation. The distribution distinguishes between a K3s server, which runs Kubernetes management pods, and a K3s agent, which only runs application pods. This ability turns out to be very practical in edge use. If a single edge device can no longer handle the workloads on its own, you simply add a second agent node. That said, K3s also lets you operate several K3s servers for control plane redundancy, but with etcd in this case.
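Joining an agent boils down to two commands – a sketch assuming a default, script-based K3s install, with <server> and <token> as placeholders for your environment:

```shell
# On the existing K3s server: read the join token that the installer
# generated (default path; adjust if you changed the data directory).
sudo cat /var/lib/rancher/k3s/server/node-token

# On the new edge node: run the installer in agent mode, pointing it
# at the server. <server> and <token> are placeholders.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -
```

The new node then shows up as an agent in `kubectl get nodes` and only runs application pods.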

K3s thus has a flexibility that other small Kubernetes distributions lack: You can start with a single node and expand the Kubernetes environment as required during operation. K3s can convert an existing single-node setup with SQLite to etcd and expand the control plane to the usual three-node setup.
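The datastore conversion is a matter of restarting the server with the --cluster-init flag – a sketch assuming a script-based install, with placeholder names:

```shell
# Restarting an existing single-node K3s server with --cluster-init
# converts the SQLite datastore to embedded etcd.
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Additional control-plane nodes can then join to form the usual
# three-node setup; <token> and <first-server> are placeholders.
curl -sfL https://get.k3s.io | \
  K3S_TOKEN=<token> sh -s - server --server https://<first-server>:6443
```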

Further cost-saving measures can be identified in the storage, network, and proxy realms. For ingress traffic, K3s relies on the Traefik reverse proxy instead of the widespread but more resource-intensive NGINX, so you might need to adjust the configuration of your applications' existing deployments or stateful sets: Ingress resources are often written with NGINX rather than Traefik in mind. Flannel, a simple VXLAN-based overlay network, provides the virtual pod networks; other container network interface (CNI) drivers are available as options.
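In many cases, an existing Ingress definition only needs its class switched – a hypothetical example for a pos-app service, with placeholder names and host:

```shell
cat <<'EOF' > ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pos-app
spec:
  ingressClassName: traefik   # "nginx" on many other clusters
  rules:
  - host: pos.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: pos-app
            port:
              number: 8080
EOF
```

Applying the manifest with `kubectl apply -f ingress.yaml` makes the bundled Traefik route the host to the service.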

By default, K3s uses the simple Hostpath driver as the storage provider. Although it only supports filesystem storage and does not offer redundancy, it works fine for edge operation. Hostpath has no special requirements in terms of the node's disk or logical volume manager (LVM) setup and is also frugal in its use of resources. As with the network, K3s is flexible when it comes to storage. If Hostpath is not up to the task, an arbitrary other storage provider (Longhorn, Rook) can be retrofitted.
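On K3s, the bundled Hostpath provisioner registers a StorageClass named local-path, so requesting storage looks like any other claim – a sketch with an illustrative claim name and size:

```shell
cat <<'EOF' > pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pos-data
spec:
  storageClassName: local-path  # K3s' built-in Hostpath provisioner
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF
```

A pod referencing the claim gets a directory on the node's local disk as its volume.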

Installing K3s could not be easier. Almost all common Linux distributions are supported, including SUSE Linux Enterprise Server (SLES), openSUSE, Red Hat Enterprise Linux (RHEL), Fedora, CentOS Stream, and Enterprise Linux clones (with active SELinux), as well as Ubuntu, Debian, and Raspberry Pi OS. In terms of CPUs, the small Kubernetes supports the usual suspects, 64-bit ARM and x86_64 processors, and the distribution is even said to run on the s390x. As its container runtime, K3s embeds containerd. Thanks to 64-bit ARM support, K3s also runs on NVIDIA Jetson developer boards (e.g., AGX Xavier and Orin) with support for the GPU installed there.

The installation script can be downloaded and executed directly. It installs the packages and repositories required for the setup and launches the local services and pods. In a single-node setup without any special configuration changes, you can have Kubernetes up and running on a computer within a few minutes (Figure 1). To manage the setup, you can use the command line or client admin tools such as k9s or OpenLens.
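The whole single-node setup boils down to one command – a sketch assuming the default script-based install:

```shell
# Download and run the K3s installer; it sets up the k3s systemd
# service and writes a kubeconfig to /etc/rancher/k3s/k3s.yaml.
curl -sfL https://get.k3s.io | sh -

# Verify: the node should report Ready after a minute or two.
sudo k3s kubectl get nodes
sudo k3s kubectl get pods -A
```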

Figure 1: Running the basic K3s setup, including the active metric collector, required just seven pods (six active containers) in the test setup and used less than 1GB of RAM.

All told, K3s is an easy-to-install, resource-saving, and extremely flexible distribution. I like the option of expanding a single-node setup to include simple agents or subsequently upgrading the setup to a cluster.

MicroShift (RHDE)

The lightweight edge version of Red Hat's OpenShift unofficially goes by the name "MicroShift" [3] because the trademark for this term (spelled "microSHIFT") is owned by a company that manufactures bicycle gears. The Red Hat Device Edge (RHDE) project has existed since 2022 and is still in the development phase. You currently need a Red Hat account to install it: MicroShift loads container images from the closed Red Hat container registry and requires a pull secret to do so. A free developer account is all you need.

MicroShift packs the essential Kubernetes services into a single systemd service on the edge host. This package also includes etcd as the config store. According to the manufacturer, 1GB of RAM and one CPU core (ARM64 or x86_64) are all you need. MicroShift itself requires 500MB of RAM (more than you need for K3s), because NGINX for the reverse proxy, etcd, Open vSwitch (OVN instead of Flannel networking), and TopoLVM modules are used (Figure 2).

Figure 2: Whereas K3s and MicroK8s use Hostpath as the storage driver with just one container, MicroShift relies on TopoLVM, which grabs eight containers.

The installation on RHEL 9 relies on an RPM package and the DNF package manager. To install, you first need to add two repositories. Alternatively, MicroShift can be bundled into an OSTree image by the image builder; this rollout path is the preferred one for edge deployments. The setup requires a special layout of the connected disk: Because MicroShift relies on TopoLVM as the storage driver, the system must have a volume group named rhel (the installer default) with free space for logical volumes. When initially setting up the RHEL system, you must therefore ensure that the root LV leaves sufficient free space in the volume group.
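On a registered RHEL 9 machine, the setup sketched above looks roughly as follows; the repository names follow Red Hat's MicroShift 4.14 documentation and change with the version, and the pull-secret path is the documented default:

```shell
# Enable the two MicroShift repositories (version-specific names).
sudo subscription-manager repos \
  --enable rhocp-4.14-for-rhel-9-$(uname -m)-rpms \
  --enable fast-datapath-for-rhel-9-$(uname -m)-rpms
sudo dnf install -y microshift

# Store the pull secret downloaded from the Red Hat console.
sudo cp ~/openshift-pull-secret /etc/crio/openshift-pull-secret

# TopoLVM needs free space in the volume group -- check, then start.
sudo vgs
sudo systemctl enable --now microshift
```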

MicroShift also uses the CRI-O container runtime, which the setup routine installs automatically. When first launched, MicroShift needs a few minutes to start the associated pods and services; the TopoLVM storage driver in particular tends to take a little more time.

In addition to the regular Kubernetes APIs, MicroShift provides a few OpenShift extensions for security and routing. OpenShift routing as an alternative to Ingress routing is particularly interesting for users who use OpenShift on other clusters, which means you do not have to adapt existing deployments and stateful sets, at least in terms of routes. Multinode operation is planned for future versions but has not yet been implemented.
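For users coming from OpenShift, a route can therefore be carried over unchanged – a hypothetical example with placeholder service and host names:

```shell
cat <<'EOF' > route.yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: pos-app
spec:
  host: pos.example.com
  to:
    kind: Service
    name: pos-app
  port:
    targetPort: 8080
EOF
```

Applying it with `oc apply -f route.yaml` (or kubectl) creates the route against MicroShift's OpenShift routing API.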

The lightweight edge version of OpenShift already offers solid functions but cannot keep pace with the flexibility of K3s as of this writing – particularly with optional multinode operation. What I liked about MicroShift was its integration with the image builder for OSTree at the edge and that OpenShift routing is included. As an open source project, however, MicroShift is currently still too heavily dependent on commercial products such as RHEL and the closed Red Hat Registry. One hopes a completely open source project will appear in the future that will then also run on free systems such as Fedora without a Quay account. The current upstream documentation also leaves a lot to be desired: It describes how to set up version 4.8 from April 2022 and not the current 4.14 version.

Whether the MicroShift project's switch from the Hostpath storage driver to TopoLVM was a good idea remains to be seen. Although TopoLVM supports functions such as persistent volume (PV) snapshots, it has higher resource requirements (no fewer than eight containers), needs more time for PV provisioning, and demands a more complex disk configuration.


MicroK8s (Canonical)

MicroK8s [4] is Canonical's lightweight version of Kubernetes. Here, too, the manufacturer specifies the minimum RAM requirement of the distribution at around 500MB. The basic installation does not include an ingress router such as Traefik or NGINX, nor does it install a default storage driver. Canonical groups these functions in add-ons, which can be installed on top with the microk8s command. In this way, users can pick and choose what they need for their setups. MicroK8s also does away with etcd as the config store in the basic version. A proprietary distributed in-memory variant of SQLite, Dqlite, is used instead.
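Enabling the typical edge basics then takes a single command – a sketch using add-on names from the current MicroK8s catalog:

```shell
# Wait until the base services are up, then enable add-ons:
# "ingress" deploys an NGINX ingress controller, "hostpath-storage"
# a simple default StorageClass, "metrics-server" the metrics API.
microk8s status --wait-ready
microk8s enable ingress hostpath-storage metrics-server
```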

Unlike MicroShift with its CRI-O runtime, MicroK8s embeds containerd directly, much as Docker does. To make matters worse, the MicroK8s setup is not handled by the regular Apt package manager (Ubuntu/Debian) but by Canonical's Snap packaging system. Therefore, the required binaries and configuration files are not stored in the default Linux directories (e.g., /etc/, /var/) but somewhere below /snap (e.g., /snap/microk8s/xxxx/), making debugging and troubleshooting more difficult.

Installations with CRI-O, for example, provide the crictl tool to retrieve information from running containers and images. The containerd counterpart, ctr, does not work well here because of Canonical's unusual Snap setup. The Snap confinement also means that MicroK8s does not work on systems with SELinux – only with Canonical's AppArmor. Calico is used as the network driver instead of Flannel. MicroK8s can also work with other CNIs, relying on add-ons, such as Kube-OVN (Open vSwitch), to do so.

The installation on an Ubuntu 22.04 LTS server is very simple. You can add the MicroK8s Snap during the operating system installation. The microk8s command-line tool provides information about the current setup and also manages the desired add-ons. To add nodes, you use the microk8s add-node command, which generates an individual token for each new node; a newly set up host can use the token to join the cluster later. The running system is managed with microk8s kubectl or with the kubectl tool on a workstation. MicroK8s provides the Kubernetes dashboard as an add-on (Figure 3).
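The join workflow looks roughly like this (the join command and token are printed by add-node; the address shown is a placeholder):

```shell
# On the existing node: generate a one-time token and join command.
microk8s add-node
# On the new host, run the printed command, for example:
#   microk8s join <server>:25000/<token>

# Back on any node: verify that the cluster sees both members.
microk8s kubectl get nodes
```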

Figure 3: MicroK8s installs the Kubernetes dashboard as an admin user interface with a single command.
