Lead Image © Franck Boston, 123RF.com

Rancher Kubernetes management platform

Building Plans

Article from ADMIN 71/2022
Rancher has set up shop as an agile alternative to Red Hat OpenShift and an efficient way to manage Kubernetes clusters. In terms of architecture, a Rancher setup differs significantly from classic Kubernetes.

When the talk turns to Kubernetes, many instantly think of the major league products by Red Hat and Canonical. Red Hat OpenShift is a massive, complex Kubernetes distribution with a large number of components and a connected app store for container applications. Although Rancher now belongs to a major Linux distributor, SUSE has largely kept out of the Rancher developers' way so far, which is why Rancher has retained much of the simplicity its fans have loved for years. Anyone who wants to get started with Kubernetes today can therefore find a way to do so with Rancher, and the entry bar is set low.

Even so, a Rancher setup is not a no-brainer. Various factors have to be considered with regard to the hardware; more specifically, the number of machines and their dimensioning deserve attention. Once the hardware is ready for use in the rack, the next step is to install a Kubernetes distribution, because Rancher sets up its own infrastructure completely on Kubernetes. What sounds complicated is not a problem in practice, because Rancher comes with its own lightweight Kubernetes distribution in tow in the form of K3s, which provides all the features you need.
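To give you a feel for how lean that bootstrap is, the following sketch shows roughly how a K3s base and the Rancher server Helm chart come together on a single node. The hostname is a placeholder, and a production setup would add more nodes and a proper certificate strategy:

# Install K3s as the Kubernetes base (single node shown for brevity)
curl -sfL https://get.k3s.io | sh -
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# cert-manager is a prerequisite for the Rancher chart
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace --set installCRDs=true

# Install Rancher from the official chart repository;
# rancher.example.com is a placeholder for your own hostname
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
  --namespace cattle-system --set hostname=rancher.example.com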

In this article, I guide you through the implementation of Rancher in your setup. The project starts on a proverbial greenfield site, the objective being a Rancher cluster that can be used in production. Along the way, I introduce a few terms from the Rancher world that administrators encounter regularly.

Architecture

If you have already worked with one of the other major Kubernetes distributions, you might be a bit overwhelmed by Rancher's core features at first, because Rancher works differently from most competitor products. A comparison quickly makes this clear: OpenShift comprises a management plane, also known as the control plane, which includes all of the central services in the environment. OpenShift uses precisely one Kubernetes instance for its own management. If users start containers in Kubernetes, the containers run in the existing cluster and use its existing infrastructure.

Rancher takes a different approach. It not only sees itself as a tool for managing workloads in Kubernetes, but also as a tool for controlling and managing Kubernetes environments. Therefore, applications do not run in the Kubernetes cluster that Rancher requires for its Kubernetes components. Instead, Rancher assumes that each end-user setup is a separate Kubernetes installation, which Rancher then also operates.

Because Rancher installs agents in these satellite setups, it can also control the workloads in these secondary Kubernetes clusters. Consequently, the programs that a Kubernetes cluster is supposed to run in Rancher never run in the same instance as the Rancher services themselves, but in a separate Kubernetes environment that Rancher controls downstream.
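You can see this split directly on the command line. The following is only a sketch, assuming kubectl points at the Rancher management cluster for the first command and at a downstream cluster for the second:

# On the management cluster, every downstream environment is an API object
kubectl get clusters.management.cattle.io

# On a downstream cluster, the Rancher agents run in the cattle-system namespace
kubectl get pods -n cattle-system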

Some administrators might already be cringing at this point, because – at first glance – the Rancher approach seems clumsy and, above all, not very efficient in terms of resource usage. In fact, the infrastructure components that are part of every Kubernetes cluster do generate some overhead themselves, but because the individual Kubernetes instances in Rancher rarely run thousands of pods – workloads that do not belong to the same setup end up in separate Kubernetes instances anyway – the additional computational overhead is manageable.

On the plus side, you get genuinely clean isolation of individual applications in the Kubernetes cluster, as well as comprehensive monitoring and alerting capabilities for each end-user Kubernetes cluster. This approach also means that Rancher gives you an option that is missing in other environments: It can manage the commercial Kubernetes offerings found in AWS, Azure, or Google Cloud, if need be.

Hardware

Once a company has decided to use Rancher, the first thing on the agenda is its basic deployment, for which you need hardware. Rancher can be operated in a completely virtualized form, and nothing can stop you from doing so in principle, but if the same hosts are running third-party workloads belonging to completely different applications alongside the Rancher Kubernetes components, resource bottlenecks can become a concern.

Like any other application, Rancher feels most at home on its own hardware. Because the setup in this example is intended to depict Rancher in a production environment as realistically as possible, I am also assuming that you have bespoke Rancher hardware. To achieve high availability, you need two servers, each with 128GB of RAM and at least 32 virtual CPUs (vCPUs). What the manufacturer means by vCPUs in this context is not entirely clear from the documentation.

However, you can safely assume that a current Xeon with 24 physical cores (i.e., 48 threads) is powerful enough to run most Rancher setups. Rancher itself states that a machine of this dimension can reasonably manage 2,000 clusters and up to 20,000 compute nodes, even with the Rancher database grabbing two cores and 4GB of RAM for itself in the background. Input/output operations per second (IOPS) matter more for the database, so the servers' storage should ideally be fast flash memory.
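If you want to sanity check a candidate machine's storage before committing to it, a short random-write benchmark with fio gives a useful first impression. The test file path and size below are arbitrary examples:

# Rough random-write IOPS check (path and size are placeholders)
fio --name=rancher-iops --filename=/var/lib/rancher-iops.fio --size=1G \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --numjobs=4 --group_reporting
rm -f /var/lib/rancher-iops.fio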

In a normal setup, you will have at least two worker nodes (i.e., the hosts on which the user Kubernetes clusters run). The documentation refers to these as nodes, compute nodes, or target nodes, but they are the same servers in all cases. In terms of hardware, the systems just need to be powerful enough. Unlike fully virtualized or paravirtualized guests, containers do not burn CPU time running their own virtual CPUs and Linux kernels, but the containers with their applications will still want a good helping of RAM. If you dimension the target nodes like KVM nodes, you can't normally go wrong.

Unlike the nodes for the Rancher components, however, the compute nodes need attention paid to their local storage media, because that is where the containers are stored during operation. A fast but small NVMe-based drive can fill up quickly and bring the installation to a standstill. Throw in a few extra gigabytes, and you will be on the safe side in most cases.

Tricky Question

For Rancher services to run on a system at all, Linux is an absolute prerequisite, which means you are free to choose the distribution. The temptation is often to use what you know from your own experience, but by doing so, you might be wasting a great opportunity, because Rancher does not offer its components in the form of classic software packages: Rancher itself comes packaged in containers.

Therefore, any distribution suitable for running containers with Docker or Podman is also a candidate for Rancher. From the administrator's point of view, you have no reason to commit to a full-fledged Linux – a microdistribution such as CoreOS (Figure 1) will do the trick. Flatcar Linux is an alternative as well, and I have already looked at it in detail in the past [1]. Ubuntu Core is yet another option. Rancher even had its own mini Linux named RancherOS, but it is no longer officially supported.

Figure 1: Rancher does not require the target systems to have a full-fledged Linux distribution. CoreOS or Flatcar Linux are sufficient to run container workloads. © Red Hat
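If you simply want to see Rancher in action before planning the full environment, the project also documents a single-container test installation that runs on any Docker-capable host. This is only a sketch of that quick start and explicitly not the production layout discussed here:

# Quick, non-production test installation of Rancher on a Docker host
docker run -d --restart=unless-stopped --privileged \
  -p 80:80 -p 443:443 rancher/rancher:latest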

If you don't like the idea of a microdistribution, you'll still do fine with the usual candidates, but make sure from the outset that the system you roll out is as lean as possible. Introducing Rancher also gives you a great opportunity to take care of automation: Kickstart rules for AlmaLinux or Rocky Linux can be created quickly and will save you a huge amount of work, especially when scaling the platform. Automate the system configuration on the individual servers to the extent possible – a sketch of what that preparation could look like follows below.
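What such node preparation could look like – for example, in a Kickstart %post section or a cloud-init script – is sketched below. The package names and the Docker repository are assumptions for AlmaLinux or Rocky Linux and will need adapting to your environment:

# Example node preparation, e.g., run from a Kickstart %post section
dnf -y install dnf-plugins-core
dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf -y install docker-ce docker-ce-cli containerd.io
systemctl enable --now docker

# Kubernetes components dislike swap; disable it permanently
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab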

Clearly, a sensibly planned Rancher setup requires some work up front before you even get started. The overhead pays off later, though, because the cluster can then be expanded quickly and easily.
