A multicluster management tool for Kubernetes

One Stop Shop

Meaningful Workload Distribution

Even without placement rules, you can use OCM to distribute simple workloads statically to the managed clusters. To do so, grab an existing YAML file that describes an application and could be rolled out to a single cluster with:

kubectl apply -f
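For reference, a minimal nginx.yml of this kind might look like the following sketch (a generic NGINX Deployment; the name, image tag, and replica count are arbitrary example values, not taken from the article's lab setup):

```yaml
# nginx.yml -- plain vanilla NGINX Deployment (example values)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
```

Any manifest of this shape works; OCM does not care what the application is, only that the YAML is valid on the target cluster.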

I used a plain vanilla NGINX template in my lab. With the clusteradm tool, you can easily delegate existing definition files to a cluster:

clusteradm create work nginx1 -f nginx.yml --clusters one --context kind-hub

The OCM hub then sends the request to the one cluster, which subsequently generates and starts all the resources listed in the YAML file (Figure 2). You can view the status of your application with the hub:

clusteradm get works --cluster one
Figure 2: On the hub, OCM generates a workspace of the same name for each managed cluster. The management tool stores the manifests in the workspace.

The regular kubectl tool also shows the workloads; for example, type:

kubectl get manifestwork -A --context kind-hub

As an alternative to clusteradm, you can use kubectl to generate and roll out works. All you need to do is extend your existing YAML file. The declaration typically starts with something like:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
...

If you use a work agent, the whole thing looks a little different (Listing 1). Note that in this example, the manifest defines a static assignment to the one cluster (Figure 3).

Listing 1

ManifestWork Deployment

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace: one
  name: nginx1
spec:
  workload:
    manifests:
    - apiVersion: apps/v1
      kind: Deployment
      metadata:
        namespace: default
...
Figure 3: An NGINX demo rollout running on the managed one cluster as specified by the manifest on the hub cluster. The OCM agent (klusterlet) is responsible for implementing OCM requirements locally.

If you manage more than one cluster with OCM, first create placement rulesets (kind: Placement), and then distribute the workloads to the managed clusters in line with those rulesets:

clusteradm create work nginx1 -f nginx1.yml --placement <namespace>/<placement-ruleset name>
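A placement ruleset is itself just another manifest on the hub. As a rough sketch (the metadata names and the cluster set name are example values; see the project documentation for the full schema and further predicates):

```yaml
# placement.yml -- example Placement selecting up to two clusters
# from a cluster set (names here are hypothetical)
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: nginx-placement
  namespace: default
spec:
  numberOfClusters: 2
  clusterSets:
  - default
```

OCM then resolves the placement to a concrete list of clusters, and works bound to it are rolled out to exactly those clusters.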

How placements currently work is documented in detail on the project's website [5]. Describing the many options in detail would go beyond the scope of this article.

Conclusions

OCM provides the appropriate underpinnings for multicluster management for Kubernetes. In addition to the integrated functions for management and placement, the tool also comes with an add-on manager. Other projects extend the functionality, but only commercial implementations offer a GUI for OCM. Fortunately, as its use in hybrid environments increases, it is probably only a matter of time until free Kubernetes user interfaces offer a graphical view of OCM.

If you are interested in managing edge devices that use a miniature version of Kubernetes or just a simple container runtime like Podman, you might look at Flotta (see the "Flotta Alternative" box).

Flotta Alternative

The open source Flotta [6] project pursues a similar goal to OCM, likewise using a Kubernetes cluster as the central management platform. However, Flotta does not manage other Kubernetes clusters; instead, it manages edge devices without Kubernetes. The target systems only run Podman as a container runtime. Unlike OCM, Flotta does not use the existing Kubernetes API with its own extensions but needs a separate port for its management API. Edge devices must have access to the Flotta port on the management cluster, but Flotta does not require persistent access to the devices. The tool does not even require the edge devices to be permanently connected to the Flotta service.

The managed nodes then run the Flotta agent, which executes the assigned workloads with Podman and systemd on the edge device in question. The agent checks regularly (if the edge device's network connection allows) to see whether the Flotta server has jobs for it. Of course, the tool cannot simply delegate an arbitrary Kubernetes application to an arbitrary edge device. As an admin, you first need to make sure that the desired workloads will work with Podman.

Unfortunately, there has been little news of the project lately. The main sponsor is Red Hat, who is currently putting more emphasis on MicroShift for edge devices and has therefore moved a number of Flotta developers to the MicroShift or OCM teams. However, don't write Flotta off completely just yet. With the abundant interest in edge implementations without Kubernetes, maybe more community developers can be found to continue this interesting project. If you want to try it out, you need a simple Kubernetes setup running the Flotta operator and service and a Fedora 36 VM as a simulated edge device.
