The new OpenShift version 4

Big Shift

OpenShift: A Kubernetes Distribution

The OpenShift Container Platform is advertised as a Platform as a Service (PaaS) based on Kubernetes. First and foremost, though, OpenShift is a Kubernetes distribution. It has much in common with competing products such as SUSE's CaaS (Containers as a Service) Platform. The primary goal of OpenShift is to give admins a working Kubernetes installation as quickly as possible, on which they can run workloads in containers.

As you may have guessed, Red Hat enriches the product with some functions beyond Kubernetes – a special sauce, so to speak, for winning over customers.

Hello CRI-O!

Witness the change from Docker to CRI-O, whose "primary goal is to replace the Docker service as the container engine for Kubernetes implementations, such as OpenShift Container Platform" [3]. Docker, remember, is a combination of a userspace daemon and a format for disk images. If you start a Docker container, the Docker daemon combines various kernel functions under the hood to ensure that the data and services in a container are separated from the rest of the system (e.g., with namespaces and cgroups). Other container approaches, such as LXC, are based on exactly the same functions.

Red Hat has no interest in Docker bringing the container market, and the distribution of basic Red Hat images, under its wing almost single-handedly, so the company finally decided to launch a rival: CRI-O. The CRI-O container engine implements the Kubernetes Container Runtime Interface (CRI) and provides a platform for running Open Container Initiative (OCI)-compatible runtimes, so it handles the same images Docker does.
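To see what this means in practice, the following is a minimal sketch of how a RuntimeClass maps a name visible to Kubernetes onto an OCI runtime handler that CRI-O is configured with, here assumed to be the usual default, runc. The sketch assumes a recent Kubernetes API (node.k8s.io/v1; older clusters use v1beta1), and the pod and object names are invented:

# RuntimeClass: maps the Kubernetes-visible name "standard" to CRI-O's runc handler.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: standard
handler: runc
---
# A pod that explicitly requests that runtime class; without the field,
# CRI-O simply falls back to its default OCI runtime.
apiVersion: v1
kind: Pod
metadata:
  name: cri-o-demo
spec:
  runtimeClassName: standard
  containers:
  - name: shell
    image: registry.access.redhat.com/ubi8/ubi
    command: ["sleep", "infinity"]

Whether such a pod ends up on Docker or CRI-O is invisible at this level, which is what makes swapping the engine possible in the first place.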

In OpenShift 4, Red Hat is serious about replacing Docker. Previously, CRI-O was only available as an alternative to Docker; now it is exactly the other way around: CRI-O is the default container engine in OpenShift 4, and Docker is only available as a fallback.

Operator Framework

OpenShift 3.11 introduced the Operator Framework, also a component from the CoreOS developers. It is aimed specifically at administrators, not users, of Kubernetes clusters. In short, the promise is that administrators can use the Operator Framework to roll out all the services a cluster needs for efficient Kubernetes operation.

This is how it works: An Operator is a native Kubernetes application, that is, an application that runs as a container under Kubernetes but is controlled through the Kubernetes API. To make that possible, it brings its own controllers, which act as extensions of the API itself.
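A minimal sketch of such an API extension, with an invented group, kind, and fields purely for illustration, looks like this: the Operator registers a CustomResourceDefinition, and from then on its objects can be created and queried with the ordinary Kubernetes API, while the Operator's controller reconciles them in the background.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Must be <plural>.<group>
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Backup
    plural: backups
    singular: backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string
              target:
                type: string
---
# An instance of the new type; the Operator's controller watches objects
# of kind Backup and drives the cluster toward the state they describe.
apiVersion: example.com/v1
kind: Backup
metadata:
  name: nightly
spec:
  schedule: "0 3 * * *"
  target: cluster-backups        # hypothetical backup target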

An everyday example would be a Prometheus instance and its Alertmanager, which the admin controls through the Kubernetes API, but which also supplies Prometheus data by way of that same API. The glue needed to provide this control into and out of the container is the foundation of the Operator Framework that Red Hat has adopted from CoreOS.
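With the Prometheus Operator, that interplay looks roughly like the following custom resources; the API group and kinds are the ones the Operator registers, whereas the names, namespace, and service account are assumptions:

# The admin declares an Alertmanager and a Prometheus server as Kubernetes
# objects; the Operator creates and manages the underlying pods.
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: main
  namespace: monitoring
spec:
  replicas: 3
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  replicas: 2
  serviceAccountName: prometheus      # assumed to exist in the namespace
  serviceMonitorSelector: {}          # scrape everything a ServiceMonitor describes
  alerting:
    alertmanagers:
    - namespace: monitoring
      name: alertmanager-main         # service in front of the Alertmanager (name assumed)
      port: web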

An Operator Lifecycle Manager (OLM), which aids in the administration of Operators, is then added. Because letting every admin construct their own Operators doesn't make sense, CoreOS and Red Hat, together with AWS, Google, and Microsoft, put OperatorHub [4] in place, which acts much like Docker Hub, but for Operators. Manufacturers offer ready-made Operators there, and users can make their own Operators available to others in a separate area.
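Installing such a ready-made Operator with OLM then comes down to a couple of objects. The sketch below assumes the community catalog that OpenShift ships in the openshift-marketplace namespace and a package named prometheus; the namespace and channel names may differ from catalog to catalog:

# OperatorGroup: scopes the installation to one namespace.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: monitoring-operators
  namespace: monitoring
spec:
  targetNamespaces:
  - monitoring
---
# Subscription: tells the Operator Lifecycle Manager which Operator to
# install from which catalog and keeps it updated along the chosen channel.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: prometheus
  namespace: monitoring
spec:
  name: prometheus                    # package name in the catalog
  channel: beta                       # update channel (assumed)
  source: community-operators         # CatalogSource providing the package
  sourceNamespace: openshift-marketplace

OLM resolves the subscription against the catalog, installs the Operator, and later applies updates automatically as new versions appear in the channel.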
