Exploring Kubernetes with Minikube

Kubernetes Kickoff

Article from ADMIN 47/2018
Minikube lets you set up Kubernetes in a local environment, so you can get some practice before rolling it out in a network or cloud setting.

Special Thanks: This article was made possible by support from the Linux Professional Institute.

If somehow you've missed out on ten years of cutting-edge technology, permit me to fill in the gaps: containers are king and orchestrators are required to ensure that containers behave properly.

Over the past few years, Docker has served as a reliable container runtime with improved networking and a number of other important features. More recently, the Kubernetes [1] orchestration environment has emerged from the camp of Google. Since then, the popularity of Kubernetes has expanded exponentially.

The meteoric rise of Kubernetes follows the explosion of containers in all facets of IT. Kubernetes, which was created by Google, offers the ability to manage otherwise-unmanageable, multiple containers on multiple hosts for resilience and scalability while retaining the portability and speed for software releases within containers.

Kubernetes offers rolling updates to minimize disruption when a new feature is released, and the Kubernetes environment provides the ability to scale, load balance, and provide redundancy to applications should a container (or a collection of containers, known as a pod ) go offline.

For many users, however, Kubernetes remains a black art due to a relatively steep learning curve. Commercial products, such as Red Hat's OpenShift, have attempted to improve access to Kubernetes, but versatile tools such as OpenShift can be as nuanced and complicated as working with Kubernetes directly.

One easy and accessible way to start experimenting with Kubernetes is to use Minikube [2]. Minikube is a tool that is designed to let the user work with Kubernetes locally through a virtual environment. The official Kubernetes site states: ``Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.''

For most Kubernetes beginners, installing locally is much easier than learning the integration quirks of a cloud provider. With Minikube and a little Docker knowledge under your belt, it's perfectly possible to begin learning the basics once you complete the somewhat arcane installation.

This article describes how to get Kubernetes up and running on a local Linux system using Minikube, so you can experiment with it and see if you would like to deploy it on a larger scale. I'll use KVM as a virtual machine and deploy a pair of nginx web server images in the cluster as a proof of concept. Of course, you can modify this configuration as needed to customize this experiment for your own environment.

I'll use Ubuntu 16.04 ``Xenial Xerus'' LTS for this article. If you are using a different version or a different Linux distribution, some of the steps might vary. If you get stuck, see the Minikube GitHub page for additional installation information.

Easy Peasy

Start by installing two packages for the KVM virtual machine. Ideally, your local laptop or desktop will already be using the Intel VT or AMD-V hardware extensions that support virtualization. (Check your BIOS settings to see if they exist, and then enable them if they're present and disabled.) If your system doesn't support these hardware extensions, see the box entitled ``Alternatives to Virtualization.''
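You can check for these extensions from within Linux as well; a minimal sketch (the vmx CPU flag indicates Intel VT-x, and svm indicates AMD-V):

```shell
# Count CPU threads advertising hardware virtualization support.
# vmx = Intel VT-x, svm = AMD-V; a count of 0 (and a nonzero exit
# status) means the extensions are absent or disabled in the BIOS.
grep -E -c '(vmx|svm)' /proc/cpuinfo
```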

Alternatives to Virtualization

According to one very nicely written page [3], KVM will still work without the hardware extensions, but it will be much slower.

It is also possible to run Minikube directly on the host without a virtual environment. According to the documentation at the Minikube GitHub page, ``Minikube also supports a --vm-driver=none option that runs the Kubernetes components on the host and not in a VM. Docker is required to use this driver but no hypervisor. If you use --vm-driver=none , be sure to specify a bridge network for Docker. Otherwise, [the network settings] might change between network restarts, causing loss of connectivity to your cluster.''

For Debian-based package systems, the command for installing KVM is:

$ apt install libvirt-bin qemu-kvm

Add the text in Listing 1 to a little script and make it executable. The few lines in Listing 1 are all you need to add the Kubernetes key for your package manager, as supplied by the master of containers, Google.

Listing 1: Adding the Key

01 apt-get update && apt-get install -y apt-transport-https
02 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
03 cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
04 deb http://apt.kubernetes.io/ kubernetes-xenial main
05 EOF

Install the kubectl command line interface package with:

apt-get update
apt-get install -y kubectl

For convenience, you might wish to add the preceding commands to a simple script called kubectl_install.sh and make it executable, then run the script to install the kubectl package:

$ chmod +x kubectl_install.sh
$ ./kubectl_install.sh

If you're interested, the commands reference for kubectl is available at the Kubernetes website [4].
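To confirm the installation worked, you can ask kubectl for its client version; note that the --client flag keeps kubectl from trying to contact a cluster, which doesn't exist yet at this stage:

```shell
# Print only the client version; without --client, kubectl also
# attempts to query the (not yet running) cluster's API server.
kubectl version --client
```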

The next step is to add Minikube to the system with the following commands (split up into three commands for ease of reading):

$ curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
$ chmod +x docker-machine-driver-kvm2
$ sudo mv docker-machine-driver-kvm2 /usr/local/bin

The first command downloads the docker-machine-driver-kvm2 binary using curl . The second command makes the download executable, and the third moves it into a directory on the system path, so it is visible system-wide (feel free to choose another directory from your $PATH instead of /usr/local/bin ). If you're unsure that you've copied it correctly into your user's path, run the following command and look for a response about Plugin binaries to denote success:

$ docker-machine-driver-kvm2

You can also run echo $PATH if you don't know the existing paths for your user and you want to know where to copy your binary.
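If the minikube binary itself isn't on your system yet, it can be fetched the same way as the driver; a sketch, assuming the standard Linux amd64 build from the Minikube releases bucket:

```shell
# Download the Minikube binary, make it executable, and move it
# into a directory on the system path, as with the KVM2 driver.
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube-linux-amd64
sudo mv minikube-linux-amd64 /usr/local/bin/minikube
```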

Look online for a short list of KVM commands [5].

To start up Kubernetes with Minikube and KVM, simply run the following command:

$ minikube start --vm-driver kvm

In order to stop the Minikube instance, you can run the following command:

$ minikube stop
Stopping local Kubernetes cluster...
Machine stopped.

If you reboot or come back to Minikube after a period of time and this command doesn't work, check the Troubleshooting section for information on how to fix the problem.
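At any point, you can check the state of the local cluster with the minikube status subcommand, which is a quick first diagnostic when something misbehaves:

```shell
# Report the state of the local cluster: the VM, the cluster
# itself, and the kubectl context configured to talk to it.
minikube status
```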

Looking Around

Now onto the good stuff. Barring any hideous errors, you should be up and running.

Kubernetes splits up resources (think on a per-customer basis, as a simple example) into namespaces , and as a result, in order to see everything within a cluster, you need to ask for --all-namespaces .

The following command shows all pods in the cluster and across all of the available namespaces:

$ kubectl get pods --all-namespaces

The output appears in Figure 1.

Figure 1: Output from all pods running in all namespaces in the Kubernetes cluster.

Figure 1 shows that the cluster is responding with lots of running pods.

Minikube offers a simple dashboard GUI for managing the Kubernetes cluster (Figure 2). To start the dashboard, enter:

$ minikube dashboard

Figure 2: You can learn a few useful things about Kubernetes by rummaging around the dashboard in Minikube.

The dashboard is intended for commands that don't require root access. If you need to execute a privileged command, such as the virsh commands, you're better off at the command line. See the box entitled ``Minikube Network Error'' if you have any trouble getting Minikube started.

Minikube Network Error

If you see a network error such as: Error starting host: Error starting stopped host: Error creating VM: virError(Code=55, Domain=19, Message=`Requested operation is not valid: network `minikube-net' is not active') then fret not.

Type this command first as the root user to list the VMs present within KVM:

$ virsh list

You should see minikube listed, as in Figure 3.

Now, as the root user, type virsh to enter its interactive shell (as per Figure 4) and then type this command at the virsh # prompt:

virsh # net-start minikube-net

That should bring your networking up so you can start the VM as usual afterwards.
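If the network keeps coming up inactive after reboots, you can mark it as autostarting; a sketch, run as root, where the network name minikube-net matches the one in the error message above:

```shell
# Mark the libvirt network used by Minikube as autostarting,
# so it is activated whenever the libvirt daemon starts.
virsh net-autostart minikube-net
```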

You should be aware that the KVM driver now displays deprecation warnings if you don't install the second version of the driver (KVM2).

If you get stuck then much of the above information can be found at the official Kubernetes site, although Minikube's Github page also provides lots of useful tips.

Figure 3: A list of the VMs running in KVM.
Figure 4: Type ``virsh'' and ``net-start minikube-net'' if you see a network error.

Engine X

You need something to play with inside your shiny Kubernetes cluster in order to demonstrate how to use the orchestrator.

Copy the contents of Listing 2 into a file called nginx.yml . Indents and spacing can be a killer in YAML (``YAML Ain't Markup Language'' [6]), so be careful.

Listing 2: nginx.yml

01  ---
02
03 apiVersion: extensions/v1beta1
04 kind: Deployment
05 metadata:
06   name: nginx-dep
07 spec:
08   replicas: 2
09   template:
10     metadata:
11       labels:
12         run: nginx-dep
13     spec:
14       containers:
15       - name: nginx-dep
16         image: nginx
17         ports:
18         - containerPort: 80
19
20 ---
21
22 apiVersion: v1
23 kind: Service
24 metadata:
25   name: nginx-svc
26   labels:
27     run: nginx-svc
28 spec:
29   type: NodePort
30   ports:
31   - port: 80
32     protocol: TCP
33   selector:
34     run: nginx-dep

Listing 2 configures the latest official nginx container (via image: nginx ) and also ensures that two replicas are running for resilience via a Deployment , which is then presented by the nginx-svc service. This should give you two pods.

Enter the following command to ingest the contents of the nginx.yml file in Listing 2:

$ kubectl create -f nginx.yml

Assuming your formatting and syntax are correct (try the YAML checker [7] to validate the formatting), the magical Kubernetes springs to life and immediately gets jiggy with creating pods, a deployment, and a service.

Run the following command afterwards:

$ kubectl get pods

The output is shown in Figure 5. This command applies to the default namespace, so there was no reason to specify a namespace.

Figure 5: Running the command ``kubectl get pods'' in the default namespace.

Figure 5 shows that two nginx pods are running, and that's because the config file asked for two replicas for redundancy reasons. If one pod fails, you will still have a web server available. The deployment will also restart another pod if one fails.
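You can watch this self-healing in action by deleting one of the pods and then listing them again; the pod name below is illustrative (yours will carry a different generated suffix):

```shell
# Delete one replica; the Deployment notices the missing pod
# and immediately schedules a replacement to restore the count.
kubectl delete pod nginx-dep-54b9c79874-b9dzh
kubectl get pods
```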

The -o wide option offers more networking information than the standard command:

$ kubectl get pod nginx-dep-54b9c79874-b9dzh -o wide

See the output in Figure 6.

Figure 6: Good old ``get pods'' but with width: ``kubectl get pod nginx-dep-54b9c79874-b9dzh -o wide'' showing an internal pod IP Address in the 172.17.0.0 range.

Use the -n option to specify a namespace:

$ kubectl get pods -n namespace_name

To get the deployments running in the default namespace:

$ kubectl get deployment

See Figure 7.

Figure 7: The output of ``kubectl get deployment''.

Or add the -o wide option to view the replicas:

$ kubectl get deployment -o wide

See Figure 8.

Figure 8: The command ``kubectl get deployment -o wide'' shows the replicas in place and more.
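Because nginx-svc is of type NodePort, Minikube can tell you the URL at which the web server is reachable from the host, giving you a quick end-to-end test of the deployment:

```shell
# Print the URL (node IP plus allocated NodePort) at which the
# nginx service is exposed, then fetch the default page with curl.
minikube service nginx-svc --url
curl "$(minikube service nginx-svc --url)"
```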

