Kubernetes Auto Analyzer

Securing Kubernetes

Three Sheets to the Wind

With a working installation, you can begin running the tool. Kubernetes Auto Analyzer presents help output on its own, but to get anywhere useful, you need to speak to the Kubernetes cluster in an authenticated manner.

First, for ease on modern Kubernetes installations, you need to generate a Bash variable that you pull out of a Kubernetes secret. To extract a TOKEN variable, I'm going to use the command shown in Listing 2. The resulting text will be several lines long. Next, I need the IP address that's visible from my API server, which I can discover by looking at the output from the describe command. I then transpose the IP address from the lengthy output and query TCP port 8443 over HTTPS using $TOKEN with the preferred report name chrisbinnie_k8s.html, as you can see from the kubeautoanalyzer command that follows. Without RBAC (role-based access control) errors, you should find the report file in your current directory.

Listing 2: Getting and Using a Token

$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t'); echo $TOKEN
bTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.gyjpaChSSdpq2WQqsFB81noKQShT19XkoO7620t70w8GVSRt3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5[snip ...]
$ kubectl describe po kube-apiserver-minikube -n kube-system
$ kubeautoanalyzer -s -t $TOKEN -r chrisbinnie_k8s --html
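The long pipeline in Listing 2 can be hard to follow in one go. The sketch below runs the same grep/cut/tr stages against canned describe output, so you can see what each stage contributes; the token value is a stand-in, not a real service account credential.

```shell
# A sketch of the Listing 2 extraction pipeline, applied to canned
# 'kubectl describe secret' output; the token string is a fake stand-in.
describe_output=$(printf 'Name:\tdefault-token-abcde\nType:\tkubernetes.io/service-account-token\n\ntoken:\teyJhbGciOiJSUzI1NiJ9.FAKE.PAYLOAD\n')

# Keep the line starting with 'token', take the field after ':', strip tabs
TOKEN=$(printf '%s\n' "$describe_output" | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
echo "$TOKEN"
```

Because a service account token contains no colons, `cut -f2 -d':'` safely takes everything after the `token:` label, and `tr -d '\t'` strips the padding.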

Second, on my Kubernetes v1.10 (Minikube) installation, I need to enable RBAC permissions, which I'm relatively new to on Kubernetes. Therefore, unless you know what you're doing, only add the RBAC code snippet in Listing 3 on a test cluster – and even then, delete it afterward – because it provides full cluster-admin permissions. You have been warned! If you encounter ServiceAccount-style RBAC errors, then hopefully this solution will work for you. This RBAC snippet is taken shamelessly from a GitHub discussion and adapted for my use.

Listing 3: An RBAC Snippet for a Test Cluster

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kube-auto-analyzer-rbac
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

If you need RBAC permissions enabled, then simply save the content of Listing 3 into a file (e.g., kube-auto-analyzer-rbac.yaml ) and apply it:

$ kubectl create -f kube-auto-analyzer-rbac.yaml

Remember, this RBAC role probably isn’t production-ready, because it offers full admin privileges! Delete it afterward, or read up on RBAC if you’re not using a test cluster.
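Deleting the binding afterward is a one-liner, keyed to the name from Listing 3. In the sketch below, kubectl is stubbed out so the command can be dry-run anywhere; on a real cluster, drop the stub function and run the delete directly.

```shell
# Dry-run stub: prints the command instead of contacting a cluster.
# Remove this function definition to run against a real cluster.
kubectl() { echo "would run: kubectl $*"; }

# Delete the over-privileged binding created from Listing 3
kubectl delete clusterrolebinding kube-auto-analyzer-rbac
```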

Ship Shape and Bristol Fashion

Kubernetes Auto Analyzer now produces excellent and extremely useful output. The CIS insights it offers are exceptional, and the reporting style is precise and very easy to decipher (Figure 2). Figure 3 shows more direct references to issues found in the etcd implementation (a relatively unusual setup in Minikube).

Figure 2: A few unwelcome red errors in the summary section of the Kubernetes Auto Analyzer report.
Figure 3: Warnings about the etcd “wal” files.

Apparently, the warnings in Figure 3 regarding “wal” files in Kubernetes relate to the following statement, according to comments in the etcd source: “A WAL is created at a particular directory and is made up of a number of segmented WAL files. Inside of each file the raft state and entries are appended.” Some further research in the CIS Benchmarks might offer more insight if you see errors you can’t figure out.

In addition to the Evidence sections (which underpin the report), a Kubernetes Authorization Options section, and a Vulnerability Evidence section (Figure 4), the report contains many other useful entries.

Figure 4: A vulnerability section in the report offers useful assistance.

In the Offing

Now that you have seen some of the intricate reporting output from Kubernetes Auto Analyzer, the value of this tool should be clear. With ever-evolving capabilities and, as a result, changes to its attack surfaces, Kubernetes is tricky to secure without some expert help. The underlying container runtime in use within your Kubernetes cluster (e.g., Docker) can bring its own minefield of security options, too, so getting your orchestrator hardened should be a key concern.

Whether your approach is running Kubernetes Auto Analyzer automatically and comparing historical report results or periodically auditing your Kubernetes cluster manually as new features or versions are enabled, the tool provides valuable insight into what the industry considers to be the most insecure Kubernetes settings.
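The historical-comparison approach can be as simple as archiving each report and diffing consecutive runs. In this sketch, fake text files stand in for real kubeautoanalyzer output, and the file names are illustrative only.

```shell
# Sketch: compare two audit reports to spot changed findings.
# Fake files stand in for real kubeautoanalyzer report output.
REPORT_DIR=$(mktemp -d)
printf 'check_a: PASS\ncheck_b: PASS\n' > "$REPORT_DIR/report-old.txt"
printf 'check_a: PASS\ncheck_b: FAIL\n' > "$REPORT_DIR/report-new.txt"

# diff exits non-zero when the files differ, meaning the findings changed
if ! diff -q "$REPORT_DIR/report-old.txt" "$REPORT_DIR/report-new.txt" >/dev/null; then
  echo "findings changed since last audit"
fi
```

Wiring this into a cron job or CI pipeline gives you a cheap alert whenever a new release or configuration change shifts the audit results.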

If you refer to the NCC GitHub page mentioned earlier, you can keep up with the latest versions of Kubernetes that the tool supports. As mentioned, the Kubernetes release cycle is ticking along at a rate of knots, so you need to stay on top of the latest versions when it comes to security. Having seen the quality of its reports, I hope you agree that running Kubernetes Auto Analyzer regularly is an extremely worthwhile task.

If you’re still not convinced about how imperative Kubernetes security is, here’s a final word from Kubernetes Auto Analyzer author McCune, when I asked what the most worrisome issue in the Kubernetes security space was currently. His insightful reply confirmed my experiences of seeing Kubernetes in its early days.

There's the early adopters who started using Kubernetes several version[s] ago, and who have had production clusters up and running for a while. The problems there can be more serious, as they may have started using the software before some security controls were available, so we might find unauthenticated access to the kubelet (which essentially provides root access to the underlying system). On more modern clusters, that kind of thing is less prevalent, but there are still some things to think about.

He continued with an interesting comment regarding certificate authorities when third-party add-ons are not used alongside Kubernetes, because of the current lack of revocation support: “… if a user loses their certificate, there's no way for it to be invalidated without entirely rebuilding the certificate authority on the cluster, which may not be an easy undertaking in a production system.”

Rate of Knots

There’s little doubt in my mind that Kubernetes will adapt and mature to take care of any issues I looked at briefly here. The sheer speed of its evolution is staggering, which brings many challenges. In the meantime, it is a very valuable exercise to plug as many of the gaps as possible. You can achieve this by introducing network policies to segment networks and their access between customers or components, firming up RBAC controls, configuring cluster-wide Pod Security Policies, and considering Security Context constraints. 
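As a concrete example of the network-policy step, a default-deny ingress policy like the following blocks all incoming pod traffic in a namespace until explicit allow rules are added. The name and namespace here are illustrative, and a CNI plugin that enforces NetworkPolicy is required.

```yaml
# Illustrative default-deny ingress policy; only enforced when the
# cluster's CNI plugin supports NetworkPolicy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}      # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress          # no ingress rules are listed, so all ingress is denied
```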

Once you’ve made those changes, make sure you periodically create auditing reports with a sophisticated tool like Kubernetes Auto Analyzer to check that new or deprecated features don’t leave your cluster vulnerable.

The Author

Chris Binnie’s latest book, Linux Server Security: Hack and Defend , shows how hackers launch sophisticated attacks to compromise servers, steal data, and crack complex passwords, so you can learn how to defend against such attacks. In the book, he also shows you how to make your servers invisible, perform penetration testing, and mitigate unwelcome attacks. You can find out more about DevOps, DevSecOps, containers, and Linux security on his website: https://www.devsecops.cc.
