Secure access to Kubernetes

Avoiding Pitfalls

Accounts for Humans

Authenticating a human user with a certificate works in a similar way. First, generate a certificate signing request (CSR). It is important to store the username in the CommonName (CN) field and to map all the desired group memberships via Organization (O) entries. With OpenSSL, this would be:

$ openssl req -new -keyout testuser.key -out testuser.csr -nodes -subj "/CN=testuser/O=app1"

Now sign the request with the CA certificate and the key of the Kubernetes cluster. If you use the ca command in OpenSSL, you will typically have to modify openssl.cnf to remove mandatory fields like country.
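If you prefer not to set up the ca command, the x509 subcommand will do the job in one line. The CA paths below assume a cluster set up with kubeadm; adjust them to match your environment:

$ openssl x509 -req -in testuser.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out testuser.crt -days 365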

The user then packs the certificate, including the key, into their ~/.kube/config file. Instead of the token: element used with the service account, the user context contains the client-certificate-data and client-key-data fields. They hold the Base64-encoded versions of the certificate and private key, each of which the YAML configuration file expects on a single line. If OpenSSL is used to generate the certificate, only the block between BEGIN CERTIFICATE and END CERTIFICATE is relevant and needs to be packaged.
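If you would rather not base64-encode the files by hand, kubectl can embed the data for you. The cluster and context names here are only examples:

$ kubectl config set-credentials testuser --client-certificate=testuser.crt --client-key=testuser.key --embed-certs=true
$ kubectl config set-context testuser@mycluster --cluster=mycluster --user=testuser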

Roles and Authorization

Two entities are now accessing the cluster that have passed the first authentication step mentioned at the beginning, but they do not yet have any rights in the cluster. To change this, you need to shift the configuration focus to authorization.

The first concept you need to look into here is the role, which describes a set of permitted actions. The attributes of a role include:

  • the namespace in which the role acts,
  • the function it performs (read, write, create something, and so on),
  • the type of objects it accesses (pods, services), and
  • the API groups to which it belongs and that extend the Kubernetes API.

Besides the normal role that applies within a namespace, the ClusterRole applies to the entire Kubernetes cluster. Kubernetes manages roles through the API (i.e., you can work at the command line with kubectl).
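For example, the following commands list the existing roles and cluster roles and are a good starting point for an inventory:

$ kubectl get roles --all-namespaces
$ kubectl get clusterroles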

YAML files describe the roles. Listing 3 shows a role file that allows an entity to read active pods. If you want to create a cluster role, you instead need to enter ClusterRole in the kind field and delete the namespace entry from metadata (a sketch follows after the listing). Now, save the definition as pod-reader.yml and create the role by typing:

Listing 3

Role as a YAML File

01 apiVersion: rbac.authorization.k8s.io/v1
02 kind: Role
03 metadata:
04   namespace: default
05   name: pod-reader
06 rules:
07 - apiGroups: [""] # "" stands for the core API group
08   resources: ["pods"]
09   verbs: ["get", "watch", "list"]
$ kubectl apply -f pod-reader.yml

Again, be sure to pay attention to the namespace in which you do this.
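For reference, the cluster-wide variant of Listing 3 announced above could look like this; the name is arbitrary:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader-cluster
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]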

The question still arises as to who is allowed to work with the rights of this role. Enter RoleBinding or ClusterRoleBinding, with which you can assign users and service accounts to existing roles.

In concrete terms, when an API request arrives, Kubernetes first determines whether the user is allowed to authenticate at all. If so, it tries to assign a role to the user via a RoleBinding. If this also succeeds, Kubernetes derives the user's rights from the role found and then checks whether the request is within the permitted scope.

To specifically link the previously created testserviceaccount and testuser with the pod-reader role, create the RoleBinding shown in Listing 4. After creating the binding, testuser can finally read the pods in the default namespace with kubectl get pods; a quick verification follows after the listing.

Listing 4

RoleBinding Example

01 apiVersion: rbac.authorization.k8s.io/v1
02 kind: RoleBinding
03 metadata:
04   name: read-pods
05   namespace: default
06 subjects:
07 - kind: User
08   name: testuser
09   apiGroup: rbac.authorization.k8s.io
10 - kind: ServiceAccount
11   name: testserviceaccount
12   namespace: default
13 roleRef:
14   kind: Role
15   name: pod-reader
16   apiGroup: rbac.authorization.k8s.io
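Assuming you saved Listing 4 as read-pods.yml, applying it and checking the result takes two commands; kubectl auth can-i answers with a simple yes or no (impersonating a user with --as requires appropriate rights, which a cluster admin has):

$ kubectl apply -f read-pods.yml
$ kubectl auth can-i get pods --as=testuser -n default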

Role Behavior

If you want to manage what a user can access, you need to restrict the role. For example, some resources are arranged hierarchically. If testuser should only be able to access the log subresource of the pods resource, the entry under resources in the role definition in Listing 3 becomes "pods/log".
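In the rules section of the role, this restriction would then read (a sketch; the verbs can be narrowed further):

rules:
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]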

Once you have created your Kubernetes cluster with kubeadm, you will see a ClusterRole named cluster-admin that provides full access. If you assign this role to a user, the user has unrestricted access privileges.
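For example, a single kubectl call binds cluster-admin to testuser; the binding name is arbitrary, and you should use this sparingly:

$ kubectl create clusterrolebinding testuser-admin --clusterrole=cluster-admin --user=testuser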

To avoid an inflation of single roles, at least on the level of the entire cluster, you can have aggregated roles among ClusterRoles. In this case, the individual roles are assigned a label field with a specific value. The aggregated role then gathers all roles that have a label with this value and thus forms a union of all these roles. A ClusterRole created in this way can again be used in the ClusterRoleBinding, and a user assigned to the role then enjoys all the rights assigned to the role.
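A minimal sketch of such an aggregated role, assuming the individual roles carry a matching label such as rbac.example.com/aggregate-to-monitoring:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.example.com/aggregate-to-monitoring: "true"
rules: [] # the control plane fills this in from the matching roles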

In addition to the role-based authorization described above, attribute-based authorization (ABAC) applies a more complicated, policy-based set of rules to API access. The special-purpose Node authorizer, in turn, evaluates requests from kubelets by reference to the sender. In webhook mode, Kubernetes first sends a JSON description of each API request from the users to an external REST service, which then replies with an allow or deny decision.
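The webhook's reply is not a bare Boolean but a SubjectAccessReview object whose status field carries the decision; a positive response looks roughly like this:

{
  "apiVersion": "authorization.k8s.io/v1",
  "kind": "SubjectAccessReview",
  "status": {
    "allowed": true
  }
}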
