Linking Kubernetes clusters

Growing Pains

Deployment

Before deployment can start, you need to enable federation for the resource types you want to deploy, such as pods, services, or namespaces. Again, the kubefedctl tool handles this step. For example,

# kubefedctl enable pods

lets you distribute resources of the Pod type. If a Kubernetes namespace exists that you want to federate as a whole, you can do so with the command:

# kubefedctl federate namespace demo --contents --enable-type

The --enable-type option ensures that any resource types not yet enabled are federated on the fly. On top of this, you can create a resource of the FederatedNamespace type that lets you control, at the namespace level, the clusters on which K8s creates resources. This even works at the level of a single deployment, but it causes more administrative overhead. However, if you rely on cloud providers where privacy concerns exist, this method gives you clear control over where your containers run.
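
The listings that follow use federated namespaces, deployments, and services, so those types need to be enabled as well. The following commands are a sketch, assuming a default KubeFed installation:

# kubefedctl enable namespaces
# kubefedctl enable deployments.apps
# kubefedctl enable services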

Setting up a namespace the right way for use in a federation also means having a YAML file like the one in Listing 2. The fedtestns namespace has to exist before the create step. The placement parameter lets you manage the clusters to which K8s rolls out the resources. To distribute pods across both clusters, you need to feed them in differently, as federated resources.

Listing 2

Federated Namespace

---
apiVersion: types.kubefed.io/v1beta1
kind: FederatedNamespace
metadata:
  name: fedtestns
  namespace: fedtestns
spec:
  placement:
    clusters:
    - name: earth
    - name: vulcan
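
To create the namespace and apply Listing 2, the following commands should do; the file name fedtestns.yaml is an assumption, and earth is the host cluster context, as in the unjoin example later:

# kubectl --context earth create namespace fedtestns
# kubectl --context earth apply -f fedtestns.yaml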

A simple web server is used as a test case. A normal, unfederated deployment results in the pods running in the local cluster only. The command for federating the namespace along with its contents distributes all the existing resources but does not ensure that newly created resources are distributed in the same way. You now have the option of creating a deployment directly as a FederatedDeployment instead. The YAML file for this is shown in Listing 3. It looks very similar to the one for a simple deployment, but the placement parameter ensures that the specified clusters also inherit the deployment.

Listing 3

Federated Deployment

apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: fedhttp
  namespace: fedtestns
spec:
  template:
    metadata:
      name: http
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: httpbin
          version: v1
      template:
        metadata:
          labels:
            app: httpbin
            version: v1
        spec:
          containers:
          - image: docker.io/kennethreitz/httpbin
            imagePullPolicy: IfNotPresent
            name: httpbin
            ports:
            - containerPort: 80
  placement:
    clusters:
    - name: earth
    - name: vulcan

After a short wait, you will have a pod and a deployment with the web service on both clusters. The pods on both clusters can also be accessed on port 80. On AWS, however, you additionally need to enable this port in the correct security group.
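
A quick way to verify the rollout is to query both member clusters and to test the web server through a port forward. This is a sketch; the context names match the listings, and /get is one of httpbin's endpoints:

# kubectl --context earth -n fedtestns get pods
# kubectl --context vulcan -n fedtestns get pods
# kubectl --context vulcan -n fedtestns port-forward deploy/fedhttp 8080:80 &
# curl http://localhost:8080/get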

For the pods in the clusters to work together, it is crucial to set up routing between the two clusters so that they can reach each other. You also need to configure firewall rules and security groups appropriately. It is advisable here to rely on an automation tool such as Ansible if the cluster at the cloud provider's end will be on-demand only.
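
To give an idea of what such automation could look like, the following Ansible task opens port 80 in an AWS security group. It is only a sketch: the group name, region, and source network are illustrative, and the task assumes the amazon.aws collection is installed:

---
- name: Open web port for the federated cluster
  hosts: localhost
  connection: local
  tasks:
    - name: Allow port 80 from the on-premises cluster network
      amazon.aws.ec2_security_group:
        name: fedtest-workers            # hypothetical security group
        description: Federated cluster worker nodes
        region: eu-central-1             # adjust to your setup
        purge_rules: false               # keep existing rules intact
        rules:
          - proto: tcp
            from_port: 80
            to_port: 80
            cidr_ip: 203.0.113.0/24      # example on-premises egress range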

Like the deployment, the service needs to be rolled out as a FederatedService if you use one. Listing 4 shows the service for the deployment from Listing 3. Note that service name resolution runs locally; that is, fed-service resolves to the cluster IP in the respective local cluster. This behavior is something to keep in mind when designing the service.

Listing 4

Federated Service

apiVersion: types.kubefed.io/v1beta1
kind: FederatedService
metadata:
  name: fed-service
  namespace: fedtestns
spec:
  template:
    spec:
      selector:
        app: httpbin
      type: NodePort
      ports:
        - name: http
          port: 80
  placement:
    clusters:
    - name: earth
    - name: vulcan
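
After applying the file (the file name, again, is an assumption), each member cluster should show the propagated service. Note that with type NodePort, each cluster assigns its own node port unless you pin one in the template:

# kubectl --context earth apply -f fed-service.yaml
# kubectl --context earth -n fedtestns get svc fed-service
# kubectl --context vulcan -n fedtestns get svc fed-service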

Components such as persistent volumes are also created locally on the clusters and must be populated there. Where pods access remote resources, you need to make sure those resources are reachable from both clusters, in terms of both IP connectivity and name resolution.
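
One way to check name resolution from inside a cluster is a throwaway busybox pod, sketched here for the earth cluster:

# kubectl --context earth -n fedtestns run dnstest --rm -it \
    --image=busybox:1.36 --restart=Never -- nslookup fed-service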

If you delete a federated resource, you delete the underlying resources on the member clusters at the same time. To remove only one cluster from a deployment, remove that cluster from the resource's placement list (a patch sketch follows the unjoin example below). You can remove the second cluster from the federation entirely with the command:

# kubefedctl unjoin vulcan --host-cluster-context earth --v=2

In the test, however, I did have to clean up the resources rolled out in the second cluster manually afterward. You can save yourself this step by deleting the cluster at the commercial public cloud provider completely.
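
For the placement edit mentioned above, a merge patch against the federated resource is one option. This sketch drops vulcan from the deployment in Listing 3, after which KubeFed removes the deployment from that cluster:

# kubectl --context earth -n fedtestns patch federateddeployment fedhttp \
    --type=merge -p '{"spec":{"placement":{"clusters":[{"name":"earth"}]}}}'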

Conclusions

Kubernetes Cluster Federation makes it easy to link up one or more clusters as resource extensions. However, this solution does not free you from paying attention to what will be running where, and it is important to set up the extension such that the service's consumers can actually use it. In the case of a special promo in a web store, for example, the customers' requests need to reach the containers in the new cluster in the same way they reached the previously existing clusters; otherwise, federation will not offer you any performance benefits. If you expect such situations to occur frequently, it is advisable to run your own cluster as a federation with a single member from the outset. Then, you only need to complete the steps required for the extension when the time comes.

The Author

Konstantin Agouros works as Head of Networks and New Technologies at Matrix Technology GmbH, where he and his team advise customers on open source, security, and cloud issues. His book Software Defined Networking (in German) is published by de Gruyter.
