Native serverless computing in Kubernetes

Go Native


If you are not at home in the cloud native world, you might not fully understand the central concepts of the solution. A few practical examples should help provide a better understanding of the ideas behind Knative as a whole. Conveniently, the Knative developers provide quite a few examples of various functions [2], so you do not have to search for them. One of these examples is a classic from the developer world: a web server that outputs Hello World! [3].

The example comprises two components and is written in Go. The helloworld.go file contains the service, which listens on port 8080 and returns Hello World! as an HTTP response (Figure 2) as soon as someone contacts the service. Far more important is the service.yaml service definition for Knative, because it is what makes the application palatable to Knative in the first place (Listing 1).

Listing 1

service.yaml for Hello World!

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: docker.io/{username}/helloworld-go
          env:
            - name: TARGET
              value: "Go Sample v1"
Figure 2: The Hello World! example only scratches the surface of what Knative can do. But the pod definition for the service is already far easier than a manual approach in Kubernetes. © Neeharika Kompala/GitConnected
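The Go side of the sample can be pictured in a few lines. The following is a minimal sketch in the spirit of the Knative Hello World sample, not the sample code itself; the greeting() helper is an illustrative name, and an in-memory test server is used here instead of a real listener so the sketch runs and terminates on its own:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"os"
)

// greeting builds the response body. Like the Knative sample, it reads
// the TARGET environment variable and falls back to "World" if unset.
func greeting(target string) string {
	if target == "" {
		target = "World"
	}
	return fmt.Sprintf("Hello %s!", target)
}

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, greeting(os.Getenv("TARGET")))
}

func main() {
	// A real Knative service would call http.ListenAndServe(":8080", nil);
	// an httptest server keeps this sketch self-contained.
	srv := httptest.NewServer(http.HandlerFunc(handler))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body))
}
```

With TARGET set to Go Sample v1, as in Listing 1, the response becomes Hello Go Sample v1! instead.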

Anyone who has at least looked at code snippets for Kubernetes will quickly understand: The sample uses the Knative Service resource to create a service named helloworld-go in the default namespace. The service specification states that the {username}/helloworld-go image from docker.io (i.e., Docker Hub) runs the service. The TARGET environment variable is also set, containing the value Go Sample v1.

If you apply this file to Kubernetes with Knative, Knative launches the container in a pod through Kubernetes and makes it accessible from the outside. This is where it also becomes clear why Knative sees itself as a solution for operating serverless architectures: Many of the parameters you would have to specify in Kubernetes for a normal container without Knative support are implemented autonomously by Knative without user intervention. You can focus on your applications without having to worry about the details of Kubernetes or of running containers in it, and the same applies to operational tasks. Out of the box, for example, Knative scales the pod up or down as a function of the incoming load so that an appropriate number of instances of the respective pod are running – or none at all. The scaling behavior can be completely predefined.
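Predefining the scaling behavior boils down to annotations on the revision template. A sketch follows; the annotation keys match the Knative autoscaling documentation, but the values are chosen purely for illustration:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    metadata:
      annotations:
        # Never fall below one instance (disables scale-to-zero) ...
        autoscaling.knative.dev/min-scale: "1"
        # ... and never run more than five.
        autoscaling.knative.dev/max-scale: "5"
        # Aim for 50 concurrent requests per instance.
        autoscaling.knative.dev/target: "50"
    spec:
      containers:
        - image: docker.io/{username}/helloworld-go
```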

A second example shows how events can be intercepted and processed in Knative (Figure 3). The entire wealth of Knative functions is available, such as automatic horizontal scaling or specific routing between Kubernetes and a service, right through to the use of external routing services.

Figure 3: Knative Eventing provides a broker service for events that receives incoming events as a sink and forwards them to defined targets, which means that developers can build a system of events and responses into their services. © Knative

The standard example is slightly simpler. It is again based on Go and implements an HTTP service that opens a port and then waits for incoming messages. The implementation exists for purely academic reasons and is therefore deliberately minimal. If you send the running service a name in the body of an HTTP request, you get back a Hello <Name>! – but, and this is the important bit, not because the service itself uses a function for the response. Instead, a request to the service triggers an event in the Knative API that is passed to a specially defined sink. In the example, the running application itself serves as the sink. On the code side, it then responds by sending back a Hello with the matching HTTP body when a Knative event is received.
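The receiving side of such a service can be illustrated with stdlib Go. Note that this is not the Knative sample itself – the sample uses the CloudEvents Go SDK to receive events – but a self-contained sketch of the idea: a handler that takes a name from the request body and answers with a greeting. The respond() helper is an illustrative name:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// respond answers an incoming event whose body carries a name.
// The Knative sample does this via the CloudEvents SDK; plain HTTP
// keeps the sketch free of external dependencies.
func respond(name string) string {
	name = strings.TrimSpace(name)
	if name == "" {
		name = "World"
	}
	return fmt.Sprintf("Hello %s!", name)
}

func handler(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	fmt.Fprint(w, respond(string(body)))
}

func main() {
	// In-memory server so the sketch runs standalone.
	srv := httptest.NewServer(http.HandlerFunc(handler))
	defer srv.Close()

	resp, err := http.Post(srv.URL, "text/plain", strings.NewReader("Knative"))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // Hello Knative!
}
```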

The key point here is the integration of the event broker and event sink into Kubernetes itself, which lacks the entire infrastructure for managing events – Knative provides this instead. Listing 2 shows the Knative service definition for the sample service.
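The wiring between broker and sink is itself expressed as a Kubernetes resource. A sketch of a Knative Trigger follows; the field names match the Knative Eventing API, but the trigger name and event type are illustrative:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: cloudevents-trigger
  namespace: default
spec:
  broker: default
  filter:
    attributes:
      # Only forward events of this (illustrative) type.
      type: dev.example.greeting
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: cloudevents-go
```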

Listing 2

service.yaml for Services

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: cloudevents-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: ko://
          env:
            - name: K_SINK
              value: http://default-broker.default.svc.cluster.local

Admittedly, these two examples do not come close to demonstrating the power offered by the Knative feature set. The project lists quite a few more examples in its documentation, including both the Kubernetes integration code and the application code. These examples give developers a better idea of the solution's capabilities, and specific examples of autoscaling applications can also be found.

Moreover, it is quite remarkable that Knative implements all the required functions itself on the basis of built-in K8s resources. Autoscaling, for example, relies exclusively on basic features that Kubernetes includes anyway. If you want to use Knative with external extensions (e.g., Istio) because you are already familiar with them, you will find instructions online that describe the steps. Knative is extremely powerful, but also plays well with both internal and external solutions.

At this point, also remember that Knative launches quite a few services for its own operations in an active K8s cluster. The Activator plays the central role: It receives most of the requests destined for services managed by the Knative CRDs and acts as something like a central switchboard (Figure 4).

Figure 4: The Activator plays a crucial role in Knative. It fields most of the incoming commands and processes them or forwards them to the appropriate Knative services. © Knative

The Renegade Son

As mentioned at the beginning, Knative originally comprised three components: Serving, Eventing, and Building. The Building component, however, was something of an outsider from the very beginning. The Knative developers have always distinguished between operating applications and the build process, which is understandable from a logical point of view and completely correct if you think it through. Although the Serving and Eventing layers of Knative can hardly be used without one another, the path and approach used to create the artifacts to be operated are basically irrelevant to Knative itself. Some time ago, the project got down to business and outsourced the Building component to a separate project; it has been operating as Tekton ever since.

However, to this day Tekton can't quite conceal its family ties to Knative. Under the hood, the component's architecture is similar to that of Knative's Serving and Eventing. Like Knative, Tekton extends an existing K8s cluster to include a number of CRDs – in this case, for creating and building applications. Not to be outdone by the competition in terms of marketing, the Tekton developers now describe this integration as a CI/CD pipeline for cloud native environments.

The focus is still on serverless applications, but Tekton can also be used to build other applications within Kubernetes. To do this, it relies on pipelines that it creates and configures in Kubernetes (Figure 5). Argo CD, for example, is a separate CI/CD system and does not have much to do with Tekton; however, the two can be teamed up to integrate artifacts created in Argo CD directly into Kubernetes.

Figure 5: Tekton implements pipelines and infrastructure at the K8s API level to build application images to run in Kubernetes. © IBM

If you then add Knative, you can perform party tricks, such as building an image on demand after a commit to a Git repository, with the image then being automatically rolled out to the production environment. The key factor that sets Tekton apart from quite a few competing products is the ability to build a container with a single command to the K8s API. Unlike Argo CD, Tekton provisions all resources and infrastructure for upcoming build tasks independently and autonomously in Kubernetes. It also cleans up afterward, if desired.
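For a feel of how build logic lands in the K8s API, a minimal sketch of a Tekton Task follows. The kind and field names follow the Tekton Pipelines API; the task name, image, and script are illustrative:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: build-image
spec:
  params:
    - name: repo-url
      type: string
  steps:
    - name: clone
      image: alpine/git
      script: |
        # Fetch the sources to build.
        git clone "$(params.repo-url)" /workspace/src
        # A real pipeline would hand /workspace/src to an image
        # builder step (e.g., Kaniko or Buildah) in a further step.
```

Applying such a Task and running it via a TaskRun is all it takes – Tekton schedules the step containers itself, which is exactly the "build with a command to the K8s API" capability described above.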

Tekton closes a gap in this respect. When reading the paragraphs on Serving and Eventing, many experienced K8s admins may have been bothered that a ready-made Docker container with the application must already be available, which Knative then launches as an instance in the running cluster. Although Knative itself has little to do with CI/CD, it is precisely this CI/CD factor that plays a significant role in Kubernetes. Tekton builds the Docker image from the sources, and Serving and Eventing then run the resulting container.


Knative proves to be a powerful tool and really turns up the heat when paired with Tekton. At the moment, this combination is the only way to achieve true CI/CD directly in Kubernetes. Other solutions might ultimately give you the same results, but that means operating the infrastructure outside of Kubernetes without the ability to control it from the Kubernetes API. If you value a solution from a single source, Knative and Tekton are the right choice.

Boundless euphoria is not the order of the day, however. Many external tools offer functions that Knative, for systemic reasons, cannot implement within Kubernetes. At this point, admins and developers need to test the available alternatives and choose the tool that best fits their own requirements. In many cases, this is likely to be Knative, but there are exceptions.

The Author

Freelance journalist Martin Gerhard Loschwitz focuses primarily on topics such as OpenStack, Kubernetes, and Chef.
