Service mesh for Kubernetes microservices

Mesh Design

Istio Architecture and Components

Istio comprises core data plane and control plane components, which it deploys as services in its own namespace; Figure 3 shows them in a data plane/control plane arrangement. Third-party monitoring apps (Prometheus, Grafana, and Jaeger are included in the demos) are connected by means of adapters, and the ingress and egress gateways control communications with the outside world.

Figure 3: Arrangement of containers in the default namespace (application) and istio-system namespaces.

Envoy [5] is the sidecar proxy that Istio injects into each application namespace container. Envoy provides a comprehensive set of data plane features, including load balancing and weighted routing, dynamic service discovery, TLS termination (meaning services within the application that don't encrypt their traffic can nonetheless enjoy encrypted communication tunneled via Envoy), circuit breaking, health checking, and metrics.
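Several of these Envoy features are configured through Istio's DestinationRule object. As a hedged sketch, circuit breaking for a hypothetical reviews service (the service name and limits here are illustrative, not from this deployment) might look like:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker
spec:
  host: reviews.default.svc.cluster.local   # hypothetical in-mesh service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100                 # cap concurrent TCP connections
      http:
        http1MaxPendingRequests: 10         # cap queued HTTP requests
    outlierDetection:
      consecutiveErrors: 5                  # eject a host after 5 consecutive errors
      interval: 30s
      baseEjectionTime: 60s
```

Envoy enforces these limits in the data plane; requests that exceed them fail fast instead of piling up on an unhealthy service.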

The Mixer component aggregates the metrics received from all the Envoy proxies and configures and monitors the proxies' enforcement of access control and usage policies.

Pilot translates traffic management and routing configuration into specific configurations for each sidecar. It maintains a "model" of the application/mesh for service discovery (i.e., it allows each sidecar to know what other services are available) and generates and applies secure naming information for each pod that is used to identify what service accounts are allowed to access a given service for authorization purposes.
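An example of the kind of routing configuration Pilot translates is a weighted VirtualService. This sketch (assuming subsets v1 and v2 have been defined in a corresponding DestinationRule; the service name is hypothetical) splits traffic 90/10 between two versions:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-weighted
spec:
  hosts:
  - reviews                 # hypothetical service name
  http:
  - route:
    - destination:
        host: reviews
        subset: v1          # subsets assumed defined in a DestinationRule
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Pilot compiles this single resource into per-sidecar Envoy configuration, so every proxy in the mesh applies the same split.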

Citadel provides certificate management and authentication/authorization, both for users accessing services and for services accessing each other. Citadel monitors the Kubernetes API server and creates certificate/key pairs for each application service, which it stores as Kubernetes secrets and mounts to a service's pods when they start up. The pods then use them to identify themselves as legitimate providers of their service in future requests.

Galley ingests user-provided Istio API configurations, validates those configurations, and propagates them to the other Istio components.

The ingress and egress gateways are the perimeter proxies used for routing and access control of the application's external traffic. For a complete Istio deployment, these are used in place of the default Kubernetes LoadBalancer and NodePort service types.
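The ingress gateway is itself configured with an Istio Gateway object. As a minimal sketch (the hostname here is a placeholder, not this article's site), exposing plain HTTP on port 80 might look like:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: app-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example.com"         # placeholder hostname
```

A VirtualService bound to this Gateway then decides which in-mesh services the external traffic reaches.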

Installing Istio on Kubernetes

To get started with Istio, you'll simply need a Kubernetes cluster hosting your microservice app and the Kubernetes command-line tool kubectl running locally. To replicate the demo in this article, you need to run kubectl from a terminal on a desktop, because the telemetry functions rely on accessing dashboards in your web browser by means of localhost port forwarding. If you want to play with Istio but don't have an application to use it with, fear not – the Istio download also contains a sample app.

For this demo, however, I want to try out a service mesh on a live website, so I upgraded my free IBM Cloud account to get the full public load balancer option on a fresh two-node Kubernetes cluster. Although the application will ultimately use Istio's ingress gateway, not the standard Kubernetes load balancer, I still need those real public IPs to make this work.

Next, visit Istio's website [4] and click the Get Started button at the bottom to reach the setup/Kubernetes page. Follow the instructions on the Download page to download and extract the latest release into a local directory, named istio-1.1.0 at the time of writing, and cd into that directory. In the website sidebar, click Install | Quick Start Evaluation Install to see how to install Istio in your cluster. The quick-start installation uses a demo profile that installs the full set of Istio services and some third-party monitoring apps, including Jaeger, Grafana, Prometheus, and Kiali. After installing the Custom Resource Definitions into your cluster, choose one of the two demo profile variants, shown as tabs on the quick-start page:

  • permissive mutual TLS – This option allows both plain text and mTLS traffic between services and is a safe choice for trying out Istio in an environment where the services in your cluster need to communicate with external (non-mesh) services that won't be able to participate in mTLS. This manifest can be found at install/kubernetes/istio-demo.yaml.
  • strict mutual TLS – As the name suggests, this option enforces mTLS for all traffic between services. It's safe to use this when all of your application's services reside in the cluster on which you're installing Istio, because each service will have a sidecar and be able to perform mTLS. This manifest is install/kubernetes/istio-demo-auth.yaml.
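Under the hood, the strict variant enables mesh-wide mTLS with an authentication policy. A sketch of what such a mesh-wide policy looks like in Istio 1.1 (the demo manifest bundles its own copy, so you don't apply this by hand):

```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default        # the mesh-wide policy must be named "default"
spec:
  peers:
  - mtls: {}           # require mutual TLS for all service-to-service traffic
```

The permissive variant differs only in that the mtls entry is set to permissive mode, accepting both plain text and mTLS connections.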

For production installations, you are encouraged to use one of the Helm-based installation methods, described under the Customizable Install with Helm page in the sidebar. These also allow you to fine-tune the components and settings of your Istio installation by setting options as key/value pairs.

  • helm template – generates a manifest .yaml file representing your custom installation. This can then be applied to your cluster with kubectl.
  • helm install – deploys one of the supplied Helm charts (modified with any of your chosen options) directly to your cluster via the Tiller service, which has to be running on your cluster. Tiller is the server-side component of Helm, and together they work like a package manager for Kubernetes clusters, where the packages take the form of charts. This setup makes it easy for a team to manage a customized installation of Istio in a production environment. In some contexts, the elevated permissions that Tiller requires are considered a security risk, so another approach to team management of a production cluster configuration would simply be to put all of the required .yaml manifests under normal version control and deploy them with kubectl.
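Those key/value options can also be collected in a values file passed to either Helm method. A hedged sketch of such an override file (these keys exist in the Istio 1.1 chart, but the particular combination is illustrative):

```yaml
# values-custom.yaml - example overrides for a Helm-based Istio install
global:
  mtls:
    enabled: true          # enforce mTLS mesh-wide
sidecarInjectorWebhook:
  enabled: true            # automatic sidecar injection
gateways:
  istio-egressgateway:
    enabled: false         # example: omit the egress gateway
```

Keeping a file like this under version control documents exactly how your Istio installation differs from the chart defaults.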

The demo manifests, combined with automatic sidecar injection, are sufficient in themselves to add value to any Kubernetes-deployed app. Without needing any extra changes or configuration, they will give you interservice authentication and the metrics, telemetry, and third-party apps Jaeger (request tracing), Prometheus (request metrics), and Grafana (time series graphs of metrics) right out of the box; they also define and automatically generate a comprehensive range of metrics (response times, rates of different HTTP response codes, node CPU usage, etc.). They will not implement any ingress gateway functions or any traffic management of inbound requests.

To take full advantage of Istio's features, significant configuration is necessary; the best way to ascend that learning curve is first to complete each of the Tasks listed on the Istio website and then review the Reference section for an understanding of the different object types, configuration options, and commands available (accessible in the sidebar). Covering them all is certainly beyond the scope of this article; instead, the following WordPress example shows how to configure and verify external security, traffic management, mutual authentication, and telemetry features.

For this demo, I installed the strict mutual TLS demo profile with the command:

$ kubectl apply -f install/kubernetes/istio-demo-auth.yaml

This command creates the istio-system namespace and all of the deployments, services, and Istio-specific objects that make up the Istio mesh. The large amount of output generated by this command gives a detailed view of what exactly is being installed into the cluster. After the command has completed, run

$ kubectl get pods -n istio-system

to check that the containers for Istio's components and third-party apps are running (or, in some cases, have already completed). Figure 4 shows what to expect at this stage. Next, examine Istio's services with

$ kubectl get services -n istio-system
Figure 4: After applying the istio-demo-auth.yaml manifest, check for new pods in the istio-system namespace.

Figure 5 shows the IP addresses and ports for each of Istio's services and the third-party apps installed by the demo manifest. Note the public IP address assigned to istio-ingressgateway; after the ingress gateway is configured, this will be the application's public entry point.

Figure 5: Istio services.

Within the cluster, the web interfaces for the third-party apps (e.g., Grafana) are accessible on the ports listed in Figure 5. To access them from a local machine, the kubectl port forwarding functionality is used, which I demonstrate in the later sections about tracing and metrics.

At this stage, Istio's control plane is fully up and running; however, without a data plane to control, it's useless, so it's time to inject sidecars into the application's pods.

Automatic Sidecar Injection

Making all your pods members of your service mesh is the easiest option (leaving any pods out would somewhat defeat the object of a service mesh, anyway). Automatic injection is already enabled in the demo manifest, controlled by the setting sidecarInjectorWebhook.enabled: true, which causes a sidecar to be injected into any pod created in a namespace that carries the label istio-injection=enabled. So, add that label to your application namespace (the default namespace, in this case), and then check that the label is present:

$ kubectl label namespace default istio-injection=enabled
$ kubectl get namespace -L istio-injection

It doesn't matter whether you choose to install Istio before or after deploying an application. When Istio is installed, the automatic sidecar injection process ensures that the proxy is injected into every pod at creation time. To enable Istio on a preexisting deployment, you have to delete the existing pods in your default namespace after installing Istio and setting up automatic sidecar injection. When Kubernetes recreates the pods, the sidecar will be present and will start working automatically with the control plane.
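If you do need to exempt a particular workload from an injection-enabled namespace, the injector honors a per-pod annotation. A sketch of the relevant excerpt from a Deployment's pod template (the surrounding Deployment is hypothetical):

```yaml
# excerpt from a Deployment spec (hypothetical workload)
template:
  metadata:
    annotations:
      sidecar.istio.io/inject: "false"   # skip sidecar injection for these pods
```

Pods created from this template stay outside the mesh even though their namespace is labeled istio-injection=enabled.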
