Lead Image © Beboy, Fotolia.com

Correctly integrating containers

Let's Talk

Article from ADMIN 41/2017
If you run microservices in containers, they need to communicate with each other – and with the outside world. We explain how to network pods and nodes in Kubernetes.

Kubernetes supports several ways of letting containers and microservices contact each other, from connections to the hardware in the data center to the configuration of load balancers. The Kubernetes [1] network model does without Network Address Translation (NAT): All containers receive an IP address and communicate with the nodes and with each other directly.

Therefore, you cannot simply set up two Docker hosts with Kubernetes: The network is a distinct layer that you need to configure for Kubernetes. Several solutions, which are undergoing rapid development just like Kubernetes itself, are candidates for this job. In addition to bandwidth and latency, integration with existing infrastructure and security also play a central role. Kubernetes pulls out all the stops with the protocols and solutions implemented in Linux.

The Docker container solution takes care of bridges, and Linux contributes the IP-IP tunnel, iptables rules, Berkeley packet filters (BPFs), virtual network interfaces, and even the Border Gateway Protocol (BGP), among other things.

Kubernetes addresses networks partly as an overlay and partly as a software-defined network. All of these solutions have advantages and disadvantages when it comes to functionality, performance, latency, and ease of use.

In this article, I limit myself to solutions that I am familiar with from my own work. That doesn't mean the other solutions are not good; however, introducing all of the projects mentioned there [2] would probably fill an entire ADMIN magazine special.

Before version 1.7, Kubernetes implemented only IPv4 throughout; IPv6 support was limited to Kubernetes services. Thanks to Calico [3], IPv6 can also be used for the pods [4]. The Kubernetes network proxy (kube-proxy) was slated to become IPv6-capable with version 1.7, released in June 2017 [5].

The Kubernetes abstraction enables completely new concepts. A network policy [6] regulates how groups of pods talk to each other and to other network endpoints. With this feature you can, for example, set up partitioned zones in the network. It relies on Kubernetes namespaces (not to be confused with those in the kernel) and labels. This kind of abstraction is complex to reproduce in a plain Linux configuration, and not every Kubernetes network solution implements it.
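
The following manifest is a minimal sketch of what such a policy can look like, assuming Kubernetes 1.7 or later and a network plugin that enforces policies; the namespace and labels (zone-db, app: db, role: frontend) are invented for this example. It admits only traffic from frontend pods to the database pods on TCP port 5432:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend
  namespace: zone-db
spec:
  # Pods this policy applies to
  podSelector:
    matchLabels:
      app: db
  ingress:
  # Allow ingress only from pods labeled role: frontend ...
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    # ... and only on the PostgreSQL port
    ports:
    - protocol: TCP
      port: 5432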

Flannel

Flannel (Figure 1) [7] by CoreOS is the oldest and simplest Kubernetes network. It implements two basic principles: It connects the containers with each other and ensures that every node can reach every container.

Figure 1: A Flannel overlay network manages a class C network; the nodes typically use parts of a private network from the 10/8 class A network [8].

Flannel creates a class C network on each node and connects it internally to the Docker bridge docker0 via the flannel0 interface. The Flannel daemon, flanneld, connects to the other nodes through an external interface. Depending on the back end, Flannel transports packets between the pods of different nodes via VXLAN, by way of host routes, or encapsulated in UDP packets. One node sends the packets; its counterpart accepts them and forwards them to the addressed pod.

Each node is given a class C network with 254 usable addresses; externally, flanneld maps these addresses into the larger class B network. In terms of network technology, flanneld thus only rewrites the network mask from B to C and back. Listing 1 shows a simple Flannel configuration.

Listing 1

Flannel Configuration

[...]
{
        "Network": "10.0.0.0/8",
        "SubnetLen": 20,
        "SubnetMin": "10.10.0.0",
        "SubnetMax": "10.99.0.0",
        "Backend": {
                "Type": "udp",
                "Port": 7890
        }
}
[...]
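
In the classic setup, flanneld does not read this JSON from a local file; it expects it under a key in etcd. A minimal sketch, assuming an etcd store with the v2 API and Flannel's default prefix /coreos.com/network:

# Store the network configuration where flanneld looks for it by default
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.0.0.0/8", "SubnetLen": 20, "Backend": { "Type": "udp", "Port": 7890 } }'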

At first glance, this concept looks robust and simple, but it quite quickly reaches its limits, because this network model supports a maximum of 256 nodes. Although 256 nodes is rather a small cluster for a project with the ambitions of Kubernetes, such a cluster can be integrated easily into the virtual private cloud (VPC) networks of some cloud providers. For more complex networks, Kubernetes supports a concept called the Container Network Interface (CNI), which allows the configuration of various kinds of network environments.

Kubernetes with CNI

Listing 2 shows a sample configuration for CNI. In addition to the version number, it sets a unique name for the network (dbnet), determines the type (bridge), and otherwise lets the admin define the usual parameters, such as the subnet, the gateway, and a list of name servers. IP address management (IPAM) takes care of assigning addresses and can integrate DNS and DHCP, among other things.

Listing 2

CNI Configuration

[...]
{
  "cniVersion": "0.3.1",
  "name": "dbnet",
  "type": "bridge",
  // type (plugin) specific
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    // ipam specific
    "subnet": "10.1.0.0/16",
    "gateway": "10.1.0.1"
  },
  "dns": {
    "nameservers": [ "10.1.0.1" ]
  }
}
[...]
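
For Kubernetes to use such a configuration, it must be placed where the kubelet looks for it. A sketch, assuming the default paths and the kubelet flags of the Kubernetes versions current at the time of writing (the file name 10-dbnet.conf is arbitrary):

# The configuration from Listing 2 goes into the CNI config directory,
# for example /etc/cni/net.d/10-dbnet.conf; the plugin binaries
# (bridge, host-local, ...) live in /opt/cni/bin by default.
kubelet --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin \
  [...]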

When it launches a container, CNI can run a number of plugins in succession: In a well-defined chain, the JSON output of one plugin is forwarded as input to the next. This procedure is described in detail in the CNI GitHub repository [9].
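
Such a chain is typically declared as a list of plugins in a single file with the .conflist extension. The following sketch is only an illustration of the principle; it combines the bridge plugin from Listing 2 with the portmap plugin from the CNI plugins collection:

{
  "cniVersion": "0.3.1",
  "name": "dbnet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": {
        "type": "host-local",
        "subnet": "10.1.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}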

Calico Project

Calico [3] looks at the relationship between nodes and containers. The idea is to transfer the concepts and relationships of the data center to the host: The node is to the pod what the data center is to the host. In terms of dynamics and life cycle, however, containers outpace data center hardware by many orders of magnitude.

For Calico, you have two possible installation approaches. A custom installation launches Calico in the containers themselves, whereas a host installation uses systemd to launch all services. In this article, I describe the custom installation, which is more flexible but can result in tricky bootstrapping problems, depending on the environment and versions involved.

Calico uses CNI to set itself up as a network layer in Kubernetes. You can find a trial version with Vagrant at the Calico GitHub site [10]. In addition to the configuration described by CoreOS [11], a custom installation lets you configure and operate Calico entirely with Kubernetes' own on-board tools. The actual configuration resides in a ConfigMap [12].
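
The following is a sketch of what such a ConfigMap can look like; the keys follow the hosted calico.yaml manifests of that era, and the etcd endpoint is an invented example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: calico-config
  namespace: kube-system
data:
  # etcd datastore that Calico uses as its back end
  etcd_endpoints: "http://10.96.232.136:6666"
  # BIRD distributes the routes via BGP
  calico_backend: "bird"
  # CNI configuration that Calico writes to /etc/cni/net.d on each node
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "type": "calico",
      "etcd_endpoints": "__ETCD_ENDPOINTS__",
      "ipam": { "type": "calico-ipam" },
      "kubernetes": { "kubeconfig": "__KUBECONFIG_FILEPATH__" }
    }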

A DaemonSet [13] launches the Calico node pod, which contains the Felix [14] per-host daemon, and guarantees that it runs exactly once on each node (Figure 2). Because Calico implements the Kubernetes network policy [6] with Felix, Kubernetes additionally launches a replica of the Calico policy controller [15].
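
An abbreviated sketch of such a DaemonSet follows; the API version matches Kubernetes 1.7, whereas the image tag and environment variable are examples borrowed from the hosted Calico manifests of that time:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: calico-node
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      # calico-node needs direct access to the host's network stack
      hostNetwork: true
      containers:
      - name: calico-node
        image: quay.io/calico/node:v1.3.0
        env:
        # Read the etcd endpoints from the ConfigMap shown above
        - name: ETCD_ENDPOINTS
          valueFrom:
            configMapKeyRef:
              name: calico-config
              key: etcd_endpoints
[...]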

Figure 2: Calico builds a network on the node with BGP and iptables, as in a data center.
