Managing network connections in container environments

Switchboard

Traefik as Reverse Proxy

The term "reverse proxy" is probably familiar to administrators of conventional setups from the Nginx context, where Nginx frequently operates in reverse proxy mode. Where an internal system is not allowed to be directly reachable from the Internet, a reverse proxy is a good way of making it accessible from the outside. The reverse proxy can be located in a demilitarized zone (DMZ, perimeter network, screened subnet), for example, and expose a public IP address, including an open port, to the Internet. It forwards connections arriving at this DMZ IP to the server in the background, giving you several options that would not be available without a reverse proxy.
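A minimal Traefik dynamic configuration (file provider, YAML format) sketches this scenario; the hostname `app.example.com` and the internal address `10.0.10.5` are placeholders for illustration:

```yaml
# Hypothetical dynamic configuration for Traefik v2 (file provider).
# Requests for app.example.com arriving at the proxy in the DMZ are
# forwarded to an internal server that is not reachable from outside.
http:
  routers:
    app-router:
      rule: "Host(`app.example.com`)"
      service: app-service
  services:
    app-service:
      loadBalancer:
        servers:
          - url: "http://10.0.10.5:8080"
```

Note that even a single back end is declared under `loadBalancer` in Traefik's configuration model, which foreshadows the load balancing role discussed below.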

With nftables (the successor to iptables), you can restrict access to certain hosts. Reverse proxies are also regularly used as SSL terminators: They expose an HTTPS interface to the outside world without necessarily having to speak SSL with their back ends. The same applies to the ability to control access to a web resource with HTTP authentication. Both features are often used in front of web applications that do not offer these functions themselves.
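Both features map to a router with TLS enabled and a `basicAuth` middleware in Traefik. The following sketch assumes hypothetical names; the user hash is a placeholder you would generate with a tool such as htpasswd:

```yaml
# Hypothetical example: TLS termination plus HTTP basic authentication.
# The back end itself speaks plain HTTP and offers neither feature.
http:
  routers:
    app-router:
      rule: "Host(`app.example.com`)"
      service: app-service
      middlewares:
        - app-auth
      tls: {}                            # terminate HTTPS at the proxy
  middlewares:
    app-auth:
      basicAuth:
        users:
          - "admin:$apr1$examplehash"    # placeholder htpasswd-style hash
  services:
    app-service:
      loadBalancer:
        servers:
          - url: "http://10.0.10.5:8080" # plain HTTP to the back end
```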

Traefik as a Load Balancer

Mentally, the jump from a reverse proxy to a load balancer is not far. A load balancer differs from a reverse proxy in that it distributes incoming requests across more than one back end and has the logic on board to do so sensibly, including different forwarding modes and distribution algorithms.
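In Traefik's configuration, the difference amounts to listing more than one server under a service. A hypothetical sketch with two back ends (addresses are placeholders) could look like this:

```yaml
# Hypothetical service definition with two back ends; Traefik
# distributes incoming requests between them (round robin by default).
http:
  services:
    app-service:
      loadBalancer:
        servers:
          - url: "http://10.0.10.5:8080"
          - url: "http://10.0.10.6:8080"
        healthCheck:          # optional: drop unhealthy back ends from rotation
          path: /healthz
          interval: "10s"
```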

Traefik was originally launched as a reverse proxy, so it was foreseeable that the tool would eventually evolve into a load balancer. Today, Traefik Proxy can be operated not only as a proxy with a single back end, but also as a load balancer with multiple target systems.

Adding Value in the Kubernetes Context

The functional scope of Traefik Proxy described up to this point matches that of a classic reverse proxy and load balancer: The service accepts connections and forwards them to the configured back end, either protocol-agnostically at OSI Layer 4 or, specifically for HTTP/HTTPS, at Layer 7.

Traefik's original killer feature was always its deep integration with Kubernetes: Anyone running their workload in Kubernetes did not need to worry about the proxy configuration; it simply came along as part of the pod definitions from Kubernetes. Moreover, the Traefik developers have added several features that make a lot of sense, especially in the Kubernetes context.

Part of this added value is that Traefik can act as an API gateway. Although the term is not precisely defined, admins typically expect a few basic functions from an API gateway. At its core, it is always a load balancer, but a protocol-aware one (hence OSI Layer 7): It not only understands the passing traffic but also intervenes in it at the admin's request, for example, for upstream authentication and transport encryption with SSL, as already discussed.
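Such interventions are expressed as middlewares in Traefik. The following hypothetical fragment illustrates two typical API gateway duties, rate limiting and stripping an API version prefix before forwarding; the names and values are assumptions for illustration:

```yaml
# Hypothetical middlewares sketching common API gateway tasks.
http:
  middlewares:
    api-ratelimit:
      rateLimit:
        average: 100    # average requests per second allowed
        burst: 50       # short bursts above the average
    api-strip:
      stripPrefix:
        prefixes:
          - "/v1"       # removed from the path before it reaches the back end
```

Middlewares are then attached to a router, so the same back end can be published with different policies through different routers.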

Add to that Traefik's ability to function as an ingress controller in Kubernetes. Ingress controllers are a special provision in the Kubernetes API for external components that handle incoming traffic. By deploying a service as an ingress controller, admins tell Kubernetes to push specific types of traffic, or all traffic, through that controller. This special feature is what characterizes the Traefik implementation as a first-class citizen, as discussed in more detail earlier.
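From the admin's point of view, this means writing an ordinary Ingress object rather than a proxy configuration. A hypothetical manifest (names and host are placeholders) that Traefik would pick up and route might look like this:

```yaml
# Hypothetical Kubernetes Ingress handled by Traefik. Traefik watches
# Ingress objects with the matching ingressClassName and builds its
# routing configuration from them automatically.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: traefik
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 8080
```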
