Lead Image © Morganimation, fotolia.com

The practical benefits of network namespaces

Lightweight Nets

Article from ADMIN 34/2016
With network namespaces, you can create very sophisticated and resource-saving setups using the tools inside a running Linux system – without the use of containers.

Linux Containers (LXC) [1] and Docker [2], as well as software-defined network (SDN) solutions [3], make extensive use of Linux namespaces, which allow you to define and use multiple virtual instances of the resources of a host and kernel. At this time, Linux namespaces include Cgroup, IPC, Network, Mount, PID, User, and UTS.

Network namespaces have been in the admin's toolkit, ready for production, since kernel 2.6.24. In container solutions, network namespaces allow individual containers exclusive access to virtual network resources, and each container can be assigned a separate network stack. However, the use of network namespaces also makes great sense independent of containers.

From Device to Socket

With network namespaces, you can virtualize network devices, IPv4 and IPv6 protocol stacks, routing tables, ARP tables, and firewalls separately, as well as /proc/net, /sys/class/net/, QoS policies, port numbers, and sockets in such a way that individual applications can find a particular network setup without the use of containers. Several services can use namespaces to connect without conflict to the same port on the very same system, and each is able to hold its own routing table.

A typical use case is avoiding asymmetrical routing – for instance, if you manage a server through a separate interface in a dedicated admin network because you want to keep administrative traffic away from the production network (Figure 1). A client addressing the admin interface of the server is (rightly) routed via the admin router; with a single classic routing table, however, the server's replies would leave via the production interface, producing asymmetrical return traffic. The task becomes easy if the admin interface exists only in its own, self-contained network namespace and maintains its own routing table.

Figure 1: Asymmetrical routing can be relatively easily avoided with namespaces.
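The setup from Figure 1 can be sketched with iproute2 commands. This is a hypothetical example (it requires root; the interface name eth1, the namespace name admin, and the addresses from the 192.0.2.0/24 documentation range are all placeholders for your own values):

```shell
# Create a self-contained namespace for administrative traffic
ip netns add admin
# Move the admin interface into it; eth1 then no longer
# exists in the default namespace
ip link set eth1 netns admin
# Configure the interface and a default route inside the namespace
ip netns exec admin ip addr add 192.0.2.10/24 dev eth1
ip netns exec admin ip link set lo up
ip netns exec admin ip link set eth1 up
# Replies to admin clients now follow this table, completely
# independent of the production routing table
ip netns exec admin ip route add default via 192.0.2.1
```

Because the admin namespace has its own routing table, replies to admin clients can only ever leave through eth1, and the asymmetry disappears by construction.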

In practice, virtual veth<x> network devices are fairly typical for network namespaces. Whereas a physical device can only reside in one network namespace at a time, virtual devices come in pairs that work like a pipe: a packet sent into one end emerges from the other. By placing the two ends in different namespaces, or attaching one end to a bridge, you can build a kind of tunnel between the namespaces you created on your host.
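A minimal sketch of such a veth "pipe" between two namespaces might look like this (requires root; the names ns1, ns2, veth-a, veth-b and the 10.0.0.0/24 addresses are example values):

```shell
ip netns add ns1
ip netns add ns2
# Create the veth pair; the two devices are joined back to back
ip link add veth-a type veth peer name veth-b
# Push one end into each namespace
ip link set veth-a netns ns1
ip link set veth-b netns ns2
# Address and activate both ends
ip netns exec ns1 ip addr add 10.0.0.1/24 dev veth-a
ip netns exec ns2 ip addr add 10.0.0.2/24 dev veth-b
ip netns exec ns1 ip link set veth-a up
ip netns exec ns2 ip link set veth-b up
# Traffic now flows through the pipe between the namespaces
ip netns exec ns1 ping -c 1 10.0.0.2
```

Attaching one end of a pair to a bridge in the default namespace instead of a second namespace gives the namespace access to a physical network.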

Namespaces API

For the following examples to function, the kernel needs to be compiled with the option CONFIG_NET_NS=y. The command

zgrep CONFIG_NET_NS /proc/config.gz

tests whether this is so (Figure 2). The namespaces API comprises three system calls – clone(2), unshare(2), and setns(2) – as well as a few /proc entries. Processes in user space open the files under /proc/<PID>/ns/; as long as such a file descriptor remains open, the corresponding namespace is kept alive. Each namespace is identified by the inode number that /proc assigns to it on creation.

Figure 2: The CONFIG_NET_NS kernel parameter must be switched on so that the /proc system can transport information via namespaces.

The clone system call normally creates a new process. However, if clone is passed the CLONE_NEWNET flag, it creates not only a child process but also a new network namespace, of which the new process becomes a member. The unshare call, given the CLONE_NEWNET flag, moves the calling process into a newly created network namespace, and setns allows the calling process to join an existing namespace.
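The util-linux package mentioned below wraps two of these calls in command-line tools, so you can try them without writing any C. A short sketch (requires root; /proc/1 stands in for any process whose namespace you want to join):

```shell
# unshare(2) with CLONE_NEWNET: run a command in a fresh network
# namespace - it sees only an unconfigured loopback device
unshare --net ip link show

# setns(2): join the network namespace of an existing process,
# here PID 1, via its /proc/<PID>/ns/net handle
nsenter --net=/proc/1/ns/net ip link show
```

The first command demonstrates how empty a new namespace is: no physical devices, no addresses, just a downed lo interface.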

The handle for a network namespace is /proc/<PID>/ns/net; since kernel 3.8, these entries are symbolic links. The link target comprises a string with the namespace type and the inode number:

$ readlink /proc/$$/ns/net

Bind mounts (mount --bind) can keep a network namespace alive even after all processes within the namespace have terminated. Opening the file (or a file bind mounted on it) returns a file descriptor for the relevant namespace, which you can then pass to setns to switch into it.
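This bind-mount trick is essentially what ip netns add does internally. A sketch of the mechanism by hand (requires root; the name persistent is an example):

```shell
# ip netns keeps its handles below /var/run/netns
mkdir -p /var/run/netns
touch /var/run/netns/persistent
# Create a new network namespace and pin it to the mount point;
# $$ expands inside the new namespace's shell
unshare --net sh -c 'mount --bind /proc/$$/ns/net /var/run/netns/persistent'
# The shell has exited, but the namespace lives on and is
# now visible to the ip tool like any other named namespace
ip netns exec persistent ip link show
```

Unmounting and removing the file releases the last reference, at which point the kernel discards the namespace.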

Configuring the Namespace

Linux applications that are namespace-aware first look for global configuration data under /etc/netns/<namespace> and then fall back to /etc. Numerous userspace tools can control interactions with network namespaces: ethtool, iproute2 (which also provides the ip management tool), iw for wireless connections, and util-linux.

Using namespaces is easy. For example, to give a separate network namespace named ns1 its own isolated DNS resolver configuration, store a per-namespace resolv.conf (the nameserver line needs the address of your resolver; 192.0.2.53 is a placeholder):

sudo mkdir -p /etc/netns/ns1
echo "nameserver 192.0.2.53" | sudo tee /etc/netns/ns1/resolv.conf
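When a process is then started inside the namespace with ip netns exec, iproute2 bind mounts the per-namespace file over the global one, so the process transparently sees its own resolver. A quick check (requires root; assumes the ns1 configuration directory from above exists):

```shell
sudo ip netns add ns1
# Inside ns1, /etc/resolv.conf is the file from /etc/netns/ns1/
sudo ip netns exec ns1 cat /etc/resolv.conf
# The default namespace still sees the global file
cat /etc/resolv.conf
```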

This approach can even be extended to applications that are not namespace-aware: you simply create a mount namespace in addition and bind mount all the files relevant to the network namespace into their usual locations (typically under /etc). Nothing then conflicts with other processes, which provides considerable flexibility.
