The practical benefits of network namespaces

Lightweight Nets

Pivot and Focal Point

Only rarely does the admin need to dive deep into the details, because the ip netns command simplifies handling. The command ships with the iproute2 package and manages the creation and deletion of network namespaces, as well as the distribution of resources between them. Root privileges are, of course, necessary; all commands described here only work when run as the administrator.

The tool's syntax is simple, and fits comfortably with classic ip logic. The commands

ip netns add ns1
ip netns list
ip netns del ns1

add the ns1 namespace, display the existing namespaces, and delete ns1 again (Figure 2). Networks within a namespace are configured with the same ip commands you would use on a single-namespace device; only the ip netns exec <namespace> prefix separates the two cases.

Initially, the complete command looks quite cumbersome, but the logic behind it is simple: it lets applications that are only aware of a single namespace run inside the target namespace without detours. The entry

ip netns exec ns1 ip link set lo up

brings up the loopback interface in the ns1 namespace (a new namespace already contains a loopback device, but it starts out down). The command

ip netns exec ns1 ip route show

displays the routing table of the namespace. However, it is still empty at this point, which is why a DNS query with

ip netns exec ns1 dig -t MX @8.8.8.8 suse.com

still does not produce a result (Figure 2). Figure 3 shows the many network devices present in the host system, but only a single loopback device in the ns1 namespace. You can leave a shell started in the namespace with exit to return to the previous environment.

Figure 3: The admin starts a shell in the ns1 namespace, in which only the previously created loopback interface is present.
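The full lifecycle described so far can be sketched in a few lines. This is a minimal illustration, not taken from the article's listings; the namespace name demo is an arbitrary choice, and root privileges are assumed:

```shell
# Minimal sketch: create a namespace, work inside it, remove it again.
# The name "demo" is purely illustrative; requires root.
[ "$(id -u)" -eq 0 ] || { echo "run as root"; exit 0; }
ip netns add demo
ip netns exec demo ip link set lo up     # bring up the namespace's loopback
ip netns exec demo ip addr show lo       # only lo exists in the new namespace
ip netns del demo                        # clean up
```

Note that ip netns del only removes the namespace's name; the namespace itself disappears once no process uses it anymore.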

For the somewhat wider ranging configuration tasks facing admins, one available option is to open a shell in the namespace with ip netns exec and then submit several commands there consecutively (Listing 1).

Listing 1

Configuring eth1

ip netns exec ns1 bash
ip link set eth1 up
ip addr add 192.168.1.123/24 dev eth1
ip -f inet addr show
exit

More Examples

Figure 4 shows other simple examples of using namespaces. Whether creating another namespace, listing the existing namespaces, or displaying their representation below /var/run/netns with tree, these tasks are child's play:

$ ip netns add ns1
$ ip netns add ns2
$ ip netns list
ns2
ns1
$ tree /var/run/netns
/var/run/netns/
|---ns1
|---ns2
Figure 4: Handling namespaces comes naturally to an experienced Linux admin.

The mount command shows the newly set mountpoints (Listing 2).

Listing 2

View Mountpoints

$ mount | grep netns
tmpfs on /run/netns type tmpfs (rw,nosuid,nodev,mode=755)
proc on /run/netns/ns1 type proc (rw,nosuid,nodev,noexec,relatime)
proc on /run/netns/ns1 type proc (rw,nosuid,nodev,noexec,relatime)
proc on /run/netns/ns2 type proc (rw,nosuid,nodev,noexec,relatime)
proc on /run/netns/ns2 type proc (rw,nosuid,nodev,noexec,relatime)

If you examine the boot procedure of your Linux system closely, you will notice that it already creates an initial namespace, init_net, during bootup. This namespace receives the loopback interface, along with all physical devices and sockets. A newly created namespace, by contrast, contains only a loopback device:

$ ip netns exec ns1 ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default
   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

The ip link set command moves a device that already exists in the host system into a namespace. Because the assignment is exclusive, the device then disappears from the host namespace; in the ns1 namespace, you first need to configure it (Listing 3). This system has two network cards, one of which is now exclusively assigned to ns1. The move is quite rigorous, as a look into the sysfs virtual filesystem demonstrates:

$ tree /sys/class/net
/sys/class/net
|---eth0 -> ../../devices/pci0000:00/0000:00:03.0/net/eth0
|---lo -> ../../devices/virtual/net/lo

The device is no longer listed in the default namespace.

Listing 3

Configuring Devices

$ ip link set eth1 netns ns1
$ ip netns exec ns1 ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 52:54:00:02:e3:f1 brd ff:ff:ff:ff:ff:ff
$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:01:b0:24 brd ff:ff:ff:ff:ff:ff
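A device moved into a namespace can also be moved back; the target netns 1 refers to the network namespace of PID 1, that is, the default namespace. The following round trip is a hedged sketch of this idiom: it uses a veth device as a stand-in for a physical NIC, and all names (move-demo, dummy-a, dummy-b) are arbitrary illustrations; root privileges are assumed:

```shell
# Sketch: move a device into a namespace and back again (requires root).
# A veth pair stands in for a physical card; "netns 1" targets PID 1's
# namespace, i.e., the default namespace.
[ "$(id -u)" -eq 0 ] || { echo "run as root"; exit 0; }
ip netns add move-demo 2>/dev/null || true
ip link add dummy-a type veth peer name dummy-b
ip link set dummy-a netns move-demo                   # device leaves the host view
ip netns exec move-demo ip link set dummy-a netns 1   # ...and comes back
ip link del dummy-a                                   # deleting one end removes the pair
ip netns del move-demo
```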

The routing example in Listing 4 is complete within a few steps: you define interfaces, addresses, and routes, display the results, and run a ping test. The default route defined in line 12 exists only in the ns1 namespace. With /etc/netns/ns1,

$ mkdir -pv /etc/netns/ns1
mkdir: created directory '/etc/netns'
mkdir: created directory '/etc/netns/ns1'
$ echo 1.2.3.4\ mytest | tee /etc/netns/ns1/hosts
$ ip netns exec ns1 getent hosts
1.2.3.4     mytest

you can also maintain namespace-specific configuration files: ip netns exec overlays files below /etc/netns/ns1/ onto their counterparts in /etc for the duration of the command.
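The same mechanism works for other files in /etc, with resolv.conf as the canonical case. The following sketch uses a throwaway namespace named dns-demo (an arbitrary choice, not from the listings above) and assumes root privileges:

```shell
# Sketch: per-namespace DNS configuration. ip netns exec bind-mounts
# /etc/netns/<name>/resolv.conf over /etc/resolv.conf for the command.
# The namespace name "dns-demo" is purely illustrative; requires root.
[ "$(id -u)" -eq 0 ] || { echo "run as root"; exit 0; }
ip netns add dns-demo 2>/dev/null || true
mkdir -p /etc/netns/dns-demo
echo "nameserver 9.9.9.9" > /etc/netns/dns-demo/resolv.conf
ip netns exec dns-demo cat /etc/resolv.conf   # shows only the namespace copy
ip netns del dns-demo                         # clean up
rm -rf /etc/netns/dns-demo
```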

Listing 4

Isolating the Network

01 $ ip netns exec ns1 ip link set lo up
02 $ ip netns exec ns1 ip link set eth1 up
03 $ ip netns exec ns1 ip addr add 192.168.1.123/24 dev eth1
04 $ ip netns exec ns1 ip -f inet addr show
05 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
06    inet 127.0.0.1/8 scope host lo
07       valid_lft forever preferred_lft forever
08 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
09    inet 192.168.1.123/24 scope global eth1
10       valid_lft forever preferred_lft forever
11
12 $ ip netns exec ns1 ip route add default via 192.168.1.1 dev eth1
13 $ ip netns exec ns1 ping -c2 8.8.8.8
14 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
15 64 bytes from 8.8.8.8: icmp_seq=1 ttl=51 time=22.1 ms
16 64 bytes from 8.8.8.8: icmp_seq=2 ttl=51 time=20.1 ms

Namespace Discussion

Virtual Ethernet (veth) devices are the means by which resources in different namespaces communicate with one another. They always come in pairs and function like a pipe: everything the operating system sends into one veth device comes out at the other end (the peer). The session documented in Listing 5 demonstrates how this works.

Listing 5

Virtual Ethernet Devices

01 $ ip netns exec ns1 ip link add name veth1 type veth peer name veth2
02 $ ip netns exec ns1 tree /sys/class/net
03 /sys/class/net
04 |---eth1 -> ../../devices/pci0000:00/0000:00:08.0/net/eth1
05 |---lo -> ../../devices/virtual/net/lo
06 |---veth1 -> ../../devices/virtual/net/veth1
07 |---veth2 -> ../../devices/virtual/net/veth2
08
09 $ ip netns exec ns1 ip link set dev veth2 netns ns2
10 $ ip netns exec ns2 ip link
11 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default
12    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
13 2: veth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
14    link/ether 8e:1b:5d:87:62:db brd ff:ff:ff:ff:ff:ff
15
16 $ ip netns exec ns1 ip addr add 1.1.1.1/10 dev veth1
17 $ ip netns exec ns2 ip addr add 1.1.1.2/10 dev veth2
18 $ ip netns exec ns1 ip link set veth1 up
19 $ ip netns exec ns2 ip link set veth2 up
20
21 $ ip netns exec ns1 ping -c2 1.1.1.2
22 PING 1.1.1.2 (1.1.1.2) 56(84) bytes of data.
23 64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=0.021 ms
24 64 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=0.022 ms

At the beginning, you create the two virtual network devices, veth1 and veth2, then use the readout in line 2 to check that they are present in the namespace. Line 9 moves veth2 into its intended namespace, ns2, which then contains the loopback and the peer interface (lines 10-14).

For communication between the peers to work, both naturally need IP addresses (lines 16 and 17). Now you can bring up the two devices and test connectivity (lines 18-21); clearly, the connection works. If you want to connect namespaces to a physical interface, you can also work with bridges.
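The bridge variant just mentioned can be sketched as follows: two namespaces are joined through a Linux bridge instead of a direct veth pair, with one veth pair per namespace acting as the uplink. All names (nsA, nsB, br-demo, vA, vB, and the 10.99.0.0/24 addresses) are arbitrary illustrations, and root privileges are assumed:

```shell
# Sketch: two namespaces joined via a Linux bridge (requires root).
# Each namespace gets one end of a veth pair; the other end is enslaved
# to the bridge. All names and addresses are illustrative.
[ "$(id -u)" -eq 0 ] || { echo "run as root"; exit 0; }
ip netns add nsA; ip netns add nsB
ip link add br-demo type bridge
ip link add vA type veth peer name vA-br
ip link add vB type veth peer name vB-br
ip link set vA netns nsA; ip link set vB netns nsB
ip link set vA-br master br-demo; ip link set vB-br master br-demo
ip link set br-demo up; ip link set vA-br up; ip link set vB-br up
ip netns exec nsA ip addr add 10.99.0.1/24 dev vA
ip netns exec nsB ip addr add 10.99.0.2/24 dev vB
ip netns exec nsA ip link set vA up
ip netns exec nsB ip link set vB up
ip netns exec nsA ping -c1 10.99.0.2    # traffic crosses the bridge
# clean up: deleting the namespaces also removes the veth pairs
ip netns del nsA; ip netns del nsB; ip link del br-demo
```

A physical interface enslaved to the same bridge would connect the namespaces to the outside network in the same way.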

The next step shows how to make an SSH server available only within this namespace. The command

$ ip netns exec ns1 /usr/sbin/sshd -o PidFile=/run/sshd-ns1.pid -o ListenAddress=1.1.1.1

starts the SSH daemon in the ns1 namespace, passing it a PID file and an IP address on which to listen. The PID file is necessary to distinguish this SSH server from the instance also running in the global namespace. The second SSH server listens only on 1.1.1.1, the IP of the veth1 interface, which is available only in the ns1 namespace (Listing 6).

Listing 6

Listening

$ ps -ef | grep $(cat /run/sshd-ns1.pid)
root  7387  1  0  00:13 ?   00:00:00 /usr/sbin/sshd -o PidFile=/run/sshd-ns1.pid -o ListenAddress=1.1.1.1
$ ip netns exec ns1 ss -ltn
State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
LISTEN  0       128           1.1.1.1:22               *:*

For the last test, an SSH session from ns2 to ns1, one small detail proves useful: because the shell of the SSH session runs inside ns1, you can also configure the namespace there without prefixing commands with ip netns exec (Listing 7).

Listing 7

SSH in Namespaces

$ ip netns exec ns2 ssh 1.1.1.1
$ ip -f inet addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 inet 192.168.1.123/24 scope global eth1 valid_lft forever preferred_lft forever
4: veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 inet 1.1.1.1/10 scope global veth1 valid_lft forever preferred_lft forever
$ ss -etn
State Recv-Q Send-Q  Local Address:Port  Peer Address:Port
ESTAB 0       0              1.1.1.1:22          1.1.1.2:40412   timer:(keepalive,109min,0) ino:34868 sk:ffff880036f8b4c0 <->

