The practical benefits of network namespaces

Lightweight Nets

Building Bridges

Along with the handy tunnel that veth devices provide, bridges offer advantages for anyone wanting to connect namespaces to the real network. Listing 8 shows how a virtual device can be connected with a real Ethernet device, eth0. First, you delete one of the existing virtual devices and create it afresh in the default namespace. Because the two devices are connected by the peer directive, deleting veth1 automatically removes its peer, veth2, as well.

Listing 8

Building a Bridge

01 $ ip netns exec ns1 ip link delete veth1
02 $ ip link add name veth1 type veth peer name veth2
03 $ ip link
04 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
05    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
06 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
07    link/ether 52:54:00:01:b0:24 brd ff:ff:ff:ff:ff:ff
08 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
09    link/ether 52:54:00:02:e3:f1 brd ff:ff:ff:ff:ff:ff
10 7: veth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
11    link/ether c6:27:2a:a8:06:ca brd ff:ff:ff:ff:ff:ff
12 8: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
13    link/ether 26:3e:ce:c5:de:de brd ff:ff:ff:ff:ff:ff
15 $ ip -f inet addr show eth0
16 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
17    inet brd scope global eth0
18       valid_lft forever preferred_lft forever
20 $ ip addr del dev eth0
21 $ brctl addbr br0
22 $ ip addr add dev br0
23 $ ip link set br0 up
24 $ ip -f inet addr show br0
25 9: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
26    inet scope global br0
27       valid_lft forever preferred_lft forever
29 $ brctl addif br0 eth0
30 $ brctl addif br0 veth1
31 $ brctl show
32 bridge name   bridge id           STP enabled   interfaces
33 br0           8000.263ecec5dede   no            eth0
34                                                 veth1

First, a Demolition

Before you can add one of the Ethernet devices installed in the host to a bridge, you must remove its IP address. Line 15 displays the Ethernet device's configuration, and line 20 deletes it. The commands in lines 21 and 22 set up the br0 bridge and assign it an address, and line 23 starts it. Next, the physical and virtual devices are attached to the bridge in lines 29 and 30, which line 31 verifies.
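The brctl tool comes from the legacy bridge-utils package; on current systems, the same setup can be done with ip alone. A sketch of the equivalent commands (interface names taken from Listing 8; the address shown is a placeholder you would replace with your own configuration):

```shell
# Create the bridge and start it (replaces brctl addbr br0)
ip link add name br0 type bridge
ip addr add 192.0.2.10/24 dev br0   # placeholder address
ip link set br0 up

# Attach the physical and virtual devices (replaces brctl addif)
ip link set eth0 master br0
ip link set veth1 master br0

# Verify the bridge and its ports (replaces brctl show)
ip link show master br0
```

The `master` keyword does the work of brctl addif; `ip link show master br0` then lists all enslaved ports.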

For the final step, the veth2 device is moved into the ns1 namespace and gets an IP address before the devices are started:

$ ip link set veth2 netns ns1
$ ip netns exec ns1 ip addr add dev veth2
$ ip netns exec ns1 ip link set lo up
$ ip netns exec ns1 ip link set veth2 up

In the tests for this article, the final steps failed inexplicably on some systems: the veth2 interface seemed to be down, and the pings did not arrive. On other systems, everything worked without complaint.
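If you run into this, checking the operational state of both ends and the bridge port status usually narrows the problem down. A few read-only diagnostic commands (names as in the listings above; the ping target is a placeholder address):

```shell
# Operational state of veth2 inside ns1 (look for "state UP" and LOWER_UP)
ip netns exec ns1 ip link show veth2

# Is the bridge itself up, and is veth1 attached to it?
ip link show br0
brctl show br0

# Test reachability from inside the namespace (placeholder address)
ip netns exec ns1 ping -c 3 192.0.2.1
```

A veth interface only reports LOWER_UP once its peer is also up, so bringing up both ends is a common missing step.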

Cleaning Up

Because network namespaces are not persistent, you can start again with a clean slate after a reboot if you completely mess things up while experimenting. However, this also means that on production systems you need a startup script to recreate your namespaces after each boot.
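What such a startup script might look like, sketched for the ns1 setup from the earlier listings (the device names follow the listings, but the addresses are assumptions you would adapt):

```shell
#!/bin/sh
# Recreate the ns1 namespace and its veth pair after a reboot.
set -e

ip netns add ns1
ip link add name veth1 type veth peer name veth2
ip link set veth2 netns ns1

# Placeholder addresses - substitute your own configuration.
ip addr add 10.0.0.1/24 dev veth1
ip netns exec ns1 ip addr add 10.0.0.2/24 dev veth2

ip link set veth1 up
ip netns exec ns1 ip link set lo up
ip netns exec ns1 ip link set veth2 up
```

Hooked into your init system, this restores the namespace topology on every boot.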

Without a reboot, you can also delete the configuration by returning the devices to the init_net namespace:

$ ip netns exec ns1 ip link delete veth1
$ ip netns exec ns1 ip link set eth1 netns 1
$ ip netns del ns1
$ ip netns del ns2

Deleting a namespace also automatically removes its entry in /var/run/netns; the operating system unmounts it and removes the mountpoint, which normally brings the real devices back into the default namespace. However, this does not always work cleanly: If you delete a namespace before the processes inside it have ended, you are headed for trouble with the mountpoints.

Normally, an error message (mountpoint in use) prevents this situation. Unfortunately, the message does not always appear, in which case the device belonging to the deleted namespace is lost along with the processes inside it.

You should therefore always check the PIDs and manually kill the affected processes first, if necessary. The following command searches the /proc/ directory and finds all processes that belong to the network namespace in question:

ip netns pids <namespace>

You can see this command in action in Listing 9 (line 1). Line 4 kills the processes in the ns1 namespace. Conversely, the identify subcommand finds the namespace a given process belongs to with the aid of its PID (line 6). The monitor command (line 8) helps you understand what modifications your commands have made: The larger the environment, the more helpful this approach becomes.

Listing 9

Manual Cleanup

01 $ ps auxww | grep $(ip netns pids ns1)
02 root  7811  0.0  0.0  46900  1024 ?   Ss   01:05   0:00   /usr/sbin/sshd -o PidFile=/run/ -o ListenAddress=
04 $ ip netns pids ns1 | xargs kill
05 $ ip netns del ns1
06 $ ip netns identify 1445
07 ns1
08 $ ip netns monitor
09 add ns1
10 add ns2
11 delete ns2
12 delete ns1
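The kill-then-delete sequence from Listing 9 can be wrapped in a small helper that refuses to remove a namespace while processes are still running in it. A sketch (the nsdel name is made up):

```shell
#!/bin/sh
# nsdel: kill all processes in a network namespace, then delete it.
ns="$1"
[ -n "$ns" ] || { echo "usage: nsdel <namespace>" >&2; exit 1; }

pids=$(ip netns pids "$ns")
if [ -n "$pids" ]; then
    echo "killing processes in $ns: $pids"
    kill $pids
    sleep 1
    # Escalate if anything survived the TERM signal.
    pids=$(ip netns pids "$ns")
    [ -n "$pids" ] && kill -9 $pids
fi

ip netns del "$ns"
```

Sending SIGTERM first gives daemons such as the sshd instance from Listing 9 a chance to shut down cleanly before the namespace disappears.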
