Software-defined networking in OpenStack with the Neutron module

Neutron Dance

Inspecting the Agents

What exactly do the individual Neutron agents on the hosts do? How does a virtual switch work in concrete terms?

Clouds distinguish between two types of traffic. Local traffic flows back and forth between VMs; the big challenge here is to implement virtual networks across node borders. If multiple VMs belonging to one customer are running on nodes 1 and 2, they need to be able to talk to each other. The other kind of traffic is Internet traffic: traffic from VMs that want to talk to the outside world or need to receive incoming connections.

The two concepts of overlay and underlay are central in the SDN cloud context. They refer to two different levels of the network: The underlay comprises all network components required for communication between the physical computers. The overlay sits on top of the underlay; it is the virtual space in which customers create their virtual networks. The overlay thus uses the network hardware of the underlay. In other words, the overlay uses the same physical hardware that the services on the computing nodes use to communicate with other nodes.

To avoid chaos, all SDN solutions on the market support at least some type of encapsulation: For example, you can use Open vSwitch with Virtual Extensible LAN (VxLAN) or Generic Routing Encapsulation (GRE). VxLAN is an extension of the VLAN concept intended to improve scalability; GRE offers very similar functions.
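To get a feel for what such encapsulation looks like in practice, the following sketch creates a tunnel port manually with Open vSwitch. The bridge name, port names, and peer address are placeholders chosen for illustration; in a real cloud, the Neutron agent sets this up for you.

# create a tunnel bridge and add a VxLAN port pointing to a peer host
# (bridge name and peer IP are placeholders)
ovs-vsctl add-br br-tun
ovs-vsctl add-port br-tun vxlan0 -- set interface vxlan0 \
    type=vxlan options:remote_ip=192.0.2.11 options:key=flow

# the GRE variant looks almost identical
ovs-vsctl add-port br-tun gre0 -- set interface gre0 \
    type=gre options:remote_ip=192.0.2.11 options:key=flow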

Important: Neither of these solutions supports encryption; they are therefore not virtual private networks (VPNs). The only reason for encapsulation in the underlay of a cloud is to separate the traffic of the virtual networks from the traffic of the services running on the physical hosts.

ID Assignments in the Underlay

For local traffic to flow from the sender to the receiver, Neutron agents enable virtual switches on the computing nodes. These switches rely on the Open vSwitch kernel module. At the host level, virtual switches look like normal bridge interfaces – however, they have a handy extra feature: They let you add IDs to packets. A fixed ID is associated with each virtual network that exists in the cloud. During operation, the virtual network adapters of the VMs are connected to the virtual switch, and each packet arriving from a VM is stamped with the ID of the associated network.
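As a rough sketch of what the Neutron agent does behind the scenes – the bridge name, interface name, and tag here are made up for illustration – a VM port can be attached to an Open vSwitch bridge with a tag like this:

# attach a VM's tap interface to the integration bridge and stamp
# its traffic with the ID of its virtual network (names are placeholders)
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int tap-vm1 tag=101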

The process works much like tagged VLANs. But in the SDN cloud context, the switch that applies the tag is the bridge that is connected to the VM. And because agents within the cloud access the same configuration, the IDs used for specific networks are the same on all hosts.

On the other side of the bridging interface is encapsulation, typically GRE or VxLAN. Although both protocols build tunnels, they are stateless: A full mesh of active connections is not created. SDN solutions use these encapsulated links to handle traffic with other hosts; traffic for VMs on another node is always routed through the tunnel.
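On a running node, you can inspect which tunnel ports the SDN solution has created toward other hosts; the bridge name br-tun is an assumption that depends on your setup:

# list the tunnel ports on the tunnel bridge
ovs-vsctl list-ports br-tun
# show name, type, and peer options of all interfaces
ovs-vsctl --columns=name,type,options list interface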

A concrete example helps clarify the process: Assume a VM named db1 belongs to Network A of customer 1. Its NIC is connected to the Open vSwitch bridge on the computing node, and the VM's port on that bridge has an ID of 5. All packets that arrive at the virtual switch through this port are thus automatically tagged with this ID.

If the VM db1 now wants to talk to the server web1, the next step depends on where web1 is running: If it is running on the same computing node, the packet arrives at the db1 port on the virtual switch and leaves via the port of the VM web1 – which has the same ID. If web1 is running on another system, the packet enters the GRE or VxLAN tunnel and reaches the virtual switch on the remote node, which checks the ID and finally forwards the packet to the destination port.
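To trace this on a real node, you could check which ID db1's port carries and look at the flows on the tunnel bridge; the interface and bridge names below are assumptions that depend on your deployment:

# which local ID is stamped onto db1's traffic? (5 in this example)
ovs-vsctl get port tap-db1 tag
# which flows map local IDs to tunnel keys and back?
ovs-ofctl dump-flows br-tun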

Virtual switches do two things: They uphold strict separation between the underlay and overlay through encapsulation, and they use a dynamic virtual switch configuration to ensure that packets from a virtual network only reach VMs that are connected to the same network. The task of Neutron's SDN-specific agent (the L2 agent) is to provide the virtual switches on the hosts with the appropriate IDs. Without these agents, the virtual switches would be unconfigured and therefore unusable on all hosts.
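Whether the L2 agents are actually alive can be checked from the controller with the OpenStack client (depending on the client version, the command is openstack network agent list or neutron agent-list). The service name in the last command is typical for distributions that package the Open vSwitch agent but may differ on yours.

# list all Neutron agents and their state
openstack network agent list
# on a computing node, check the L2 agent service itself
systemctl status neutron-openvswitch-agent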

The described system is sufficient for managing and using VLANs.

Similar Approach for Internet Traffic

The best VM is of little value if it can't access the Internet or is unreachable from the outside. Virtually all SDN approaches stipulate that traffic flowing from or to the Internet uses a separate gateway, which is configured directly from the cloud. The gateway does not even need to be a separate host; the necessary software can run on any computing node, as long as it is powerful enough.

"Software," in this case, means at least network namespaces and in most cases also a Border Gateway Protocol (BGP) daemon of its own. Basically, a typical OpenStack gateway is attached to the underlay quite normally; for those networks for which virtual gateways were configured at OpenStack level, a virtual network interface that is attached to a virtual switch in the overlay also exists on the gateway. Naturally, if you want to route packets into the Internet, this traffic must be separated in the overlay from the traffic of other virtual networks.

Border Gateway

Networking solutions that support BGP offer additional options: Admins can connect their own border gateways to the Internet. The software gateway then only forwards announcements to the border gateway for those IPv4 addresses that are actually used in the cloud. A BGP-based solution offers more possibilities, but it is also clearly more complex and requires more knowledge on the part of the admin. MidoNet handles the topic in a very smart way: The Quagga BGP daemon is tightly interwoven with MidoNet and is configured directly from within MidoNet, which means you do not need to manage it separately.
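What such an announcement boils down to can be sketched with Quagga's vtysh shell – the AS numbers, neighbor address, and prefix below are pure placeholders, and with MidoNet an equivalent configuration is generated for you:

# announce a prefix that is actually used in the cloud (all numbers are placeholders)
vtysh -c 'configure terminal' \
      -c 'router bgp 64512' \
      -c 'neighbor 198.51.100.1 remote-as 64500' \
      -c 'network 203.0.113.0/24' \
      -c 'end' -c 'write memory'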

In the final step, network namespaces help to enforce traffic separation between virtual networks en route to the Internet. Network namespaces are a feature of the Linux kernel that allows multiple virtual network stacks on one host, separate from each other and from the system's main network stack.
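A quick way to get a feel for the feature itself – the namespace name is arbitrary:

# create a separate network stack, look inside it, and remove it again
ip netns add demo
ip netns exec demo ip link set lo up
ip netns exec demo ip addr show
ip netns delete demo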
