Software-defined networking in OpenStack with the Neutron module

Neutron Dance

Using Public IP Addresses

Public IPv4 addresses are typically used in two places in OpenStack clouds: On the one hand, the gateway needs them, because a network namespace must hold a public IP address to act as a bridge between a virtual customer network and the real Internet. On the other hand, many customers want to make their VMs reachable from the Internet via a public IP address.

How IPv4 management works depends on the SDN solution: In the default Open vSwitch setup, Neutron controls the L3 agent on the gateway node, which simply creates virtual tap interfaces in the network namespaces and then assigns a public IP to them.
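
To make this division of labor tangible, the following Python sketch mimics the steps the L3 agent automates on the gateway node. The namespace and interface names, the documentation-range address, and the gateway are made-up examples; this is not Neutron's own code, which derives the real names (qrouter-<uuid>, qg-<port-id>) from its database.

# A minimal sketch of what the L3 agent automates on the gateway node: it plumbs
# an interface into the router's network namespace and assigns the public IP.
# All names and addresses below are illustrative assumptions.
import subprocess

NAMESPACE = "qrouter-example"      # hypothetical router namespace
EXTERNAL_IF = "qg-example"         # hypothetical interface toward the external bridge
PUBLIC_IP = "203.0.113.10/24"      # documentation range, stands in for a real public IP
DEFAULT_GW = "203.0.113.1"

def run(*cmd):
    """Run a command and fail loudly, much like the agent's own helpers."""
    subprocess.run(cmd, check=True)

# Create the namespace and move the (pre-created) interface into it.
run("ip", "netns", "add", NAMESPACE)
run("ip", "link", "set", EXTERNAL_IF, "netns", NAMESPACE)

# Inside the namespace: assign the public address and set the default route.
run("ip", "netns", "exec", NAMESPACE, "ip", "addr", "add", PUBLIC_IP, "dev", EXTERNAL_IF)
run("ip", "netns", "exec", NAMESPACE, "ip", "link", "set", EXTERNAL_IF, "up")
run("ip", "netns", "exec", NAMESPACE, "ip", "route", "add", "default", "via", DEFAULT_GW)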

Advanced solutions such as MidoNet or OpenContrail by Juniper can even speak BGP (see the box entitled "Border Gateway").

Floating IPs

In the cloud context, IPv4 addresses that customers can dynamically allocate and return are called floating IPs. This concept regularly meets with incomprehension, even from experienced sys admins: If you are accustomed to giving a machine its public IP address in the system configuration, assigning addresses with a click in the OpenStack web interface might feel slightly uncomfortable.

However, there are good reasons for floating IPs: First, the floating IP concept lets you implement genuine on-demand payment: Customers really only pay for the IP addresses they need (Figure 3). Second, floating IPs keep down the overhead caused by network segmentation: In OpenStack, a network with a /24 prefix can be configured as a single complete block, so the network address, gateway, and broadcast address only need to be specified once, and all remaining addresses can be assigned individually. Typical segmentation into per-customer subnets would waste several addresses each for network, gateway, and broadcast – splitting a /24 into 32 /29 subnets, for example, would sacrifice 96 of the 256 addresses instead of just 3.
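
The on-demand idea translates directly into API calls. The following sketch assumes the openstacksdk Python library, a clouds.yaml entry named "mycloud", and an external network named "public" – all assumptions for illustration, not part of the setup described in the article.

# Pay-per-use in practice: allocate a floating IP only when it is needed,
# hand it back when it is not, so the meter stops running.
import openstack

conn = openstack.connect(cloud="mycloud")            # assumed clouds.yaml entry
public_net = conn.network.find_network("public")     # assumed external network name

# Allocate a floating IP from the public pool ...
fip = conn.network.create_ip(floating_network_id=public_net.id)
print("allocated", fip.floating_ip_address)

# ... and return it as soon as it is no longer in use.
conn.network.delete_ip(fip)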

Figure 3: Floating IP addresses are dynamically added to VMs, which is useful for billing by resource consumption.

A third benefit of floating IPs is that dynamic IP addresses make processes such as updates possible within a cloud: A new database VM, for example, can be prepared and preconfigured; commissioning it then merely means redirecting the floating IP from the old VM to the new one.
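
A sketch of that update scenario, again assuming openstacksdk; the server name, the address, and the cloud name are illustrative assumptions.

# Move an existing floating IP from the old VM to the prepared replacement:
# re-pointing the floating IP to the new VM's port is the whole "go live" step.
import openstack

conn = openstack.connect(cloud="mycloud")

# Look up the customer-facing address and the preconfigured replacement VM.
fip = next(conn.network.ips(floating_ip_address="203.0.113.10"))
new_server = conn.compute.find_server("db-new")          # hypothetical server name
new_port = next(conn.network.ports(device_id=new_server.id))

# Associate the floating IP with the new VM's port.
conn.network.update_ip(fip, port_id=new_port.id)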

On the technical side, the implementation of floating IPs follows from the preceding observations: In a pure Open vSwitch setup, the Neutron L3 agent simply configures the floating IP on an interface inside the network namespace that is attached to the virtual customer network containing the target VM. BGP-based solutions use a similar approach and ultimately just make sure the packets reach the right host.
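
The effect the L3 agent achieves inside the router namespace can be approximated with standard Linux tools: the floating IP is added to the external interface, and NAT rules map it to the VM's fixed address. Neutron installs such rules in its own iptables chains; the names and addresses below are assumptions for illustration.

# Roughly what makes a floating IP work inside the router namespace:
# an address on the external interface plus a DNAT/SNAT rule pair.
import subprocess

NAMESPACE = "qrouter-example"
EXTERNAL_IF = "qg-example"
FLOATING_IP = "203.0.113.10"
FIXED_IP = "10.0.0.5"          # the VM's address in the virtual customer network

def in_ns(*cmd):
    subprocess.run(["ip", "netns", "exec", NAMESPACE, *cmd], check=True)

# The floating IP lives on the router's external interface ...
in_ns("ip", "addr", "add", f"{FLOATING_IP}/32", "dev", EXTERNAL_IF)

# ... incoming traffic to it is rewritten to the VM's fixed address (DNAT) ...
in_ns("iptables", "-t", "nat", "-A", "PREROUTING",
      "-d", FLOATING_IP, "-j", "DNAT", "--to-destination", FIXED_IP)

# ... and the VM's outgoing traffic is rewritten to the floating IP (SNAT).
in_ns("iptables", "-t", "nat", "-A", "POSTROUTING",
      "-s", FIXED_IP, "-j", "SNAT", "--to-source", FLOATING_IP)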

The situation is quite similar with DHCP agents: A DHCP agent can only serve a virtual customer network if it has at least one foot in it. The hosts running the Neutron DHCP agent are therefore also part of the overlay. Network namespaces are used here as well: a namespace with a virtual tap interface and a matching Open vSwitch configuration is created for each virtual network, and a separate DHCP server (dnsmasq) instance runs in each of these namespaces. Booting VMs issue a DHCP request in the normal way; the request passes through the overlay and reaches the host running the DHCP agent, where it is answered.
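
You can see this structure on the host running the DHCP agent: every virtual network gets its own qdhcp-<network-uuid> namespace. The following read-only inspection sketch lists them and shows the tenant subnet addresses inside each one.

# List the per-network DHCP namespaces and the addresses configured in them.
import subprocess

out = subprocess.run(["ip", "netns", "list"], check=True,
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    if not line.strip().startswith("qdhcp-"):
        continue
    ns = line.split()[0]
    # Show the interfaces (and thus the tenant subnet) inside the namespace.
    subprocess.run(["ip", "netns", "exec", ns, "ip", "-brief", "addr"], check=True)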

Metadata Access

The fact that SDN sometimes has to jump through strange hoops is best illustrated by metadata access in OpenStack. The mechanism goes back to Amazon's EC2 and its metadata service: the cloud-init tool launches when a VM starts in the cloud and sends an HTTP request to the address 169.254.169.254 to retrieve information such as its hostname or the SSH keys that should grant access. This IP is not part of the address space of the virtual network created by the customer – and is consequently first routed to the gateway node.
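
What cloud-init effectively does at boot can be reproduced with a few lines of Python run inside a VM. The EC2-style paths shown here are the ones OpenStack's metadata service also answers; treat the snippet as an illustration rather than cloud-init's actual code.

# Fetch a couple of metadata items from the link-local metadata address.
from urllib.request import urlopen

METADATA = "http://169.254.169.254"

hostname = urlopen(f"{METADATA}/latest/meta-data/hostname", timeout=5).read().decode()
ssh_keys = urlopen(f"{METADATA}/latest/meta-data/public-keys/", timeout=5).read().decode()

print("hostname:", hostname)
print("ssh keys:", ssh_keys)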

The problem is that the metadata is provided by the Nova API service, which usually runs on separate cloud controllers – and these controllers have no connection to the cloud overlay, unless they happen to run the DHCP or L3 agent as well. The Neutron developers ultimately resorted to a fairly primitive hack: The gateway node runs a metadata service consisting of two parts. A proxy intercepts the packets of the HTTP request and hands them over a UNIX socket to the metadata agent, which finally passes them on to the Nova API – directly in the cloud underlay. On the way back, the packets take the reverse route.
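
The relay idea itself is simple, as the following minimal sketch shows: something reachable from the namespace accepts the metadata request and forwards it over a UNIX socket toward the underlay. The listen port and socket path are assumptions chosen for illustration, not Neutron's actual configuration values, and the loop handles one request per connection without HTTP keep-alive.

# A stripped-down TCP-to-UNIX-socket relay (requires Python 3.8+).
import socket

LISTEN_PORT = 9697                                   # hypothetical listen port
AGENT_SOCKET = "/var/lib/neutron/metadata_proxy"     # hypothetical UNIX socket path

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", LISTEN_PORT))
srv.listen(5)

while True:
    client, _ = srv.accept()
    with client:
        request = client.recv(65536)                 # one request, no keep-alive
        upstream = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        with upstream:
            upstream.connect(AGENT_SOCKET)
            upstream.sendall(request)
            upstream.shutdown(socket.SHUT_WR)
            while chunk := upstream.recv(65536):     # relay the response back
                client.sendall(chunk)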
