Running OpenStack in a data center

Operation Troubles

Many Question Marks

The most technically elegant approach is undoubtedly OpenContrail, which relies on open, standardized protocols such as BGP and Multiprotocol Label Switching (MPLS), thus enabling a smooth and well-integrated SDN in OpenStack. In practice, however, OpenContrail suffers from its enormous complexity, and earlier versions did not do well in terms of stability. Finally, OpenContrail cannot be bolted onto just any OpenStack cloud: It imposes strict requirements on the setup, and packages are not available for all standard distributions.

Anyone who is currently building an OpenStack cloud is likely to turn to a solution based on Open vSwitch. The bad news is that a clear recommendation for a product is simply impossible. The selection of SDN solutions is huge. MidoNet [6] (Figure 2) by Midokura, VMware's NSX [7], and various other products are admin favorites right now. In the end, your only option is to evaluate different solutions and take all factors into consideration.

Figure 2: MidoNet by Midokura is an SDN solution based on Open vSwitch that integrates well into OpenStack.

The first question is whether an SDN solution suits the intended deployment scenario at all, and the feature sets on offer sometimes differ considerably. Factors such as stability and the ability to upgrade from one major version of the SDN solution to the next add to the worries. Last but not least, price is important: Almost all prominent SDN solutions cost money, burdening your budget, and evaluating them takes time.

The importance of the initial decision in favor of a particular SDN environment cannot be overstated, because the solution cannot be swapped out in a running OpenStack environment. Solutions such as MidoNet, NSX, or PLUMgrid typically store their configurations in a custom format in a database that other services cannot parse. Once you have rolled out MidoNet, you need to live with it – or rebuild the cloud on another SDN environment and accept that your customers' entire network configuration will be lost.
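The scope of the loss becomes clear when you try to salvage what the APIs expose. The following minimal sketch – assuming the openstacksdk Python package and a clouds.yaml entry named mycloud, both of which are placeholders – dumps the Neutron-visible part of the topology to JSON; everything the SDN backend keeps in its own database stays out of reach.

# Sketch: save the Neutron-visible network configuration to JSON before
# an SDN migration. Assumes the openstacksdk package and a clouds.yaml
# entry named "mycloud" (a placeholder); the SDN backend's internal
# state is not covered by this dump.
import json
import openstack

conn = openstack.connect(cloud='mycloud')

backup = {
    'networks': [n.to_dict() for n in conn.network.networks()],
    'subnets': [s.to_dict() for s in conn.network.subnets()],
    'routers': [r.to_dict() for r in conn.network.routers()],
    'security_groups': [g.to_dict() for g in conn.network.security_groups()],
}

with open('neutron-backup.json', 'w') as f:
    json.dump(backup, f, indent=2, default=str)

Such a dump at least preserves the logical topology for re-creation under a new backend; any vendor-specific state does not survive the move.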

Fewer Problems with SDS

Like networks, storage presents new challenges in the cloud. In conventional setups, centralized storage connected to the rest of the installation via NFS, iSCSI, or Fibre Channel is quite common. Nothing is wrong with these kinds of network storage, even in clouds, except that they do not usually scale well and come with large price tags.

In addition to storage for the VMs, clouds usually require a second storage service for storing binary objects (object storage). At least two protocols matter here: OpenStack Swift and Amazon's S3, which various look-alike services implement. From an admin's perspective, it is desirable to offer both storage types – storage for VMs and for objects – through the same storage service. Typical storage area networks (SANs) do not offer object storage, so you must find another solution.
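From a client's point of view, both protocols boil down to simple HTTP(S) calls. A short sketch with the boto3 library shows an upload and download via the S3 protocol – the endpoint URL and credentials are placeholders, and any S3-compatible service can sit behind them:

# Sketch: store and fetch a binary object over the S3 protocol.
# Endpoint and credentials are placeholders; any S3-compatible
# service (Amazon S3, an S3 look-alike, or a Ceph Object Gateway)
# can answer these calls.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://objects.example.com',  # placeholder endpoint
    aws_access_key_id='ACCESS_KEY',              # placeholder credentials
    aws_secret_access_key='SECRET_KEY',
)

s3.create_bucket(Bucket='backups')
s3.put_object(Bucket='backups', Key='db-dump.sql.gz', Body=b'...')
obj = s3.get_object(Bucket='backups', Key='db-dump.sql.gz')
print(obj['Body'].read())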

In contrast to the SDN scenario, a de facto standard has established itself here in recent years: Ceph [8] dominates large OpenStack clouds in virtually every respect. Red Hat has done much to make the solution known and popular: In addition to numerous new features, stability has improved considerably.

Additionally, Ceph is very versatile: Via the RBD interface, it acts as back-end storage for VMs thanks to its native connection to OpenStack. With the aid of the RADOS gateway, now known as the Ceph Object Gateway, it uses HTTP(S) to deliver binary objects in line with the Amazon S3 or OpenStack Swift protocol. And because Red Hat has trimmed the CephFS feature list down to a stable core, even the POSIX-compatible file system is available as a kind of replacement for NFS.
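What the native connection looks like from a program's perspective can be illustrated with the Python bindings that Ceph ships. The following sketch – assuming the python-rados and python-rbd packages, a readable /etc/ceph/ceph.conf, and an existing pool named volumes, which is a placeholder – creates an RBD image much as OpenStack's Cinder driver does internally:

# Sketch: create a 10GB RBD image, similar to what the Cinder RBD
# driver does when a user requests a volume. Assumes python-rados,
# python-rbd, /etc/ceph/ceph.conf, and a pool named "volumes"
# (the pool name is an assumption).
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('volumes')
    try:
        rbd.RBD().create(ioctx, 'vm-disk-01', 10 * 1024**3)  # size in bytes
    finally:
        ioctx.close()
finally:
    cluster.shutdown()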

Seamlessly Scalable

Ceph's greatest advantage is still that a cluster scales virtually without limits during operation: When space runs out, you simply add an appropriate number of additional servers to create more.
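When that point is reached can even be checked programmatically, because the python-rados binding exposes the cluster's capacity counters. A minimal sketch – the 75 percent threshold is an arbitrary example value – might look like this:

# Sketch: warn when the cluster fills up and it is time to add
# OSD servers. Assumes python-rados and /etc/ceph/ceph.conf;
# the threshold is an arbitrary example.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
stats = cluster.get_cluster_stats()  # sizes are reported in kilobytes
cluster.shutdown()

used_pct = 100.0 * stats['kb_used'] / stats['kb']
print('cluster usage: %.1f%%' % used_pct)
if used_pct > 75.0:
    print('time to add more storage nodes')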

Because Ceph now supports erasure coding, the old rule of thumb "net times three equals gross" no longer applies: The overhead for replication can be reduced considerably. If you do use erasure coding, though, be prepared to compromise: Recovery after a node failure takes much longer and generates significant load on the CPU and RAM of the storage nodes.
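A quick back-of-the-envelope calculation shows what is at stake; the k=4, m=2 profile below is just an example, because the chunk counts are freely configurable per pool:

# Worked example: raw capacity needed for 100TB of net data.
# The k and m values are illustrative, not a recommendation.
net_tb = 100

# Triple replication: every object is stored three times.
raw_replicated = net_tb * 3            # 300 TB raw

# Erasure coding with k=4 data chunks and m=2 coding chunks:
# overhead factor is (k + m) / k = 1.5.
k, m = 4, 2
raw_erasure = net_tb * (k + m) / k     # 150 TB raw

print(raw_replicated, raw_erasure)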

Another great advantage of Ceph is that Red Hat provides the software free of charge and pre-packaged for different distributions. On standard Ubuntu systems, packages provided directly by the vendor can be installed and used. As explained in the second article of this series, Ubuntu's Autopilot, based on MaaS and Juju, is the tool of choice for Ceph deployment: If you select Ceph in Autopilot and then assign the Ceph roles to suitable machines, the result at the end of the setup is a working Ceph cluster connected directly to your OpenStack installation (Figure 3).

Figure 3: Ubuntu's Autopilot is designed for Ceph and installs a complete Ceph cluster at the push of a button. No extra installation work is needed for storage.
