Lead Image © lassedesignen - Fotolia.com

Running OpenStack in a data center

Operation Troubles

Article from ADMIN 40/2017
The third article in the OpenStack series deals with the question of how admins run OpenStack in a data center, what they need to consider in terms of planning, and what problems they are likely to face.

If you have tried OpenStack – as recommended in the second article of this series [1] – you may be thinking: If you can roll out OpenStack with Metal as a Service (MaaS) and Juju within a few hours, the solution cannot be too complex. However, although the MaaS-Juju setup as described simplifies requirements in some places, it initially excludes many functions that a production OpenStack environment ultimately needs.
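For orientation, a minimal rollout of that kind might look like the following sketch, assuming a MaaS region is already running and that the generic openstack-base bundle from the charm store is good enough for a first test; the cloud and controller names are placeholders.

# Register the MaaS region with Juju and bootstrap a controller on it
# (cloud name, API endpoint, and credentials are placeholders)
juju add-cloud my-maas          # interactive; choose type "maas"
juju add-credential my-maas     # supply the MaaS API key
juju bootstrap my-maas maas-controller

# Deploy the reference OpenStack bundle; a production setup would
# use a customized local bundle.yaml instead
juju deploy openstack-base
juju status                     # watch the units converge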

In the third part of this series, I look at how the lessons learned from a first mini-OpenStack can be transferred to real setups in data centers. To do so, I assume the deployment of a production environment with Juju and MaaS, with the unfortunate restriction that the solution costs money from the 11th node that MaaS manages.

Of course, it would also be possible to focus on other deployment methods. As an alternative, Ansible could be used on a bare-bones Ubuntu to roll out OpenStack, but then you would have the task of building the bare metal part yourself. Anyone who follows this path might not be able to apply all the tactics used in this article to their setup. However, most of the advice will work in all OpenStack setups, no matter how they were rolled out.
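As a rough sketch, the Ansible route with the upstream OpenStack-Ansible project looks like this on an Ubuntu deployment host; the playbook names follow the upstream documentation, but verify them against the release you actually deploy.

# Fetch the OpenStack-Ansible tree and prepare the deployment host
git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
scripts/bootstrap-ansible.sh

# After describing your nodes in /etc/openstack_deploy/openstack_user_config.yml,
# run the three standard stages
cd playbooks
openstack-ansible setup-hosts.yml
openstack-ansible setup-infrastructure.yml
openstack-ansible setup-openstack.yml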

Matching Infrastructure

The same basic rules apply to OpenStack environments as to any conventional setup: Redundant power and a redundant network are mandatory. When it comes to network hardware in particular, you should plan big rather than attempting to scrimp from the outset. Switches with 48 25Gb Ethernet ports are now available on the market (Figure 1), and if you want a more elegant solution, you can set up devices with Cumulus [2] and establish Layer 3 routing. Each individual node then uses the Border Gateway Protocol (BGP) to announce the routes to itself across the entire network. This fabric principle [3] is used by major providers such as Facebook and Google to achieve an optimal network setup.

Figure 1: Forty-eight port switches with 25Gb Ethernet per port are now standard and recommended for a new OpenStack setup.
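On Cumulus Linux, such a fabric is typically built with BGP unnumbered, so no peer IP addresses need to be maintained per link. The following fragment sketches what the routing daemon configuration on one leaf switch might look like; the AS number, router ID, and uplink port names are invented.

# /etc/frr/frr.conf (Quagga on older Cumulus releases)
router bgp 65011
  bgp router-id 10.0.0.11
  neighbor swp51 interface remote-as external
  neighbor swp52 interface remote-as external
  address-family ipv4 unicast
    redistribute connected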

If you are planning a production OpenStack environment, you should provide at least two availability zones, which can mean two rooms within the same data center, but located in different fire protection zones. Distributing the availability zones to independent data centers would be even better. This approach is especially important because an attractive Service Level Agreement (SLA) can hardly be guaranteed with a single availability zone.
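On the OpenStack side, availability zones for compute are mapped onto Nova host aggregates; a short sketch with invented aggregate and host names:

# One aggregate per fire protection zone, exposed as an availability zone
openstack aggregate create --zone az1 aggregate-room1
openstack aggregate create --zone az2 aggregate-room2
openstack aggregate add host aggregate-room1 compute01
openstack aggregate add host aggregate-room2 compute11

# Users can then pin an instance to a zone at boot time
openstack server create --availability-zone az1 \
  --flavor m1.small --image ubuntu-16.04 --network private testvm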

Matching Hardware

One challenge in OpenStack is addressing hardware requirements. A good strategy is to divide the required computers into several groups. The first group comprises the servers running controller services – that is, all OpenStack APIs, as well as the control components of the software-defined network (SDN) and software-defined storage (SDS) environments. Such servers do not need many disks, but those they do have must be of high enough quality to support 10 drive writes per day (DWPD) without complaint. Add to this the hypervisor nodes on which the virtual machines (VMs) run: They need plenty of CPU and RAM, but their storage media are unimportant.
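Whether a given SSD meets the 10 DWPD mark can be derived from the terabytes written (TBW) endurance rating on its data sheet; a quick sketch of the arithmetic with made-up numbers:

# DWPD = TBW / (capacity in TB x warranty period in days)
# Example: a 400GB SSD rated for 7,300 TBW over a five-year warranty
echo "scale=1; 7300 / (0.4 * 5 * 365)" | bc
# Output: 10.0 -> exactly 10 drive writes per day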

Whether the planned setup will combine storage and computing on the same node (hyper-converged) is an important factor. Anyone choosing Ceph should provide separate storage nodes that feature a correspondingly large number of hard disks or SSDs and a fast connection to the local network. CPU and RAM, on the other hand, play a subordinate role.
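If the Ceph cluster spans both availability zones, the CRUSH map should reflect that topology so that replicas never land in the same room. A sketch with placeholder bucket and host names (the rule syntax is that of recent Ceph releases):

# Model the fire protection zones as CRUSH room buckets
ceph osd crush add-bucket room1 room
ceph osd crush add-bucket room2 room
ceph osd crush move room1 root=default
ceph osd crush move room2 root=default
ceph osd crush move storage01 room=room1
ceph osd crush move storage02 room=room2

# Replication rule that picks its copies from different rooms
ceph osd crush rule create-replicated cross-room default room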

Finally, small servers that handle tasks such as load balancing do not require large boxes. The typical pizza boxes are usually fine, but they should be equipped with fast network cards.
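A sketch of what such a load balancer might look like for a single OpenStack API in HAProxy; the IP addresses and server names are invented, and the same pattern repeats for every other endpoint:

# Excerpt from /etc/haproxy/haproxy.cfg: three controllers
# answering for the Keystone API behind one virtual IP
listen keystone_public
    bind 192.0.2.10:5000
    balance roundrobin
    option httpchk GET /
    server ctl1 192.0.2.11:5000 check inter 2000 rise 2 fall 3
    server ctl2 192.0.2.12:5000 check inter 2000 rise 2 fall 3
    server ctl3 192.0.2.13:5000 check inter 2000 rise 2 fall 3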

Modular System

Basically, the recommendation is to define a basic specification for all servers. It should include the type of case to be used, the desired CPU and RAM configuration, and a list of approved hard disk and SSD models. Such a specification makes it possible to build the necessary servers as modular systems and to request appropriate quotations. However, be careful: Your basic specification should also account for contingencies and give dealers clear instructions on points of dispute.

If you want to build your setup on UEFI, the network cards may need firmware different from what the vendor normally installs. If you commit the dealer to supplying the servers with matching firmware for all devices, you can save a huge amount of work: If you have to install new firmware on two cards per server yourself, count on at least 20 minutes per system. Given 50 servers, that translates to more than 16 hours of work.

Storage is of particular importance: If you opt for SDS, you will normally want not RAID controllers but plain host bus adapters (HBAs). Look for a model supported by Linux and include it in the specification. You will also want to rule out hacks like SAS expander backplanes: Many fast SAS or SATA disks or SSDs do not help you much in a storage node if they share a low-bandwidth connection to their controller. Ideally, each disk in a storage node is attached directly to a port on the controller, even if that requires additional HBAs.
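Whether the delivered hardware really keeps expanders out of the data path can be checked on the running system; a small sketch:

# Map every disk to its SCSI host and SAS/SATA transport address
lsscsi -t

# If this directory is non-empty, the kernel has detected SAS
# expanders sitting between the HBA and the disks
ls /sys/class/sas_expander/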
