Lead Image © cheskyw, 123RF.com

OpenStack Trove for users, sys admins, and database admins


Article from ADMIN 40/2017
Trove brings DBaaS to OpenStack; however, the service needs meaningful configuration for optimum performance.

In the wake of cloud computing and under the leadership of Amazon, a number of as-a-service resources have for several years been cornering the market previously owned by traditional IT setups. The idea is quite simple: Many infrastructure components, such as databases, VPNs, and load balancers, are only a means to an end for the enterprise.

If your web application needs a place to store its metadata, a database is usually used. However, the company that runs the application has no interest in dealing with a database. A separate server, or at least a separate virtual machine (VM) together with an operating system, would need to be set up and configured for the database. Issues such as high availability increase the complexity. A database with a known login and address that the application can connect to would work just as well, which is where Database as a Service (DBaaS) comes in.

The advantage of DBaaS is that it radically simplifies the deployment and maintenance of the relevant infrastructure. The customer simply clicks in the web interface on the button for a new database, which is configured and available shortly thereafter. The supplier ensures that redundancy and monitoring are included, as well.
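In OpenStack, that one-click workflow maps to a few CLI calls. The following sketch uses the Trove client (python-troveclient); the flavor ID, database, and user names are placeholders for this example, not values from the article:

```shell
# Sketch: provisioning a MySQL instance via Trove (names/IDs are examples).
trove create mydb-prod 3 \
    --size 10 \
    --datastore mysql --datastore_version 5.7 \
    --databases appdb \
    --users appuser:secretpass

# Check provisioning status and look up the connection address:
trove list
trove show mydb-prod
```

Once the instance reaches the ACTIVE state, the application connects to the reported address with the given credentials, exactly as it would to a conventionally hosted database.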

A DBaaS component for OpenStack named Trove [1] has existed for about three years. Although you can integrate it into an existing OpenStack platform, Trove alone is unlikely to make you happy. If you look into the topic in any depth, you will notice that vendors, users, and database administrators have to work hand in hand to create a useful service in OpenStack on the basis of Trove.

In this article, I tackle the biggest challenges that operating Trove poses for all stakeholders. OpenStack vendors can discover more about the major obstacles in working with Trove, and cloud users can look forward to tips for handling Trove correctly in everyday life.


Database performance, in particular, causes headaches for cloud providers for obvious reasons: Whereas databases in conventional setups are regularly hosted on their own hardware, in the cloud, they share the same hardware with many other VMs.

Storage presents an even greater challenge. A database, such as MySQL, running on real metal can connect to its local storage – usually a hard disk or fast SSD on the same computer – without a performance hit. However, VMs that run in clouds usually do not have local storage; instead, they use volumes that access network storage in the background.

A typical example is Ceph used as a storage back end for OpenStack. Each write operation on a VM results in multiple network reads and writes: The Ceph client on the virtualization server receives the write action and passes it to the primary storage device in the Ceph cluster – that is, in Ceph-speak, its primary Object Storage Device (OSD). This primary OSD then sends the same data in a second step to as many other OSDs as defined by its replication policy (Figure 1).

Figure 1: In a write operation, Ceph replicates data in the background and only delivers confirmation to the client if there are enough replicas.

Only when sufficient replicas are created in the Ceph cluster does the VM's Ceph client send confirmation that the write access was successful. The database client, which originally only wanted to change a single entry in MySQL, thus waits through several network round trips for the operation to complete successfully.
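The cost of these round trips can be measured directly on the Ceph cluster. The sketch below uses rados bench with a single outstanding 4KB write, which roughly approximates a database committing one transaction at a time; the pool name "volumes" is an assumption:

```shell
# Sketch: single-threaded small-block write latency on a Ceph pool
# (-t 1 = one op in flight, -b 4096 = 4KB writes). The reported average
# latency includes all replication round trips described above.
rados bench -p volumes 10 write -t 1 -b 4096

# Relate the result to the replica count and inter-OSD network latency:
ceph osd pool get volumes size
ping -c 5 osd-node02
```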

This problem is by no means specific to Ceph: Virtually all solutions for distributed storage in clouds have similar problems. Ceph stands out as a particularly bad example, because the Controlled Replication Under Scalable Hashing (CRUSH) algorithm, which calculates the primary OSD and the secondary OSDs, is particularly prone to latency.

From the provider's point of view, the problem is difficult to manage because a hard lower limit exists: Ethernet has an inherent latency that can only be reduced by latency-optimized transport technologies such as InfiniBand, which means choosing a different network technology that brings its own challenges.

Paths and Dead Ends

Which approaches are open to a provider to achieve mastery over the topic of latency for DBaaS? The obvious approach is not to store VMs for databases from Trove on network storage, but to run them with local storage. In the OpenStack context, this means that the VM and its hard disk do not reside in Ceph or on any other network storage medium, but directly on the local storage medium of the computer node. In such a scenario, however, it is advisable to start the VM on a node with SSDs, because it offers noticeable performance gains with regard to throughput and latency.

The provider would have to configure their OpenStack to do this: Typically, they would set a separate availability zone with fast local storage and then give customers the opportunity to accommodate Trove databases there.
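Such a setup can be sketched with host aggregates mapped to an availability zone; the aggregate, zone, and host names below are examples, and the `--availability_zone` flag assumes a reasonably recent Trove client:

```shell
# Sketch: an availability zone backed by hypervisors with fast local SSDs
# (aggregate, zone, and host names are placeholders).
openstack aggregate create --zone local-ssd db-local-ssd
openstack aggregate add host db-local-ssd compute-ssd-01
openstack aggregate add host db-local-ssd compute-ssd-02

# A customer could then place a Trove instance in that zone:
trove create mydb 3 --size 10 --availability_zone local-ssd
```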

However, what looks like a good idea at first glance turns out to be a horror scenario on closer inspection. A VM that has been started in this way has no redundancy at all. If the hypervisor node with the VM fails, the VM is simply not accessible. If the disk on which the VM and the database are located fails, the data is lost and the user or provider can resort to a backup (which, one hopes, they have created).

Even if you do not assume the horror scenario of a hardware failure, this type of setup harbors more dangers than benefits for the vendor: A VM that only exists locally cannot be moved to another host without downtime. Yet moving VMs is an everyday task in a cloud with hundreds of nodes, because otherwise the individual servers are virtually impossible to maintain. No matter how you look at it, VMs located on the local storage of individual hypervisor nodes are definitely not a good idea.

Evaluation Is Everything

Despite all the disadvantages of local storage, it is also clear that the latency of local storage can never be achieved with network-based storage, especially in the case of sequential writing. Anyone used to using MySQL on Fusion ioMemory (Figure 2) will almost always experience an unpleasant surprise when switching to a DBaaS database in the cloud.

Figure 2: Fusion ioMemory is the fastest storage that can be installed in servers. Latency differences are dramatic, depending on whether MySQL runs on Fusion ioMemory or on Ceph.
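The latency gap between local and network storage can be quantified with fio. The sketch below mimics database commit behavior by forcing a flush after every write (`--fsync=1`); the mount paths are placeholders:

```shell
# Sketch: commit-style write latency, local SSD vs. network volume
# (paths are examples; run both and compare the "clat" percentiles).
fio --name=local  --filename=/mnt/local-ssd/testfile  --rw=randwrite \
    --bs=4k --size=256m --ioengine=libaio --direct=1 --fsync=1 \
    --runtime=30 --time_based

fio --name=volume --filename=/mnt/ceph-volume/testfile --rw=randwrite \
    --bs=4k --size=256m --ioengine=libaio --direct=1 --fsync=1 \
    --runtime=30 --time_based
```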

Cloud providers are practically always caught in one area of conflict: What does the setup actually need to cover? Until that question has a robust answer, it is virtually impossible to choose a suitable storage solution for databases in the cloud.

Many setups – especially small ones – impose minimal requirements on the database, so network-based storage is perfectly fine for those cloud customers. However, anyone who wants to run large setups with thousands of simultaneous database requests is in trouble. As a first step, the provider therefore has to analyze the customer's needs to provide the basis for further planning.
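One way to put numbers on those needs is a synthetic OLTP benchmark against a test instance before committing to a storage back end. The following sysbench sketch assumes sysbench 1.0; the hostname and credentials are placeholders:

```shell
# Sketch: baseline OLTP load test against a candidate DBaaS instance
# (hostname and credentials are examples).
sysbench oltp_read_write \
    --mysql-host=mydb.example.com --mysql-user=appuser \
    --mysql-password=secret --mysql-db=sbtest \
    --tables=10 --table-size=100000 prepare

sysbench oltp_read_write \
    --mysql-host=mydb.example.com --mysql-user=appuser \
    --mysql-password=secret --mysql-db=sbtest \
    --tables=10 --table-size=100000 \
    --threads=64 --time=300 run
```

The transactions-per-second and latency figures from such a run give the provider a concrete basis for deciding between network and local storage.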
