The advantages of configuration management tools

Config Battle

Distributed Key-Value Stores

Virtually all configuration databases describe themselves as key-value stores. This applies to Etcd and Consul, for example, and ZooKeeper is also a typical key-value store. However, if it were merely a matter of maintaining a database for settings, a MySQL-based approach would probably be easier.

All of the candidates in this test aim to provide added value beyond a plain key-value store. Etcd and Consul, in particular, promise efficient tools for managing service fleets. Consul, for example, comes with a built-in discovery service and thus quickly becomes a directory in scaled environments, in which services register and deregister dynamically.

Ultimately, the promise is that these tools will bring an end to the typical static configuration files in /etc and replace them with a dynamic cluster registry. The candidates have to prove they keep that promise in the test that follows.

Etcd from CoreOS

Etcd is essentially a by-product of CoreOS (i.e., of the micro-distribution that specializes in operating Docker containers). Like almost all CoreOS tools, Etcd is written in the Go programming language. Its self-description sounds unspectacular: "etcd is a distributed key value store that provides a reliable way to store data across a cluster of machines" [1].

The Etcd front end is simple by design: an HTTP-based RESTful interface that can be operated with any standard web browser. If the administrator, or a program connected to Etcd, queries its values, they are returned in JSON format. The data structure in Etcd is hierarchical: a key can have multiple subkeys, to which values are then assigned. Multiple services can therefore use Etcd at the same time in a distributed setup without their configuration entries getting in each other's way.
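As a sketch of how this looks in practice (assuming Etcd's default client port 2379 and the v2 keys API; the key name and value here are made up for illustration), a value can be written and read back with any HTTP client:

```shell
# Write a value under a hierarchical key (hypothetical key /services/db/host):
curl -L http://127.0.0.1:2379/v2/keys/services/db/host -XPUT -d value="10.0.0.5"

# Read it back; Etcd answers in JSON, with the key and value
# wrapped in a "node" object, roughly:
# {"action":"get","node":{"key":"/services/db/host","value":"10.0.0.5",...}}
curl -L http://127.0.0.1:2379/v2/keys/services/db/host
```

Because keys nest like directories, a second service could store its settings under, say, /services/web/ without touching the database entries above.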

Full Cluster Compatibility

Etcd is cluster-capable: If standalone Etcd instances are running on the hosts of a setup, they talk to each other in the background using a Raft-based consensus algorithm. The principle is simple: All Etcd instances elect a master from among themselves, which is the authority on all cluster matters. If the elected master fails, a new election takes place automatically, and another node takes charge. A running Etcd cluster also acts as a discovery service for new instances joining the cluster: Instances added later simply ask which Etcd instances already exist and connect to them.

Only when bootstrapping the cluster for the first time do you need to specify which instance is the master (Figure 1). As long as at least one Etcd instance is running, other Etcds can join or leave without a new bootstrapping process.
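A minimal static bootstrap of the first node might look like the following sketch; the node names, IP addresses, and cluster token are invented for illustration, and the flags assume a current Etcd release:

```shell
# Start the first member of a new three-node cluster (static bootstrapping).
# All names and addresses here are illustrative placeholders.
etcd --name node1 \
     --initial-advertise-peer-urls http://10.0.0.1:2380 \
     --listen-peer-urls http://10.0.0.1:2380 \
     --listen-client-urls http://10.0.0.1:2379,http://127.0.0.1:2379 \
     --advertise-client-urls http://10.0.0.1:2379 \
     --initial-cluster-token demo-cluster \
     --initial-cluster node1=http://10.0.0.1:2380,node2=http://10.0.0.2:2380,node3=http://10.0.0.3:2380 \
     --initial-cluster-state new
```

The other two nodes start with the same --initial-cluster list and their own --name; once the cluster is up, further members can join without repeating this procedure.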

Figure 1: Etcd requires a bootstrapping process, which can take place via the vendor's public Etcd discovery directory or a dedicated directory.

Etcd's cluster capabilities also include quorum handling: Etcd automatically notices when a cluster has split into several parts and only continues to work in the partition that has the majority of all existing Etcd instances behind it. The only room for improvement is in geo-clustering: Etcd cannot handle multi-data-center installations because, with two sites, it is impossible to decide which site has decision-making power after the sites lose their connection to each other.
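The majority rule behind this behavior is plain arithmetic, nothing Etcd-specific: a partition keeps working only if it contains more than half of the members, which is why odd cluster sizes are the norm and why a symmetric two-site split can never resolve itself:

```shell
# Quorum (majority) for a cluster of n members is n/2 + 1 (integer division),
# so a cluster of n nodes tolerates (n - 1) / 2 member failures.
for n in 1 3 5; do
  echo "n=$n quorum=$(( n / 2 + 1 )) tolerated_failures=$(( (n - 1) / 2 ))"
done
```

With two equally sized data centers, each half sees exactly n/2 members after a link failure, which is less than the required n/2 + 1, and both sides stop accepting writes.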
