Infrastructure as Code with Terraform

Geographically Diverse Failover in the Cloud

Article from ADMIN 46/2018
With the Terraform configuration management tool and the Amazon Route 53 DNS service, you can configure AWS to provide geographically diverse failover between two web servers.

Special Thanks: This article was made possible by support from Linux Professional Institute

If you ask people within DevOps circles, they’ll tell you that Infrastructure as Code is the way forward: to create infrastructure automatically in a consistent and predictable manner, the DevOps community has embraced a number of extensible tools.

Terraform from HashiCorp is one of the market leaders. It’s not a particularly new technological offering, in that it’s more than two years old, but it’s still absolutely key to running infrastructure in the cloud in an effective and efficient way.

Terraform can speak to multiple cloud providers, making it highly flexible and offering management teams a somewhat conditional promise of having the ability to run cloud-agnostic infrastructure. Cloud providers also provide their own tools, of course, such as CloudFormation from Amazon Web Services (AWS), but to my mind, Terraform is more popular for good reason.

Terraform belongs to the class of software known as configuration management tools, although you can still run server-specific code with Terraform when you’re creating servers, for example. A common approach is to provision the infrastructure with Terraform and then use a configuration management tool like Ansible or Puppet to alter the configuration on the servers once the infrastructure is spun up.
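To make the distinction concrete, here is a minimal sketch of the kind of configuration Terraform consumes, written in v0.11-era HCL syntax to match the versions used later in this article. The region, AMI ID, and tag values are placeholders, not taken from the original article:

```hcl
# Minimal sketch (Terraform v0.11 syntax): one AWS provider, one EC2 instance.
# The region and AMI ID below are placeholders -- substitute your own.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_instance" "web" {
  ami           = "ami-00000000"   # placeholder AMI ID
  instance_type = "t2.micro"

  tags {
    Name = "primary-web-server"
  }
}
```

Note that this only declares the instance; any in-server configuration (packages, web server setup) would typically be handed off to a tool such as Ansible or Puppet afterward.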

In this article, I’ll look at some very simple uses of Terraform on AWS. The last piece of the jigsaw (i.e., testing the setup to make sure it performs as designed) is something I still need to get my head around and will turn into Terraform code at some point in the future.

I wrote this Terraform code because a thought occurred to me recently about the simplest (non-enterprise-oriented) way to provide failover for a personal website I was working on. The failover scenario was simple, and I just wanted a mechanism to check whether a service was available and then trip over to another server if a web server failed for some reason.

I was thinking along the lines of a geographically diverse failover, as opposed to a cloud server sitting next to the primary server (or in another Availability Zone in the same AWS region). Many a moon ago, this wasn’t as easy as it sounds and involved cabling between data centers or relying on (unreliable) third-party providers. However, I remembered that the Amazon Route 53 DNS service offered some useful DevOps-oriented tools and hacked out some basic code to get me started.

Duck Egg

The Amazon Route 53 service is sophisticated, but some of its features are embraced less than others. If you’re familiar with blue-green deployments, for example, you can “weight” your DNS traffic with Route 53 to shift incoming requests from your customers between your blue and green environments when testing a new release. That’s a relatively simple example, but you might be surprised to find out that AWS provides some pretty clever DNS mechanisms (Figure 1).
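A blue-green weighting like the one just described can be expressed in Terraform with two records sharing a name but carrying different weights. The hosted zone ID and IP addresses below are hypothetical placeholders:

```hcl
# Sketch: weighted DNS records for a blue-green split (v0.11 syntax).
# Route 53 answers roughly 90% of queries with "blue" and 10% with "green".
resource "aws_route53_record" "blue" {
  zone_id        = "Z0000000000"        # placeholder hosted zone ID
  name           = "www.example.com"
  type           = "A"
  ttl            = "60"
  records        = ["203.0.113.10"]     # placeholder blue environment IP
  set_identifier = "blue"

  weighted_routing_policy {
    weight = 90
  }
}

resource "aws_route53_record" "green" {
  zone_id        = "Z0000000000"
  name           = "www.example.com"
  type           = "A"
  ttl            = "60"
  records        = ["203.0.113.20"]     # placeholder green environment IP
  set_identifier = "green"

  weighted_routing_policy {
    weight = 10
  }
}
```

Adjusting the two `weight` values and reapplying shifts the traffic split, which is how a gradual cutover to a new release is typically staged.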

Figure 1: AWS offers a number of routing policies, as found in their Developer Guide.

Hashtag Fail

The content delivery network-style geoproximity provision is very slick. The failover mechanism that I was ruminating over recently became available shortly after Route 53 was introduced and was designed for a super-simple scenario (e.g., if A is down, then use B).
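That “if A is down, then use B” scenario maps onto Route 53’s failover routing policy plus a health check. The following is a sketch, again with a hypothetical zone ID and placeholder addresses, not the article’s actual configuration:

```hcl
# Sketch: failover routing (v0.11 syntax). Route 53 probes the primary
# over HTTP; if the health check fails, queries are answered with the
# secondary record instead.
resource "aws_route53_health_check" "primary" {
  fqdn              = "primary.example.com"  # placeholder hostname
  port              = 80
  type              = "HTTP"
  resource_path     = "/"
  failure_threshold = 3
  request_interval  = 30
}

resource "aws_route53_record" "primary" {
  zone_id         = "Z0000000000"            # placeholder hosted zone ID
  name            = "www.example.com"
  type            = "A"
  ttl             = "60"
  records         = ["203.0.113.10"]
  set_identifier  = "primary"
  health_check_id = "${aws_route53_health_check.primary.id}"

  failover_routing_policy {
    type = "PRIMARY"
  }
}

resource "aws_route53_record" "secondary" {
  zone_id        = "Z0000000000"
  name           = "www.example.com"
  type           = "A"
  ttl            = "60"
  records        = ["203.0.113.20"]
  set_identifier = "secondary"

  failover_routing_policy {
    type = "SECONDARY"
  }
}
```

Keeping the TTL low, as here, matters for how quickly resolvers notice the switch.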

At this juncture, I’ll briefly remind you of the main downside of using DNS for failover: Internet access providers cache DNS records. In other words, there’s a high possibility that the changes reflected by Route 53 updates won’t be picked up by broadband providers, mobile operators, and others in a consistent way, so you could lose some traffic until their caches expire.

Against a Dark Background

Before looking more at the Route 53 mechanism, I’ll show you some of my very basic terraforming. One caveat is backward compatibility between different versions of Terraform, so for the sake of completeness, the versions I’m using are Terraform v0.11.7 and provider.aws v1.19.0.
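Given those compatibility concerns, it can help to pin the versions in the configuration itself so Terraform refuses to run with a mismatched toolchain. A minimal sketch, assuming the versions quoted above:

```hcl
# Pin the Terraform and AWS provider versions to avoid
# backward-compatibility surprises (v0.11 syntax).
terraform {
  required_version = "~> 0.11"
}

provider "aws" {
  version = "~> 1.19"
  region  = "eu-west-1"   # placeholder region
}
```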

A very brief “Getting Started” run-through for Terraform might be as follows:

1. Make sure your AWS access key and secret key (from AWS Identity and Access Management, IAM) work, and add them between the double quotes as shown below on your command line to create environment variables (replace "KEY" and "SECRET-KEY"):

$ export AWS_ACCESS_KEY_ID="KEY" ; export AWS_SECRET_ACCESS_KEY="SECRET-KEY"

2. Download the Terraform binary and unzip it into a directory in your user path.

3. Once your code is in place, you run the following commands in the directory in which your code resides, checking for errors (some are cryptic!) as you go:

$ terraform init # set up your env and grab the correct cloud provider module
$ terraform plan # dry-run your code for the sake of being cautious
$ terraform apply # carefully create your infrastructure
