AWS Automation Documents

Automate AWS AMIs

Article from ADMIN 45/2018

AWS Systems Manager Automation documents let you customize your Amazon Machine Images to improve security and avoid config drift.

Special Thanks: This article was made possible by support from  Linux Professional Institute

Automation is the long-standing, presiding champion in any DevOps arena, and even more so in cloud environments, where the emphasis is on short-lived, ephemeral resources that can be safely discarded when they’ve run their course and completed their predetermined task.

When you’re faced with running a multitude of Amazon Elastic Compute Cloud (EC2) instances on Amazon Web Services (AWS), sometimes across multiple regions, upgrading packages and applying security patches on the operating systems (OSs) of your instances can be a daunting task. Once you’re content that your OSs are current and up to date and your installed packages are patched, you then have the task of customizing your Amazon Machine Images (AMIs) to suit in-house needs.

Tied in with how you customize your instances is config drift, a well-known phenomenon in DevOps circles. Whether you’re using a cloud or traditional data center infrastructure, servers all too easily become uniquely configured and somewhat “special.” By that, I mean they become beautiful snowflakes, so called because snowflakes are apparently unique. Compared with other servers providing the same services, these special servers might have distinctly different scripts or applications, or they might have certain packages pinned to specific versions to keep everything else from breaking horribly. These unique characteristics cause issues on a number of levels, such as knowing what you’re allowed to update during patch runs or having to keep track of each snowflake’s idiosyncrasies in the event that an enterprise-wide change (e.g., a new kernel) is urgently required because of a suddenly discovered nasty bug.

In this article, I walk you through the automation of much of the initial inception of your instances – that is, the creation of your bespoke AMIs. During the process, you’ll see how you can call manifests, scripts, or playbooks to customize your AMIs precisely to specific in-house preferences. You’ll also see automatic updates of OS packages, so you know that an instance will be patched and up to date at launch. If you get the manifests, scripts, or playbooks correct during this process, you can avoid config drift by running them every 20 minutes or so over the resulting instances – which introduces the holy grail: idempotency. Simply put, that means you can be absolutely assured that your server config is exactly as you intend it to be at startup. Each file, service, and variable is reset at each execution if it has changed. This process is great for avoiding config drift and, importantly, for security. The value of idempotency is dependent on the effort you put into the scripts you run to enforce it. I’m a big fan from a professional DevSecOps perspective.
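As a tiny sketch of what idempotent enforcement looks like in practice (the file path and setting below are purely illustrative), a script can test for the desired state before changing anything, so running it once or fifty times leaves exactly the same result:

```shell
# Idempotent sketch: ensure a setting is present exactly once, however many
# times the script runs (the path and setting are illustrative examples).
CONF=./sshd_config.demo
touch "$CONF"
ensure_setting() {
    grep -qx 'PermitRootLogin no' "$CONF" || echo 'PermitRootLogin no' >> "$CONF"
}
ensure_setting
ensure_setting   # the second run changes nothing
grep -c 'PermitRootLogin' "$CONF"   # prints 1, not 2
```

The check-before-change pattern is the whole trick: state is converged toward, not blindly appended to, which is why such scripts can safely run on a schedule.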

To my mind, however, the most powerful aspect is that this process can be left to run automatically – hourly, daily, weekly, or using whatever schedule you like – and be trusted to provide exactly what you intended in the first place when you set it up (with detailed logs, if you’re in doubt): server images you can trust implicitly. In an additional step, I’ll also encrypt the main Amazon Elastic Block Store (EBS) volume for greater security.

Automation Is the Future

Someone clever recently said that you should employ people that can automate themselves out of a job and then give them a different job so they can do the same thing again. Although it’s an interesting thought, I can’t help but think of the film Terminator! That aside, I’ll ask the mighty AWS Systems Manager Automation documents to step forward and provide a sensible level of automation.

Systems Manager Automation documents, a part of the Systems Manager Automation service in AWS (look under EC2 to find them quickly), provide a way of scripting using AWS data structures so that a predictable set of functions can be executed and ultimately provide output in a predictable way. There’s a mountain of docs on the AWS site, and although I’m generally a fan of the AWS developer docs, I have to admit I was disappointed this time, because some did not provide the information I needed, which meant a lot of trial and error (e.g., setting up and patching a Windows AMI).

Baby Steps

The example AMI I’ll be using is Ubuntu 16.04 LTS (Xenial Xerus). You’ll need to bookmark the Ubuntu AMI ID locator page for future use. Before I dive in too quickly, though, I’ll look at an issue that needs a little forethought and then the workflow of what the Automation document will look like.

If you’ve already opened the AMI ID locator page linked above, you will have noticed that each AWS region has its own AMI IDs. Automation documents run in each region (per AWS account), and although with some Identity and Access Management (IAM) jiggery-pokery you can allow various services and users to access them, for each region you want to use, you will have to supply a different source AMI ID for the Automation document to ingest. That means choosing an AMI ID for that region and changing (or programmatically calling) that different AMI ID in each region’s Automation document.
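One way to avoid hard-coding each region’s ID is to look it up programmatically. Canonical publishes its images under the well-known owner ID 099720109477, so a query along the following lines (region and name filter shown as an example; the actual call needs working credentials, so it is commented out here) returns the newest Xenial AMI ID for a region:

```shell
# Look up the latest official Ubuntu 16.04 hvm/ssd AMI for one region,
# rather than hard-coding the ID (099720109477 is Canonical's owner ID).
REGION=eu-west-1
# Requires valid AWS credentials, so shown commented out:
# aws ec2 describe-images --region "$REGION" --owners 099720109477 \
#   --filters 'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*' \
#   --query 'sort_by(Images,&CreationDate)[-1].ImageId' --output text
echo "latest Xenial AMI lookup prepared for $REGION"
```

Sorting by CreationDate and taking the last element means the same command keeps working as Canonical publishes newer images.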

From the EC2 AMI locator page (Figure 1), I choose the top AMI ID that offers enhanced hardware integration (hvm, or hardware virtual machine) and faster solid-state drive (ssd) storage. Because I’m in the Dublin, Ireland, region (known as eu-west-1), I will use ami-4d46d534 as my source AMI ID from now on. In other words, when I begin my process, that is the ID of the AMI I will reference. Starting with that image, I will then update and customize the AMI further, which will produce another AMI for my instances to use. Along the way, other AMIs will be created, but ultimately I want the AMI ID of a final image that I can use as my “golden source” – or “Grandfather” machine image, you might say – from then onward.

Don’t worry if that sounds confusing. Table 1 shows step-by-step what the process will achieve. If you need more detail, the AWS docs are very good at providing this bird’s-eye view. (Note, however, that I’ve added the encryption step separately.) Now that you see what steps are needed, I’ll set up an Automation document.

Figure 1: In this case, ami-4d46d534 is the source AMI ID for Dublin in the eu-west-1 region.

Table 1: Overview of Automation Process on AWS

Step | Description | Resources or Effect
-----|-------------|--------------------
1 | The scheduler starts the ball rolling in cron job style. | Amazon CloudWatch or similar.
2 | A temporary AWS instance is spawned from the source AMI (i.e., the official region-specific Ubuntu AMI ID). | A micro-instance (by default) spins up from the AMI.
3 | A snapshot of that instance is created, which gets a new AMI ID and contains an encrypted root EBS storage volume for extra security. | A new AMI ID after copying has completed.
4 | Choose whether to inject your own pre-update script before you automatically update Ubuntu packages. | Call a script using a URL and run it, if desired.
5 | Update the OS packages, with the very welcome option of excluding and including packages not already in the official Ubuntu AMI. This is pretty darn quick, because official AMIs from Ubuntu are already patched if you use a new AMI ID. | In Ubuntu’s case, good old Apt, dpkg, or Aptitude takes care of this step.
6 | Next, I choose to fire an Ansible playbook to harden the AMI and add compliance, although you could equally choose to trigger Bash scripts or some other continuous integration/continuous deployment process. | Simply pass a URL to a script in the Automation document and it will automatically chmod +x and execute the script.
7 | The customized temporary instance is stopped and a snapshot is created. | No more instance.
8 | From the temporary instance’s snapshot, an AMI is created, which is given the resulting AMI ID. | Clean-up and automatic termination of the instance.
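Much of the workflow above (the encryption in step 3 I handle separately) is also implemented by the AWS-managed AWS-UpdateLinuxAmi Automation document, which you can kick off from the CLI. The sketch below prepares its parameters as a JSON file; the instance profile name, role ARN, and script URL are hypothetical placeholders you would replace with your own:

```shell
# Parameters for the AWS-managed "AWS-UpdateLinuxAmi" Automation document.
# The instance profile name, role ARN, and script URL are placeholders.
cat > params.json <<'EOF'
{
  "SourceAmiId": ["ami-4d46d534"],
  "IamInstanceProfileName": ["ManagedInstanceProfile"],
  "AutomationAssumeRole": ["arn:aws:iam::123456789012:role/AutomationServiceRole"],
  "PostUpdateScript": ["https://example.com/scripts/harden.sh"]
}
EOF
# Kick off the run (needs valid credentials and the IAM setup above):
# aws ssm start-automation-execution \
#     --document-name "AWS-UpdateLinuxAmi" --parameters file://params.json
grep '"SourceAmiId"' params.json
```

Keeping the parameters in a file per region makes it easy to swap in each region’s own source AMI ID, as discussed earlier.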


On some Debian derivatives, the Apt package manager holds what I assume (without checking versions) is an older version of the AWS command-line interface (CLI) package. If some features aren’t available when you try to run them, you should use Python’s package manager (i.e., pip), which installs to your local user home directory. Either follow the notes below or reference the superabundance of docs online, and put ~/.local/bin in your user $PATH in your environment’s Bash profile. After using pip to install the package,

$ pip install awscli

if the new path isn’t working, try running

$ . ~/.bashrc

or, on some Bash versions:

$ source ~/.bashrc

In your user shell profile, append ~/.local/bin to the PATH setting in your user .bashrc file (separated by a colon from whatever path was previously there):

$ export PATH="$PATH:$HOME/.local/bin"

Make sure that you also have access to the required AWS Access and Secret keys and then make them available in your ~/.aws/credentials file. Again, you can find more docs online if you get stuck at this stage.
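For reference, the credentials file follows the standard two-line profile format. The keys below are the dummy examples from the AWS docs, not real credentials, and I write to a local demo path here rather than clobbering a real ~/.aws/credentials:

```shell
# Shape of an ~/.aws/credentials file (dummy example keys from the AWS docs;
# written to a demo path here so nothing real is overwritten).
mkdir -p ./aws-demo
cat > ./aws-demo/credentials <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFIPK7MDENG/bPxRfiCYEXAMPLEKEY
EOF
grep -c '^aws_' ./aws-demo/credentials   # prints 2
```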

Troubleshooting and Refinement

Once you’re up and running, if you get stuck with the customization scripts that the Automation document supports (the PRE and POST hook scripts), have a look on the instance you’ve spun up at the logfile /var/log/amazon/ssm/amazon-ssm-agent.log.

I was pleasantly surprised to discover that the scripts pulled down by entering URLs in the Automation document (the PRE and POST hook scripts) are automatically cleaned up from /tmp and indeed have chmod +x added to them so that they execute correctly (Table 1, step 6).

You can pull them over HTTPS, and I keep meaning to try embedding a username and password (sometimes called Basic Auth in the URLs) for a little extra security. It goes without saying that you should figure out a way of securing access to your scripts (e.g., by IP address) if there’s anything remotely sensitive in them or potentially use a mechanism such as single-use tokens.
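For illustration, a POST hook script might look like the sketch below (the repository URL and playbook name are hypothetical placeholders). Here I simply write it out locally and mark it executable to show the shape of script the Automation document pulls by URL, chmods +x, runs, and then cleans up from /tmp:

```shell
# Sketch of a POST hook script (repo URL and playbook are placeholders).
# The Automation document fetches a script like this by URL, marks it
# executable, runs it on the temporary instance, and cleans it up afterward.
cat > ./post-hook.sh <<'EOF'
#!/bin/bash
set -euo pipefail
apt-get -y install ansible git
git clone https://example.com/ops/hardening.git /tmp/hardening
ansible-playbook -i 'localhost,' -c local /tmp/hardening/harden.yml
EOF
chmod +x ./post-hook.sh
head -1 ./post-hook.sh   # prints the shebang line
```

Running Ansible locally on the temporary instance keeps the hardening logic in version control rather than baked opaquely into the image.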

For other assistance, a reasonable starting point among the many AWS docs I mentioned previously might be the “Automation CLI Walkthrough: Patch a Linux AMI” page (look in the left navigation pane for related articles).
