Build operating system images on demand

Assembly Line

Push-Button Image

Whether you want an image for bare metal or for the cloud, pressing the image-creation button at bottom left is all it takes to start the automatic build process (Figures 2 and 3). After a short wait, the browser starts downloading the image, which you can then use on a USB stick, on a CD/DVD, or in the cloud.

Figure 2: FAI.me creates images for the cloud in QCOW2 or AWS format or …
Figure 3: … installation images that equip a physical host with an operating system.

As mentioned, no hocus-pocus is taking place in the background; instead, the web interface calls fai-cd and fai-mirror or fai-diskimage behind the scenes and creates a matching image on the fly. Therefore, you can be absolutely sure that you always get the packages for the latest Debian GNU/Linux.
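For a cloud image, the calls happening behind the scenes boil down to something like the following sketch; the class names, hostname, and paths are assumptions you would adapt to your own FAI configuration:

```shell
# Hypothetical sketch of what the web interface wraps for a cloud image.
# The class list (DEBIAN,AMD64,CLOUD), hostname, and size are assumptions.
IMAGE=/srv/fai/debian-cloud.raw
if command -v fai-diskimage >/dev/null 2>&1; then
    # Write a ready-to-boot 5GB raw disk image ...
    fai-diskimage -v -u cloudhost -S 5G -cDEBIAN,AMD64,CLOUD "$IMAGE"
    # ... and convert it to QCOW2 for use with KVM/OpenStack.
    qemu-img convert -O qcow2 "$IMAGE" "${IMAGE%.raw}.qcow2"
else
    echo "fai-diskimage not installed; commands shown for illustration only"
fi
```

Because `fai-diskimage` pulls the packages at build time, every run of this sketch produces an image with the current Debian package versions.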

Unlike with the big distributors, you decide when to build the image, although this means not using an official image but one you build yourself with FAI.me. What Lange originally intended as a showcase for FAI, and as a way to give users a feel for FAI's range of functions, has turned out to be a very practical tool in its own right.

Your Own Image Factory

To recap, FAI.me has virtually no functionality of its own. The tool uses a preconfigured FAI installation in the background to build images on demand in line with FAI standards, which solves a problem many cloud providers face. Prebuilt cloud images are fine, but sometimes you need local modifications. If you offer special hardware in your cloud and want to pass it through to your users, you find yourself regularly building your own images.

As explained in the "Images or Automation?" box, this question is not trivial, especially if you don't have the right toolset at hand. FAI and FAI.me, on the other hand, have proven to be very useful tools that can quickly form the basis of a local image factory that automatically outputs up-to-date disk images with special local modifications.

Images or Automation?

In my own experience, FAI.me triggers two reactions that could hardly be more different. On the one hand, enthusiastic admins have long needed a tool like this and had not yet found it. On the other hand, more conventional admins with a background in automation turn up their noses.

A conflict comes to light that plays an important role in contemporary IT. Does it make more sense to work with operating system images, or should you instead rely on the vendor's installation tools and use automation to make the required adjustments? Although this discussion is undoubtedly still in full swing, many assumptions and fears are based on obsolete knowledge.

Admins are absolutely right when they warn against monster images that cannot be regenerated when you need them. Companies commonly find that a golden master image for the installation of new systems has "grown historically": It works, but nobody in the company knows exactly what it contains. When a new image has to be built, it often involves massive overhead and consumes a huge amount of time.

The same applies to images you can pick up from dubious "black box" sources on the Internet. One thing you do not want in your data center is a pre-owned image with a built-in Bitcoin miner. Although this is mostly discovered in the context of container images, the same caveat naturally also applies to images of entire operating systems.

By the way, when many admins think of images, they think of bare metal deployments. Because the local variance in this area is much higher than in defined environments such as KVM or VMware, many people in the past believed that monster images were legitimate or even necessary.

As with a pendulum, a countermovement of tinkerers has formed that categorically rejects OS images. Instead, its proponents say, you should install Linux on your hard disk with AutoYaST, Kickstart, the Debian preseeding method, or whatever your distribution uses as an automatic installation tool. According to this narrative, the automation engineer then handles the rest of the work.

However, this problem is easy to work around: Continuous integration and continuous delivery/deployment (CI/CD) environments based on Jenkins offer the ability to build OS images completely automatically. Of course, FAI.me is also an approach to circumventing precisely the problem described. If you use FAI.me to build your images, you can understand the process in detail, and if you so desire, you can also run FAI.me in an instance of its own, which then contains local modifications – but in a comprehensible way.

The images built with FAI.me can just as easily be frugal operating system images that simply prepare a host for use with Puppet, Ansible, or some other automation system. By the way, this is more elegant by several orders of magnitude than the automation structures that some administrators build themselves with scripting in Kickstart, AutoYaST, or preseeding.

One thing should be clear by now: Nothing works without operating system images. They are essential in clouds because virtual instances cannot be built and started without them. Installers from distributions are simply not viable alternatives, because today's clouds do not support the PXE boot functionality that would be required in the first place.

In the end, as is often the case, a whole range of shades of gray exists, and those admins who find the right mix of images on the one hand and automation on the other will be pleased with the result. FAI.me is a promising and well-proven component in such a context.

How It Works

To begin, you set up FAI as if you wanted to use it for the live installation of nodes. Factors like DHCP can be ignored – the purpose is to create bootable media. After that, you can already create your own images with fai-cd and fai-diskimage. But that's only half the battle. Ideally, you want this step embedded in a CI/CD process to ensure that images are rebuilt automatically whenever the FAI configuration changes and are then available for download from a central location.
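Building a bootable installation ISO by hand follows the same pattern; in this sketch, the mirror path and the class list are assumptions to adapt to your setup:

```shell
# Sketch: build a bootable FAI installation ISO without the web interface.
# The mirror directory and classes (DEBIAN,AMD64,GRUB_PC) are assumptions.
MIRROR=/srv/fai/mirror
if command -v fai-cd >/dev/null 2>&1; then
    # First create a partial Debian package mirror for the selected classes ...
    fai-mirror -v -cDEBIAN,AMD64,GRUB_PC "$MIRROR"
    # ... then pack the mirror plus the FAI config into a bootable ISO.
    fai-cd -m "$MIRROR" /srv/fai/fai-installer.iso
else
    echo "FAI not installed; commands shown for illustration only"
fi
```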

Therefore, connecting FAI to a CI/CD tool such as Jenkins is a good idea, and this is exactly what the Debian project does. It stores its FAI configuration in Debian GitLab and uses hooks to wire it to an FAI installation in such a way that the described mechanism is implemented. When a commit ends up in the master branch of the repository, GitLab then ensures that new images are created automatically.
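A minimal CI job along those lines might look like the following sketch of a `.gitlab-ci.yml`; the runner tag, hostname, classes, and image name are all assumptions, not the Debian project's actual configuration:

```yaml
# Hypothetical .gitlab-ci.yml sketch: rebuild the image on every commit
# to the master branch. Assumes a runner tagged "fai" on a VM where the
# FAI tools are installed and the FAI config space is checked out.
build-image:
  stage: build
  tags: [fai]
  only: [master]
  script:
    - fai-diskimage -v -u cloudhost -S 5G -cDEBIAN,AMD64,CLOUD disk.raw
  artifacts:
    paths: [disk.raw]
```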

If you prefer not to overwrite the old images automatically, the recommendation is to encode the date in the name. The GitLab example, in particular, is not difficult to set up if you make sure GitLab has access to a virtual machine on which FAI runs and which can itself access the GitLab repository to build images according to FAI rules.
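Encoding the date in the image name can be as simple as this shell sketch; the base name and class list are again assumptions:

```shell
# Sketch: stamp the build date into the image name so that older
# images survive new builds. Base name and classes are assumptions.
IMAGE="debian-cloud-$(date +%Y%m%d).raw"
if command -v fai-diskimage >/dev/null 2>&1; then
    fai-diskimage -v -u cloudhost -S 5G -cDEBIAN,AMD64,CLOUD "$IMAGE"
else
    echo "would build $IMAGE"
fi
```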

Instead of laboriously developing an image factory yourself, turning to FAI could be a good idea, especially if the target system is Debian, to which FAI is closely tied through its author.
