IT automation with SaltStack

Always on Command

Article from ADMIN 63/2021
SaltStack is a fast and reliable modular toolbox written in Python that contains ready-made modules for many configuration management purposes.

In 2013, we were faced with a move to a new data center, combined with a change in infrastructure – away from classic bare-metal servers to well-dimensioned virtual machines (VMs). Until then, Puppet had been used for configuration management, which brought with it many architecture-related problems and shortcomings. On the other hand, not all Puppet manifests could be replaced immediately, which is why the only viable solution was one that could be used in parallel with Puppet.

After some comparisons, the decision was made to go with SaltStack, which even then came with its own Puppet module, allowing us to control Puppet from within SaltStack. Additionally, SaltStack (Salt for short) impressed with its modular architecture and high speed. Where the runtimes of Ansible and Chef were often unpredictable, SaltStack reliably delivered results after just a few seconds – regardless of whether it served 10 or 100 hosts. For example, load testing of the then-new platform was done with 300 Amazon Web Services (AWS) VMs rolled out by Salt Cloud in a matter of minutes.

SaltStack can be run in many ways: from the classic server-client architecture – master-minion in Salt parlance – to a serverless mode (masterless). SaltStack also demonstrates flexibility when it comes to connectivity: It supports ZeroMQ, SSH, or a plain TCP transport (optionally secured with TLS). Proxy modules can also be used to connect systems that do not support SSH or Python; the proxy module then translates the Salt syntax into commands the target system understands.
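As an illustration of the agentless SSH mode, targets are not registered with a master but listed in a roster file. A minimal sketch (the host names and addresses here are made up):

# /etc/salt/roster: targets for salt-ssh
web01:
  host: 192.0.2.10
  user: root
db01:
  host: 192.0.2.20
  user: root

A host defined this way can then be addressed with, for example, salt-ssh 'web01' test.ping.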

SaltStack enables this versatility through a modular design that has been there from the start. Whether it is a matter of describing desired states, establishing connections, performing actions, or displaying output: everything is modular, interchangeable, and expandable. Many useful defaults are preconfigured, so the admin can simply get started without any serious configuration overhead.

Many admins never get to see the depths of this modular toolbox. If demand grows and you are looking for time-based schedulers or REST APIs, you will find them all included with SaltStack. Whereas other automation tools have to rely on third-party solutions (or be wrapped around other tools), the Salt documentation succinctly explains how these tasks can be solved with just a few lines. The advantage is that the same terminology is always used and configurations are written in a familiar format – simply another building block from the kit that you combine and apply.
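The built-in scheduler is a good example. As a hedged sketch (the job name and interval are arbitrary), a minion can be told in its configuration to apply its states once an hour:

# /etc/salt/minion.d/schedule.conf: run a highstate every 60 minutes
schedule:
  hourly_highstate:
    function: state.apply
    minutes: 60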


The usual SaltStack installation [1] comprises the master-minion-ZeroMQ variant. A Salt master is the instance that controls everything, sending the commands and receiving the results. It distributes state and execution modules and other files to the minions with a built-in file server (saltfs). The Salt minion acts as a client: It receives instructions from the master, processes them by executing state and execution modules, and delivers information and results back to the master.
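For orientation, the master settings relevant to this setup look roughly as follows; the values correspond to the defaults, so a stock installation does not need to be changed (a sketch):

# /etc/salt/master (defaults shown for orientation)
interface: 0.0.0.0      # address the master listens on
publish_port: 4505      # publisher port the minions connect to
ret_port: 4506          # return port for results from the minions
file_roots:
  base:
    - /srv/salt         # root of the built-in file server (saltfs)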

More complex setups involve multiple master servers running in different zones, each controlling their own minions. These zone masters can in turn be controlled by a higher-level master. However, describing such a configuration is beyond the scope of this article.

Assume an instance on a private network that runs a Salt master process. This instance has the hostname salt, which can be resolved by DNS. Two more instances are named minion01 and minion02. A Salt minion is already installed and running on them, and no configurations differ from the default.
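The part of the minion configuration that matters here could look like the following sketch; the values match the defaults, apart from the explicitly set ID (which otherwise falls back to the host's name):

# /etc/salt/minion
master: salt       # name the minion resolves via DNS to find its master
id: minion01       # optional; defaults to the host's FQDN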

When a minion starts, the first thing it does is look for its master. If the configuration does not specify otherwise, then the minion performs a DNS lookup for the name salt and attempts to connect to port 4505 (publisher port) of the address it finds. An unregistered minion then sends its public key to the master and asks to be registered. On the Salt master, you can list these keys and then check that both minions can be reached:

# salt-key -L
    Accepted Keys:
    Denied Keys:
    Unaccepted Keys:
    minion01
    minion02
    Rejected Keys:
# salt 'minion0*' test.ping

With the command

salt-key --accept minion0*

the master accepts the minion01 and minion02 public keys, which are under its control from this point on. If everything goes well, the test.ping call shown above acknowledges each minion name with True (i.e., the minion can be reached and controlled).


To trigger an action on a minion, you need to know how to filter and address specific instances from the minion list. In the default setup, this does not require static or dynamic host lists: The Salt master knows which minions are registered with it, and you can always build complex queries to address them. For example, to see the logrotate configuration of all Debian hosts with two x86_64 CPUs and 4GB of RAM whose names consist of minion followed by at least two digits, you run the command:

# salt -C "G@os:Debian
and G@num_cpus:2
and G@cpuarch:x86_64
and G@mem_total:4096
and E@minion\d{2,}"

The information provided by the minions is referred to as grains in Salt-speak, and they appear with the prefix G@. Grains provide details about network interfaces, RAM, CPUs, the virtualization used, or other similar information. In the example, the master filters the grains by the keys os, num_cpus, cpuarch, and mem_total. Additionally, the minion\d{2,} regular expression introduced by E@ finds all hosts matching the desired naming scheme (i.e. minion01, minion02, etc.). The show_conf command from the logrotate module is then executed on these hosts.
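If you want to check which values a minion actually reports before building such a filter, the grains module can be queried directly; a short sketch:

# salt 'minion01' grains.item os num_cpus cpuarch mem_total
# salt 'minion01' grains.items

The first call returns only the listed keys, the second the complete set of grains.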

If you regularly use such target filters, you can also store them as a nodegroup (a YAML structure in the master configuration) and reuse them, calling the group with:

# salt -C "N@group1" logrotate.show_conf

Nodegroups are even allowed to reference each other.

For example, in nodegroup.conf, group2 could reference group1 and extend it with the condition that the init system must be systemd. In turn, the group3 nodegroup could reference group2 and add a query that only matches hosts on a particular network. Further possibilities and a detailed explanation of the syntax can be found in the Salt documentation under the keyword Targeting [2].
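A sketch of what such definitions could look like in the master configuration (the file name follows the example above; the group contents, in particular the subnet, are assumptions for illustration):

# /etc/salt/master.d/nodegroup.conf
nodegroups:
  group1: 'G@os:Debian and G@num_cpus:2 and G@cpuarch:x86_64 and G@mem_total:4096 and E@minion\d{2,}'
  group2: 'N@group1 and G@init:systemd'
  group3: 'N@group2 and S@192.168.0.0/24'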


Salt typically uses state descriptions (states) to install packages or change configurations. States provide the description of a desired state, which can be an installed package or a file that must exist with certain content and defined permissions on the target system.

These states transparently call the appropriate execution modules (e.g., the aforementioned logrotate module) in the background, which then do the actual work. States are usually defined in YAML format, but they can also be made variable and dynamic with the Jinja2 template engine. SaltStack also accepts other formats for state descriptions, including JSON, Python, or its own domain-specific language (DSL), PyDSL [3].
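A hedged sketch of what such a Jinja2-templated state could look like (the file name and package names are illustrative and not part of the example used later):

# /srv/salt/timesync.sls
{% set pkg = 'ntp' if grains['os_family'] == 'Debian' else 'chrony' %}

Install a time synchronization service:
  pkg.installed:
    - name: {{ pkg }}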

A shebang-style line can be used to override the default renderer dynamically, and you can even form renderer chains. The shebang

#!gpg|jinja|yaml

tells SaltStack to render the state first with GNU Privacy Guard (GPG), then with Jinja2, and finally as YAML. The GPG renderer searches the GPG keyring for the private key matching the public key used for encryption; decryption takes place entirely in memory. After that, the Jinja2 and finally the YAML renderer process the document. By default, the shebang is #!jinja|yaml.

For a state that installs the iotop package, save the content

Ensure that the iotop package is installed:
  pkg.installed:
    - name: iotop

in the /srv/salt/demo_state.sls file. The command

# salt 'minion01' state.apply demo_state

tells Salt to establish the appropriate state on minion01 [4]. The output will look like Figure 1. To get an output format that can be fed to a shell or Python script, add --out json to the command.

Figure 1: A state.apply call to install a software package.
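A brief sketch of how this machine-readable output could then be processed on the shell (assuming jq is installed; --static bundles the returns into a single JSON document keyed by minion ID):

# salt 'minion01' state.apply demo_state --out json --static | jq '.minion01[].result'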

Usually you will want to apply a whole set of states to one or more hosts. Instead of calling state.apply several times (which is possible), it makes more sense to create a file named /srv/salt/top.sls with the content

base:
  'minion01':
    - demo_state

that defines an environment (base), a target (minion01), and a list of states to be applied (demo_state).

In the Salt world, this combination is then known abstractly as a highstate, which can again be rolled out with state.apply, but without specifying explicit states. The output (Figure 2) is similar to that of the previous command, but with minor changes: The runtime of the highstate is far shorter, the Changes section is blank, and the Comment tells you that the iotop package is already installed.
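With the top file in place, the call no longer names a state file:

# salt 'minion01' state.apply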

Figure 2: A highstate job that processes a list of states.
