Five Kubernetes alternatives

Something Else


Nomad [3], developed by HashiCorp, takes an approach fundamentally different from Docker Swarm and Kubernetes, even though the product's objectives sound familiar. Right at the top of its website (Figure 3), Nomad states that it seeks to be a tool that effectively manages "clusters of physical machines and the applications running on them." As usual in the container context, it is useful to translate the term "application" to mean an application of any kind, comprising several microcomponents that have to be rolled out together in a predefined manner and in a specific configuration.

Figure 3: Nomad by HashiCorp also is a job manager for virtualization functions.

Whereas Kubernetes and Swarm rely on a container format (i.e., Docker or Rocket) for this roll-out, Nomad differentiates itself from its competitors right away: In addition to containers, it supports virtualized hardware (i.e., classic virtual machines (VMs)) and standalone applications (i.e., applications that run directly without additional components).

Amazingly, Nomad promises a feature scope that can keep pace with Kubernetes and Docker Swarm, yet comes as a single binary that can be rolled out on any number of hosts without noticeable problems. Nomad does not need any external tools, either: HashiCorp apparently reuses the cluster technology it already built for Serf and Consul (e.g., the consensus algorithm from Consul). The company's experience with those projects thus flows into Nomad.

In fact, a Nomad setup is very simple. Although Kubernetes and Swarm also have tools that roll out a complete cluster in a short time, the admin still has to deal with many components. Nomad, on the other hand, is a single binary that bundles all relevant functions.

Managing Microservices

According to HashiCorp, Nomad has one goal above all: to give the admin a tool for managing the workloads of thousands of small applications efficiently across node boundaries within a cluster.

Importantly, Nomad has no special requirements for the underlying infrastructure: Whether you run Nomad on bare metal in your own data center or in the cloud is irrelevant to the program. Hybrid setups are also no problem, as long as you ensure that the network connection between the Nomad nodes works.

When launching an application, the nomad call defines all the relevant details, such as the driver to be used (e.g., Docker) and the application-specific data. You have a choice between HashiCorp's own HCL format and JSON, which is, however, less easy to read and write.

At the end, you pass the job description directly into Nomad, which now takes care of processing the corresponding jobs in the cluster by starting any required containers.
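As an illustration, a minimal job description in HCL might look like the following sketch; the job name, Docker image, and resource values are hypothetical, and the exact stanza layout varies between Nomad versions:

```hcl
# example.nomad -- a hypothetical minimal job description
job "webapp" {
  datacenters = ["dc1"]

  group "frontend" {
    count = 3   # run three instances somewhere in the cluster

    task "nginx" {
      driver = "docker"        # the driver to be used

      config {
        image = "nginx:1.25"   # application-specific data
      }

      resources {
        cpu    = 200   # MHz
        memory = 128   # MB
      }
    }
  }
}
```

You would then hand this file to the scheduler with `nomad job run example.nomad` (older Nomad versions use `nomad run`), and Nomad places the three instances on suitable nodes.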

It would be a bit too optimistic, though, to expect different services rolled out within the cluster to be able to find each other automatically. The problem is not new in itself: For example, if you want to run one database and many web server setups that access that database in your cluster, the web servers need to know how to reach the database. However, you cannot configure this statically, because the decision as to where the database runs is only made when Nomad processes the job.

In this case, Nomad lets you join forces with Consul: The finished job creates a corresponding service definition in Consul, which the web server application then references.
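Sticking with the database example, the database task could register itself as a service in Consul, which the web servers then resolve by name rather than by a static address. This is a sketch of such a task fragment; the service name, image, and port label are hypothetical:

```hcl
task "db" {
  driver = "docker"

  config {
    image = "postgres:15"
  }

  # Register this task as a service in Consul. Web servers can then
  # look it up by name (e.g., via Consul DNS as db.service.consul)
  # instead of relying on a statically configured address.
  service {
    name = "db"
    port = "pg"   # references a port label defined elsewhere in the job

    check {
      type     = "tcp"
      interval = "10s"
      timeout  = "2s"
    }
  }
}
```

The health check ensures that only reachable database instances are returned to the web servers by Consul.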

Drivers and Web UI

Nomad defines a driver as any interface through which it starts its jobs. It has a driver for Docker, one that connects LXC, and a third that knows how to use Qemu and KVM. As you will quickly notice, Nomad really only sees itself as a scheduling tool – unlike Kubernetes and Docker. For its applications, it relies 100 percent on the functions offered by the drivers. Nomad does not offer its own network or storage functions; all of these come from the attached partner (e.g., Docker).
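Because Nomad only schedules, moving a task to a different workload type is largely a matter of swapping the driver stanza. A task running a classic VM through the Qemu driver might be sketched as follows; the image path is hypothetical, and the available options depend on the Nomad version:

```hcl
task "legacy-vm" {
  driver = "qemu"   # instead of "docker" or "lxc"

  config {
    image_path  = "local/legacy-app.qcow2"   # VM disk image
    accelerator = "kvm"                      # use KVM hardware acceleration
  }

  resources {
    cpu    = 1000   # MHz
    memory = 1024   # MB
  }
}
```

Networking and storage for the VM come entirely from Qemu/KVM here, which illustrates the point above: Nomad itself contributes only the scheduling.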

As you would expect, Nomad offers a RESTful API into which you can feed jobs directly. The API also forms the back end for a graphical user interface, which is included in Nomad's open source version. Additionally, Nomad understands how to deal with regions out of the box, so there's nothing to prevent multisite setups.

When it comes to scalability, Nomad doesn't need to shy away from comparisons with big competitors: According to the developers, Nomad setups with more than 10,000 nodes are in live operation today, and they work reliably. Given that Nomad implies a significantly lower overhead than Kubernetes and Docker Swarm, this statement is credible, even without having tested it myself.

Nomad is not only available as open source. HashiCorp also offers Pro and Premium versions of the software, which are explicitly aimed at customers in the enterprise environment. In addition to the functions of the open source variant, the Pro version offers namespaces that allow a Nomad cluster to be split logically. Autopilot, which takes care of rolling upgrades in the open source version, is available in an advanced version in Pro; it updates all the servers on the fly without downtime. Silver Support (nine hours a day/five days a week with a corresponding service-level agreement (SLA)) is also included.

If you choose the Premium version, you additionally receive resource quotas and a policy framework based on Sentinel, along with Gold Support (24/7 with an SLA). The HashiCorp website does not advertise any prices; however, if you request a Pro or Premium demo, pricing information is provided.

All told, Nomad is a light-footed Kubernetes alternative that is completely sufficient for many companies' use cases. If you want to run microapplications in containers, you could do worse than taking a good look at Nomad in advance.
