Kubernetes for small and medium-sized enterprises

Advantage Small

Automation Is Key

For SMEs, moving workloads to containers can even be a stepping stone toward automation. As I mentioned earlier, many admins overestimate the level of automation in their own environments for a number of reasons. Sometimes companies assume (usually wrongly) that automation is not worthwhile because many tasks occur only once. Sometimes companies are so busy with manual work that they no longer have the resources to look at their automation options. And sometimes proudly announced automation projects come to nothing because automating legacy systems turns out to be far more complicated in practice than it seems in theory.

Operating system containers offer an escape route. If your provider can give you a preconfigured container image, you save yourself much of the application management work that running the software on your own system involves. Tools such as Ansible make it relatively easy to store a configuration file in the right place in the operating system and then start the container with the correct parameters. Containers therefore also make implementing automation far easier.
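A minimal Ansible sketch of this pattern might look like the following playbook. The image name, file paths, and port are purely illustrative – they do not refer to any specific vendor product:

```yaml
# Hypothetical playbook: place a config file, then start a
# vendor-supplied container image with the right parameters.
- name: Deploy application container
  hosts: appservers
  become: true
  tasks:
    - name: Place the application configuration file
      ansible.builtin.copy:
        src: files/app.conf
        dest: /etc/myapp/app.conf
        mode: "0644"

    - name: Start the preconfigured container
      community.docker.docker_container:
        name: myapp
        image: vendor/myapp:1.0        # placeholder image name
        state: started
        restart_policy: unless-stopped
        ports:
          - "8080:8080"
        volumes:
          - /etc/myapp/app.conf:/etc/myapp/app.conf:ro
```

Once a playbook like this exists, rolling the application out to a second or tenth server is a matter of extending the inventory, not repeating manual work.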

Pared-down operating systems such as CoreOS also have the advantage that automating them is a one-off event that can be repeated as often as you need afterward. Once you have moved the bulk of your applications from bare metal to containers, you will no longer see much of a challenge in creating an AutoYaST or Kickstart configuration for your local environment. As a neat side effect, this move to containers also lets you establish bare metal lifecycle management for the remainder, eliminating that manual work for good and ensuring greater efficiency. This entire workflow operates largely without the constraint of continuous integration and continuous delivery or deployment (CI/CD) systems, which hover over many a container setup like the Sword of Damocles – and usually without any good reason.

Interim Conclusions

Moving applications into containers clearly delivers added value, especially in terms of maintainability and automation – without cloud-native application principles or Kubernetes even having to be part of the equation.

If you regularly spend long hours performing the same manual tasks in your organization, containers in combination with an automation tool such as Ansible are a good start on the road to improved maintainability and significantly more automation. Lean Linux distributions further reduce the maintenance overhead and offer the opportunity to establish comprehensive bare-metal lifecycle management.

Naturally, this effect is far less pronounced if the application in question is not available as a preconfigured container from your provider. Even then, though, the work involved in bundling the application into a container and running it yourself still pays dividends in most cases. If you use Docker Hub, make sure that you verify the contents of the containers you use.

What About Kubernetes?

The upshot of the narrative up to this point is that Kubernetes also has a reason to exist in smaller environments. Assuming you have invested hours of work in packaging a conventional setup in containers, you will soon realize that running containers, much like running classic applications, involves a few challenges. For example, anyone running a database needs high availability (HA). Practically nothing has changed in this respect in the past 20 years. What has changed, however, are the tools available to the admin to implement this kind of setup, which a simple comparison easily demonstrates.

Until the advent of cloud computing and scalable environments, database setups regularly comprised multiple components. The underpinnings were servers, between which the admin created a connection with a cluster manager – usually Pacemaker (Figure 3). A shared storage solution such as a distributed replicated block device (DRBD) took care of making the data available on one node or the other. Together with a virtual IP address, the database then migrated as a resource in Pacemaker from node A to node B, or vice versa, depending on which node had just died.
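To give a flavor of what such a classic stack looked like, the following pcs commands sketch the three typical resources – DRBD, virtual IP, and database – as Pacemaker might manage them. All resource names, addresses, and the DRBD resource "r0" are placeholders, and a real cluster would need additional ordering and colocation constraints:

```
# Sketch of a classic Pacemaker HA stack (names and IP are placeholders)
pcs resource create db_drbd ocf:linbit:drbd drbd_resource=r0 promotable
pcs resource create db_ip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24 --group db_group
pcs resource create db_fs ocf:heartbeat:Filesystem \
    device=/dev/drbd0 directory=/var/lib/mysql fstype=ext4 --group db_group
pcs resource create db_server ocf:heartbeat:mysql --group db_group
pcs constraint order promote db_drbd-clone then start db_group
```

Every one of these resources is something the admin has to configure, monitor, and debug by hand – which is exactly the complexity discussed next.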

Figure 3: A Pacemaker setup from the good old days but for running virtual machines rather than a database, for example. Components like Pacemaker can easily be replaced by Kubernetes when it comes to running specific services.

Setups of this type, however, are not much fun. Even though Pacemaker has become more user friendly in recent years, the software is still highly complex. Most admins will want to steer clear of this kind of setup if they find a way to do so.
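For comparison, a rough Kubernetes equivalent of the same idea fits in one manifest. This is a hedged sketch, not a production HA database setup: the image, secret, and claim names are hypothetical, and the persistent volume claim is assumed to be backed by storage that survives node failure. If the node dies, the Deployment controller reschedules the pod, and the Service plays the role the virtual IP used to:

```yaml
# Hypothetical sketch: Kubernetes restarts the database elsewhere
# on node failure; no Pacemaker, DRBD agent, or VIP to maintain.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: mariadb
          image: mariadb:11
          env:
            - name: MARIADB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret        # placeholder secret
                  key: root-password
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: db-data   # assumes node-independent storage
---
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: db
  ports:
    - port: 3306
```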
