Lead Image © Ping Han, 123RF.com

Opportunities and risks: Containers for DevOps

Article from ADMIN 33/2016
Containers are an essential ingredient for various DevOps concepts, but used incorrectly, they do more harm than good.

Like the term "cloud," the phrase "DevOps" is now used so widely that people can no longer agree on a single definition. A common definition states that DevOps is cooperation between developers and operations professionals that optimizes IT and development processes and thus leads to better coordinated delivery of software and the appropriate infrastructure. Most definitions no longer lay down specific requirements regarding the tools or programs used.

Containers have become widely accepted, and the meteoric rise of Docker has fixed the technology in the minds of developers, administrators, and planners alike. On the one hand, containers quickly provide developers with a clean environment in which to experiment freely. On the other hand, containers significantly reduce the overhead required for deployment. You would be forgiven for thinking that a container is the technical implementation of the DevOps principle.

However, anyone dealing with containers could be taking a big risk in everyday operations, because a black box (Figure 1) in the form of a container is a frightening scenario for administrators.

Figure 1: The community area of the Docker Hub includes many images; often, their production is incomprehensible, which can become a problem (e.g., when updating).

In this article, I explain the potential problems of containers and present alternatives, because if you use containers correctly, you can profit from their benefits and avoid the pitfalls.

Inventory

A look at the initial state when using Docker helps in understanding the problems associated with containers: Developers work on containers and in the end have a finished product in which the desired application runs smoothly. They usually start their work with a finished image: Anyone wanting, for example, to develop a web application based on Ubuntu 16.04 starts with a container in which Ubuntu and Apache are already installed and then adds their own application. If the application runs after a few adjustments to the container, the developer can create a new image from it and hand it to the administrator.
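The build-on-a-base-image workflow described above can be sketched as a minimal Dockerfile. The application directory `./webapp` and the resulting layout are assumptions for illustration, not the article's actual example:

```Dockerfile
# Start from a finished base image (Ubuntu 16.04, as in the example)
FROM ubuntu:16.04

# Install the web server on top of the base system
RUN apt-get update && \
    apt-get install -y apache2 && \
    rm -rf /var/lib/apt/lists/*

# Add the developer's own application (illustrative path)
COPY ./webapp /var/www/html/

EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
```

Running `docker build -t webapp .` in the directory containing this file produces exactly the kind of finished image the developer then hands over to operations.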

The administrator is responsible for the IT infrastructure operations: This individual loads the container onto a Docker host and puts it into operation. Immediately afterward, the container service is available on the network; the developer and administrator can give themselves a pat on the back (if they're not the same person) and remove it from their to-do list. It only becomes apparent later that the container could be a problem.
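The handover to operations can be sketched as a short shell session. The image name `webapp` and the port mapping are illustrative assumptions, and the `run` wrapper only prints each command so the sequence can be inspected safely; drop the wrapper to execute it for real:

```shell
#!/bin/sh
# Sketch of the admin-side handover: load the image the developer
# delivered and start it on a Docker host.
run() { echo "+ $*"; }   # dry run: print each command instead of executing it

run docker load -i webapp.tar                            # import the delivered image
run docker run -d --name webapp -p 80:80 webapp:latest   # start the service on port 80
run docker ps --filter name=webapp                       # confirm the container is running
```

Immediately after the `docker run` step, the service is reachable on the network, which is exactly the point at which the container tends to drop off the to-do list.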

Update Problems

The problem becomes evident when software needs to be brought up to date, usually because of a security update. Numerous examples from the past, such as the various flaws in standard SSL libraries and the bug in the C library's resolver (libc), illustrate the point.

When push comes to shove, the admin has to update multiple systems at the same time. The solution is simple for physical machines: As soon as the distributor provides a suitable update, the system's update function or an automated solution installs the appropriate packages.
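On Debian and Ubuntu systems, for example, such an automated solution can be as simple as enabling the unattended-upgrades mechanism via a small apt configuration fragment (conventionally placed in `/etc/apt/apt.conf.d/20auto-upgrades`); the two-line form shown here is a minimal sketch:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

With this in place, the package lists are refreshed and pending security updates are installed daily without any manual intervention, which is precisely what containers lack out of the box.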

The process becomes more complicated with Docker-style containers. Admins can often say little about a container's ability to receive updates, because it is typically based on a third-party provider's base image. In the best case, the container has automatic updates enabled and is built in line with common standards, so updates simply work. An application that ships its own libraries within the container, such as a locally installed C library, can also be updated directly in the container.

Alternatively, the developer could rebuild the container so that it contains the necessary security fix from the beginning. To do this, however, you need to make sure the developer can actually recreate the container with the application. Depending on the number of rolled-out containers, this rebuild would also need to be applied to many different containers at the same time; anyone running hundreds or thousands of containers therefore faces a mammoth project.
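The scale of that rebuild-and-redeploy cycle can be sketched as a shell loop. The registry, image tag, and host names are illustrative assumptions, and the `run` wrapper only prints each command so the sequence can be reviewed before anything is executed:

```shell
#!/bin/sh
# Sketch of a mass rebuild and redeploy after a security fix.
run() { echo "+ $*"; }   # dry run: print each command instead of executing it

# 1) Rebuild the image from a freshly pulled base so the fix is baked in
run docker build --pull -t registry.example.com/webapp:1.0.1 .
run docker push registry.example.com/webapp:1.0.1

# 2) Replace the running container on every affected host
for host in node1 node2 node3; do
    run ssh "$host" docker pull registry.example.com/webapp:1.0.1
    run ssh "$host" docker rm -f webapp
    run ssh "$host" docker run -d --name webapp -p 80:80 \
        registry.example.com/webapp:1.0.1
done
```

Even in this toy version, three hosts already mean a dozen operations per fix; multiply that by hundreds or thousands of containers and the mammoth project mentioned above becomes obvious.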

Development vs. Operations

Scenarios like the one cited here illustrate the most important difference between development and operations. Development usually involves the task of improving existing applications, including adding new features or adapting the container to different conditions. Administration, on the other hand, concerns itself with stability, security, and maintainability.

For example, if an existing component of a container application provides new features, the developer will want to use them in the application. However, it isn't the developer's job to worry about how the application operates. Thus, it is usually enough that a new feature performs within the development environment as desired.

On the other hand, admins have completely different interests with regard to the operation of a platform. New features might be nice, but the new version of the software must operate within the framework of the existing container. Admittedly it is often rather cumbersome to roll out an application so that it fits into the existing infrastructure. Although deploying the original development environment as a container is a temptation and seems feasible at first, for the reasons discussed here, admins should resist this temptation.

Companies don't need to forgo containers completely, though, because they certainly provide benefits. The dilemma isn't caused by the use of containers but by the problematic way in which they are used, and administrators have ways to employ containers without the concomitant frustration.

