Lead Image © lightwise, 123RF.com

Container technology and work organization

Redistribution

Article from ADMIN 33/2016
DevOps and container technology often appear together, as if having one automatically gave you the other. However, as the following plea for reason shows, it isn't quite that simple.

Some people consider container systems like Docker [1] a new form of virtualization – a natural continuation of KVM [2], Xen [3], VMware [4], Hyper-V [5], and the Zones concept in Solaris [6]. For others, containers are successors of the Java Runtime Environment [7].

Others see the technology as a new packaging format for applications. Containers are thus a bit like an enhanced version of DEB, RPM, or even TAR archives. Yet another perspective simply views the technology as a set of additional processes and configurations that allow better resource management. If you have had success virtualizing something, you might assume it would be easy to move on to containerization.

But it doesn't always work that way, and the devil isn't just in the details. Three major issues continue to challenge the move to containers: IT security, operations, and human resources.

Security

A question mark hovers over security in the container world. Typical questions from IT security officers include: Where are the container images stored? How are the images protected against manipulation? How can security vulnerabilities be identified, and how can they be closed? How can sensitive data be secured against spying or unwanted changes? What about multitenancy? What precautions protect the run-time environment?

The list of questions could go on forever, and the search for answers almost always shows that container technology is still rather new. Newer projects do answer some of these questions; for others there are only ideas or ongoing discussions, and a satisfactory solution is unfortunately often missing.

Conventional virtualization tools let you scan software to find and close security vulnerabilities; you can engage in patch management and secure the run-time environment. If you store images using conventional virtualization, you can rely on tried and tested techniques for secure data management.

However, various industry standards, such as PCI DSS (Payment Card Industry Data Security Standard) [8], SSAE 16 (Statement on Standards for Attestation Engagements No. 16) [9], and ISO 27001 [10], do not address container technology. Guidelines are missing for securing Docker and other container technologies using the common practices of the IT industry.

The lack of security standards is a challenge for both IT managers and auditors. The result is often isolated solutions, because adopters are forced to reinvent the wheel. The detection of vulnerabilities in container images is a good example: Projects such as Clair [11] and Nautilus [12] have only recently developed this capability (Figure 1).

Figure 1: Projects such as Clair or Nautilus close large gaps in the container world but are still relatively new to the scene.
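To illustrate where such a scanner would hook in, the following Python sketch uses the Docker SDK for Python to inventory the locally stored images and their IDs, which is the raw material a scanner such as Clair or Nautilus works from. The sketch only builds the inventory; handing the images over to a scanner is left out, because that step depends entirely on the tool you deploy.

# Minimal sketch: inventory locally stored Docker images as input for a
# vulnerability scanner such as Clair or Nautilus. Assumes the Docker SDK
# for Python ("pip install docker") and a running Docker daemon.
import docker

def list_local_images():
    """Return (tag, image ID) pairs for all locally stored images."""
    client = docker.from_env()
    inventory = []
    for image in client.images.list():
        for tag in (image.tags or ["<untagged>"]):
            inventory.append((tag, image.id))
    return inventory

if __name__ == "__main__":
    # Print the inventory; a real setup would pass each entry to a scanner.
    for tag, image_id in list_local_images():
        print(tag, image_id)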

In the past, anyone who wanted to use Linux containers securely had to come up with their own solution, and once implemented, they had to consider the cost and complications of migrating later to a community solution.

Image signing is a similar example [13]. Signing has only existed in the Docker world since 2015, with the arrival of Notary [14]. This project comes late, if not too late, for some container security experts.
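To give an idea of what signing looks like in day-to-day use: The Docker CLI ties into Notary through content trust, which is switched on with the DOCKER_CONTENT_TRUST environment variable. The short Python sketch below simply wraps the CLI with that variable set, so that pulling an unsigned or tampered image fails; the image name is just a placeholder.

# Minimal sketch: pull an image with Docker content trust (backed by Notary)
# enabled, so that unsigned or tampered images are rejected at pull time.
# Assumes the docker CLI is installed; the image name is a placeholder.
import os
import subprocess

def trusted_pull(image):
    """Pull an image with content trust enforced; raises CalledProcessError
    if signature verification fails."""
    env = os.environ.copy()
    env["DOCKER_CONTENT_TRUST"] = "1"  # switch on Notary-backed verification
    subprocess.run(["docker", "pull", image], env=env, check=True)

if __name__ == "__main__":
    trusted_pull("alpine:latest")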

Which format is used? Where should you migrate and when? Does the Open Container Initiative help or just slow things down? The issues of security in the container universe should not be underestimated – particularly in production operations (Figure 2).

Figure 2: No fewer than four projects are competing for supremacy in the container orchestration field.

Day-to-day Operations

Using container technology in the harsh reality of day-to-day production is not necessarily easy – particularly in traditional or growing IT environments. One of the problems is knowing who is even responsible for the container system. Technically speaking, container technology sits between the lower operating system layers and the application layer, so infrastructure teams and system administrators are often unsure who is actually in charge of the container configuration.

Admins are particularly fond of pushing responsibility for the contents of the running instance, or for the underlying image, onto the application managers. But application managers aren't familiar with tasks such as maintaining and servicing operating system software or integrating harmoniously with the underlying infrastructure.

Classic IT operations often require a clear distinction between the different layers or components, but the container world doesn't allow a simple definition. The organization thus needs to be restructured to implement containers successfully. But who thinks about reorganizing the company's administrative structure just to use a new technology? Enthusiasts could throw in the key word "DevOps" at this point.

Commercial support, integration into existing software landscapes, and integration with a service provider are also important considerations, as is the question of the right configuration management tool. Puppet, Ansible, and Chef are the usual suspects.
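For containers, what all of these tools boil down to is idempotence: You describe the desired state, and the tool only acts if reality differs. The hedged Python sketch below illustrates that pattern with the Docker SDK for Python rather than any particular Puppet, Ansible, or Chef module; the container and image names are placeholders.

# Minimal sketch of the idempotent "ensure this container is running" pattern
# that configuration management modules implement. Uses the Docker SDK for
# Python; container and image names are placeholders.
import docker
from docker.errors import NotFound

def ensure_running(name, image):
    client = docker.from_env()
    try:
        container = client.containers.get(name)
        if container.status != "running":
            container.start()      # container exists but is stopped
            return "started"
        return "unchanged"         # already in the desired state
    except NotFound:
        client.containers.run(image, name=name, detach=True)
        return "created"

if __name__ == "__main__":
    print(ensure_running("web-frontend", "nginx:stable"))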

Even a look at the tools from Docker itself shows that operations have only recently become a focus. Tools such as Docker Trusted Registry (DTR) [15] and Docker Universal Control Plane (DUCP) [16] have only been available since the middle or end of last year, and other tools are still missing.

People Take Center Stage

The DevOps key word now appears more and more often in connection with containers, which reinforces the argument that containers require some organizational restructuring. That is easier said than done, however. Most IT departments don't start on a greenfield site: Customers often have a significant influence on the structure and operation of existing processes, procedures, and protocols, so the service provider can't simply change things in isolation.

Experience shows that it is often even more difficult: You need to continue to serve traditional customers in the usual way while introducing new processes and procedures for other business partners. Striking this balance is a crucial test for the IT department and might lead to confusion among employees.

The simultaneous existence of traditional and DevOps approaches within a company automatically causes tensions and conflicts. Software such as container technology is not the solution. On the contrary: Containers are often a trigger for more tension.

The network issue is a good example. In the traditional approach, dedicated teams or entire departments manage VLANs and firewall rules, and established protocols and procedures govern amendments and enhancements; the applicants are typically system administrators or application consultants. In the container world with DevOps, it is all very different: All of these tasks are assigned to a single team. The external blessing of firewall changes by the network group is therefore automatically lost, and turf wars are virtually guaranteed.

The traditional distinction between development and operations is simply thrown in the trash overnight. Administrators need to develop an understanding of developers' needs, and developers need to learn why IT operations are more complicated than they look at first glance.

Consider the example of the network group and the DevOps team. The network group must learn to trust the new type of operations team and its work, but trust is not a one-way street: The DevOps people shouldn't simply break off all communication with the network group and shake their heads because they feel misunderstood. How does it work? Talking and listening help.

The DevOps team can learn from the networkers' knowledge and (possibly decades of) experience – communication certainly won't hurt anyway. In turn, the networking group can benefit from new approaches and methods.

The key to a solution is mutual respect for traditional and brand-new knowledge and the people behind it. It isn't always easy to break the ice. I have had some good experiences with external moderation. External means outside of the project or the subject matter – not necessarily by a different company. The important thing is for both sides to recognize the moderator as a figure of authority.

Practical experience has shown that a pilot project is a good indicator of whether the whole organization can change and continue to develop. It is important that the scope is clearly limited and separated from other everyday business – you could also describe it as a classic lab experiment. In summary, first come the people, and the rest (e.g., software and technology) will take care of itself.
