Tips and Tricks for Containers

Inspect and extend containers, modify definition files, and create read-only containers for security.

Various aspects of containers make them more useful to you. The following discussion has no single theme; I just look at various properties of containers that I think are worth knowing. In the following sections, I focus on Singularity and Docker.

Making containers read-only can make them more resistant to attacks, such as gaining access to a container to add or delete code to create an exploit, because a read-only container prevents an attacker from modifying the files inside. Of course, the read-only state also keeps legitimate users from modifying the container. A read-only designation does not just help with security; it also helps ensure that the containers are the same every time. Because the containers are locked down so they cannot be modified, they execute the same way every time (continuity of execution).

Docker Read-Only Containers

By default, Docker containers are read-write. Because users of the container have elevated privileges, they can modify the container while it is running. However, unless the container is saved, any changes are lost once the container is stopped and erased from memory. In a subsequent section of this article, I discuss how changes can be saved – also referred to as "extending" the container.

In Docker, you can use the --read-only option to disable writing to the container:

$ sudo docker run --gpus all -it --name testing \
  -u $(id -u):$(id -g) -e HOME=$HOME \
  -e USER=$USER -v $HOME:$HOME \
  --rm  --read-only nvidia/cuda:10.1-base-ubuntu18.04

A read-only container likely still needs to access local files (i.e., files in the container, such as those in the /var, /etc, or /run filesystems). The Docker option --tmpfs creates a temporary filesystem in memory that can be used for reading and writing.

For example, if you want to make /var writable, you could use the --tmpfs /var option. You can use as many --tmpfs options as you need on the command line. This option is useful because it does not write a volume back to the Docker host.

If you run the Docker container as read-only, you still might need some read-write filesystems (e.g., /tmp and /dev). Fortunately, the --tmpfs option lets you create these temporary read-write filesystems at runtime.

An example of a docker run command that creates a writable filesystem for /var and /etc looks like:

$ sudo docker run --gpus all -it --name testing \
  -u $(id -u):$(id -g) -e HOME=$HOME \
  -e USER=$USER -v $HOME:$HOME \
  --rm  --read-only --tmpfs /var \
  --tmpfs /etc nvidia/cuda:10.1-base-ubuntu18.04

Notice that both /var and /etc use --tmpfs. These temporary mountpoints use host memory. When the container stops, these filesystems are removed and any files that are in these filesystems are not saved.
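You can see the temporary filesystem from inside the container. A quick check with df, with representative output (the size depends on tmpfs defaults and any size option you pass):

$ sudo docker run --rm --read-only --tmpfs /var \
  nvidia/cuda:10.1-base-ubuntu18.04 df -h /var
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            16G     0   16G   0% /var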

Other options used with --tmpfs let you better refine a specific filesystem mount:

$ sudo docker run --gpus all -it --name testing \
  -u $(id -u):$(id -g) -e HOME=$HOME \
  -e USER=$USER -v $HOME:$HOME \
  --rm --read-only --tmpfs /var \
  --tmpfs /etc:rw,nosuid,noexec,size=2g \
  nvidia/cuda:10.1-base-ubuntu18.04

In this case, /etc is mounted read-write (rw), with no set-user-ID files allowed (nosuid), no execution of binaries (noexec), and a fixed size of 2GB (size=2g). For more options, check out the tmpfs docs.

Running Docker containers as read-only while using temporary filesystems for the few paths that must be writable is a great combination that achieves a truly read-only container.
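To confirm the read-only behavior, try writing somewhere that is not backed by a tmpfs mount. A minimal check (the path and output shown here are representative):

$ sudo docker run --rm --read-only nvidia/cuda:10.1-base-ubuntu18.04 \
  touch /opt/testfile
touch: cannot touch '/opt/testfile': Read-only file system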

Singularity Read-Only Containers

By default, Singularity container filesystems are read-only, so you do not have to do anything to attain that state. Pretty simple.
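You can verify this behavior much as with Docker. A quick check, assuming an image file named my_image.simg (a hypothetical name; the output is representative):

$ singularity exec my_image.simg touch /opt/testfile
touch: cannot touch '/opt/testfile': Read-only file system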

Extending a Container

Sometimes the user of a container will want to add something to it to create a customized container. This addition could be another package, tool, or library – something needed in the container. Note that putting datasets in containers is really not recommended unless they are very small and used for the specific purpose of testing or baselining the container application.

Adding applications to containers is generically termed extending them and implies that users can install something into a container and save the container somewhere (e.g., Docker Hub or Singularity Hub).

The next two sections discuss how to extend Docker and Singularity containers – that is, the containers themselves, not the base images.

Extending a Docker Container

A Docker container is writable by default. If --rm was used, the container stops when you exit, is erased, and any changes are lost. If you do not use that option, the container still exists on the host after you exit. You can take advantage of this situation with a Docker command that saves the container as an image by committing it to a local repository. The docker commit command, run outside of the container, commits the container's changes into a new image.

The overall process of extending a Docker container is not too difficult, as the next example illustrates. The images on my local repository on my desktop are:

$ docker images
REPOSITORY           TAG                     IMAGE ID            CREATED             SIZE
nvidia/cuda          10.1-base-ubuntu18.04   3b55548ae91f        4 months ago        106MB
hello-world          latest                  fce289e99eb9        16 months ago       1.84kB

Running the nvidia/cuda image without the --rm option,

$ docker run --gpus all -ti nvidia/cuda:10.1-base-ubuntu18.04
root@c31656cbd380:/#

makes it possible to extend the container. Recall that this option automatically removes the container when it exits. However, it does not remove the image on which it is based. If you did use this option, on exiting the container, all of your changes would disappear.

At this point, you can install whatever you want in the running container, such as Octave or Scilab; here, I install Octave. A word of caution: Before using the appropriate package manager to install anything, synchronize and update the package repository. (I speak from experience.) Below is the abbreviated output for updating the repository for Ubuntu (the container operating system):

root@c31656cbd380:/# apt-get update
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
...
Fetched 18.0 MB in 9s (1960 kB/s)

After the package repositories are synced, I can install Octave:

# apt-get install octave
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
...
 
done.
done.
Processing triggers for install-info (6.5.0.dfsg.1-2) ...
Processing triggers for libgdk-pixbuf2.0-0:amd64 (2.36.11-2) ...

Now, I'll make sure Octave is installed:

root@c31656cbd380:/# octave
octave: X11 DISPLAY environment variable not set
octave: disabling GUI features
GNU Octave, version 4.2.2
Copyright (C) 2018 John W. Eaton and others.
This is free software; see the source code for copying conditions.
There is ABSOLUTELY NO WARRANTY; not even for MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  For details, type 'warranty'.
 
Octave was configured for "x86_64-pc-linux-gnu".
 
Additional information about Octave is available at http://www.octave.org.
 
Please contribute if you find this software useful.
For more information, visit http://www.octave.org/get-involved.html
 
Read http://www.octave.org/bugs.html to learn how to submit bug reports.
For information about changes from previous versions, type 'news'.

Finally, exit from the container.

Although I have exited the container, it still exists on the host (in the Exited state), as you can see with docker ps -a:

$ docker ps -a
CONTAINER ID        IMAGE                               COMMAND             CREATED             STATUS                      PORTS   NAMES
c31656cbd380        nvidia/cuda:10.1-base-ubuntu18.04   "/bin/bash"         11 minutes ago      Exited (0) 39 seconds ago           my_image

The next step is to use docker commit and the container ID to save the stopped container to a new image. You also need to specify the name of the new image:

$ docker commit c31656cbd380 cuda:10.1-base-ubuntu19.04-octave
sha256:b01ee7a9eb2d4e29b9b6b6e8e3664442813f858d14307a09263f3322f3e5732e

The container ID corresponds to the container you want to put into the Docker repository – the local repository. After saving it locally, you might want to push it to a more permanent repository to which you have access.
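For example, you could push the new image to Docker Hub by retagging it under your account and pushing it. A minimal sketch, assuming a hypothetical Docker Hub username myuser:

$ docker tag cuda:10.1-base-ubuntu19.04-octave myuser/cuda:10.1-base-ubuntu19.04-octave
$ docker login
$ docker push myuser/cuda:10.1-base-ubuntu19.04-octave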

To make sure the new image is where you want it, use docker images:

$ docker images
REPOSITORY           TAG                            IMAGE ID            CREATED             SIZE
cuda                 10.1-base-ubuntu19.04-octave   b01ee7a9eb2d        47 seconds ago      873MB
nvidia/cuda          10.1-base-ubuntu18.04          3b55548ae91f        4 months ago        106MB
hello-world          latest                         fce289e99eb9        16 months ago       1.84kB

Extending a Singularity Container

As mentioned earlier, Singularity containers are read-only and immutable by default because the image uses SquashFS, a read-only filesystem. When Singularity runs a container, a few filesystems that likely need read-write access are mounted read-write from the host into the container:

  • $HOME
  • /tmp
  • /proc
  • /sys
  • /dev
  • $PWD

Being read-only creates some issues for extending an immutable Singularity image. However, you can extend an image with a brute force process that modifies the original container definition file and builds a new image. What do you do if you do not have the definition file? Fortunately, when a Singularity image is created, the definition file is embedded in the image, and you have an option to inspect the container and list this definition file, allowing you to edit it to create your new image.

For example, begin by creating a Singularity image from a Docker image:

$ singularity build cuda_10_1-base-ubuntu18_04.simg docker://nvidia/cuda:10.1-base-ubuntu18.04
INFO:    Starting build...
Getting image source signatures
Copying blob 7ddbc47eeb70 done
Copying blob c1bbdc448b72 done
Copying blob 8c3b70e39044 done
Copying blob 45d437916d57 done
Copying blob d8f1569ddae6 done
Copying blob 85386706b020 done
Copying blob ee9b457b77d0 done
Copying config a6188358e1 done
Writing manifest to image destination
Storing signatures
2020/05/02 07:47:53  info unpack layer: sha256:7ddbc47eeb70dc7f08e410a6667948b87ff3883024eb41478b44ef9a81bf400c
2020/05/02 07:47:54  info unpack layer: sha256:c1bbdc448b7263673926b8fe2e88491e5083a8b4b06ddfabf311f2fc5f27e2ff
2020/05/02 07:47:54  info unpack layer: sha256:8c3b70e3904492c753652606df4726430426f42ea56e06ea924d6fea7ae162a1
2020/05/02 07:47:54  info unpack layer: sha256:45d437916d5781043432f2d72608049dcf74ddbd27daa01a25fa63c8f1b9adc4
2020/05/02 07:47:54  info unpack layer: sha256:d8f1569ddae616589c5a2dabf668fadd250ee9d89253ef16f0cb0c8a9459b322
2020/05/02 07:47:54  info unpack layer: sha256:85386706b02069c58ffaea9de66c360f9d59890e56f58485d05c1a532ca30db1
2020/05/02 07:47:54  info unpack layer: sha256:ee9b457b77d047ff322858e2de025e266ff5908aec569560e77e2e4451fc23f4
INFO:    Creating SIF file...
INFO:    Build complete: cuda_10_1-base-ubuntu18_04.simg

Going forward, the image cuda_10_1-base-ubuntu18_04.simg will be used in this example. You can inspect the image for its definition file with the -d option:

$ singularity inspect -d cuda_10_1-base-ubuntu18_04.simg
bootstrap: docker
from: nvidia/cuda:10.1-base-ubuntu18.04

Because the starting point was a Docker image, the definition file is very simple. The point is that every Singularity container has a definition file embedded in the image, and it can be extracted with a simple command, which allows anyone to reconstruct the image.

The process for extending a Singularity image is simply to take the embedded definition file, modify it to add the needed libraries or tools, and rebuild the image. Simple.

As an example, start with the extracted definition file and first make sure Octave is not already installed:

$ singularity shell cuda_10_1-base-ubuntu18_04.simg
Singularity> octave
bash: octave: command not found
Singularity> exit
exit

Now, take the definition file and modify it to install Octave:

BootStrap: docker
From: nvidia/cuda:10.1-base-ubuntu18.04
%post
    . /.singularity.d/env/10-docker*.sh
 
%post
    cd /
    apt-get update
 
%post
    cd /
    apt-get install -y octave

With this updated definition file, you can create a new image; in this case, just name it test.simg.
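To build the new image, point singularity build at the modified definition file. A minimal sketch, assuming the definition file above was saved as octave.def (a hypothetical filename; building from a definition file requires root):

$ sudo singularity build test.simg octave.def

After the image is built, shell into it and try the octave command: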

$ singularity shell test.simg
Singularity> octave
 
(process:13030): Gtk-WARNING **: 11:04:09.607: Locale not supported by C library.
Using the fallback 'C' locale.
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
Singularity> exit
exit

The GUI pops up on the screen, so you know it is installed. With no direct command to extend a Singularity image, you have to get the definition file from the existing image, update it, and create a new image.

As I have mentioned in past container articles, HPCCM is a great, easy-to-use tool for creating Dockerfiles or Singularity definition files because it contains many building blocks for common HPC components, such as Open MPI or the GCC or PGI toolchains. HPCCM recipes are written in Python and are usually very short.
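As a small illustration of how short such a recipe can be, the following hypothetical recipe.py (the package list is my own choice) rebuilds the Octave example with HPCCM's baseimage and apt_get building blocks:

Stage0 += baseimage(image='nvidia/cuda:10.1-base-ubuntu18.04')
Stage0 += apt_get(ospackages=['octave'])

It can then be converted into a Singularity definition file:

$ hpccm --recipe recipe.py --format singularity > octave.def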

HPCCM makes creating Dockerfiles or Singularity definition files very easy; therefore, I tend to use it almost exclusively. However, I would like to store the recipe in the image just as Singularity stores its definition file. Fortunately, Singularity allows you to add metadata to your image in a %labels section of the definition file. After HPCCM creates the Singularity definition file, you can then just add a %labels section that contains the recipe.
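As a minimal sketch, such a %labels section might look like the following (the label key hpccm.recipe is my own choice, and because label values are single lines, the recipe is flattened onto one line):

%labels
    hpccm.recipe Stage0 += baseimage(image='nvidia/cuda:10.1-base-ubuntu18.04'); Stage0 += apt_get(ospackages=['octave'])

You can then read the recipe back from the built image at any time with singularity inspect -l.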

Inspecting a Docker Container

Containers are relatively new in computing, so when you pull or download container images, it might be a good idea to inspect them as best you can before using them to create a container. Moreover, inspecting a container to learn something from it that you can use in your own containers is a great way to move forward.

Docker has a couple of commands that can be useful in inspecting or learning about the container. The first straightforward command is docker inspect <image> (note that the output has been abbreviated):

$ docker inspect nvidia/cuda:10.1-base-ubuntu18.04
[
    {
        "Id": "sha256:3b55548ae91f1928ae7315b9fe43b3ffa097a3da68f4be86d3481e857241acbb",
        "RepoTags": [
            "nvidia/cuda:10.1-base-ubuntu18.04"
        ],
        "RepoDigests": [
            "nvidia/cuda@sha256:3cb86d1437161ef6998c4a681f2ca4150368946cc8e09c5e5178e3598110539f"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2019-11-27T20:00:08.137590731Z",
        "Container": "f8cdd4d69d0b5123a712b66cd12a46799daff6e23896e73c6bfd247a981daa71",
        "ContainerConfig": {
            "Hostname": "f8cdd4d69d0b",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
 
...
 
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:cc967c529ced563b7746b663d98248bc571afdb3c012019d7f54d6c092793b8b",
                "sha256:2c6ac8e5063e35e91ab79dfb7330c6154b82f3a7e4724fb1b4475c0a95dfdd33",
                "sha256:6c01b5a53aac53c66f02ea711295c7586061cbe083b110d54dafbeb6cf7636bf",
                "sha256:e0b3afb09dc386786d49d6443bdfb20bc74d77dcf68e152db7e5bb36b1cca638",
                "sha256:37b9a4b2218692d028f9f26aa9cb85bf1f56d9abe612ba31304643bdb448484f",
                "sha256:b16af11cbf2977eb52ba4d6cee5b713721cc19812b8c90ea1f22e7e7641301fa",
                "sha256:808fd332a58a1cc1ecda89295c2d9ef8e594674e476bc5eb25e99374515a1c7d"
            ]
        },
        "Metadata": {
            "LastTagTime": "0001-01-01T00:00:00Z"
        }
    }
]

A second option useful in inspecting a Docker image is docker history:

$ docker history nvidia/cuda:10.1-base-ubuntu18.04
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
3b55548ae91f        5 months ago        /bin/sh -c #(nop)  ENV NVIDIA_REQUIRE_CUDA=c...   0B
<missing>           5 months ago        /bin/sh -c #(nop)  ENV NVIDIA_DRIVER_CAPABIL...   0B
<missing>           5 months ago        /bin/sh -c #(nop)  ENV NVIDIA_VISIBLE_DEVICE...   0B
<missing>           5 months ago        /bin/sh -c #(nop)  ENV LD_LIBRARY_PATH=/usr/...   0B
<missing>           5 months ago        /bin/sh -c #(nop)  ENV PATH=/usr/local/nvidi...   0B
<missing>           5 months ago        /bin/sh -c echo "/usr/local/nvidia/lib" >> /...   46B
<missing>           5 months ago        /bin/sh -c apt-get update && apt-get install...   25.1MB
<missing>           5 months ago        /bin/sh -c #(nop)  ENV CUDA_PKG_VERSION=10-1...   0B
<missing>           5 months ago        /bin/sh -c #(nop)  ENV CUDA_VERSION=10.1.243    0B
<missing>           5 months ago        /bin/sh -c apt-get update && apt-get install...   16.5MB
<missing>           5 months ago        /bin/sh -c #(nop)  LABEL maintainer=NVIDIA C...   0B
<missing>           6 months ago        /bin/sh -c #(nop)  CMD ["/bin/bash"]            0B
<missing>           6 months ago        /bin/sh -c mkdir -p /run/systemd && echo 'do...   7B
<missing>           6 months ago        /bin/sh -c set -xe   && echo '#!/bin/sh' > /...   745B
<missing>           6 months ago        /bin/sh -c [ -z "$(apt-get indextargets)" ]     987kB
<missing>           6 months ago        /bin/sh -c #(nop) ADD file:a48a5dc1b9dbfc632...   63.2MB

By using the docker history command, you can almost reverse engineer a Dockerfile from an existing container:

$ docker history --format "{{.CreatedBy}}" --no-trunc nvidia/cuda:10.1-base-ubuntu18.04 | tac
/bin/sh -c #(nop) ADD file:a48a5dc1b9dbfc632f6cf86fe27b770b63f07a115c98c4465dc184e303a4efa1 in /
/bin/sh -c [ -z "$(apt-get indextargets)" ]
...
/bin/sh -c #(nop)  ENV CUDA_VERSION=10.1.243
/bin/sh -c #(nop)  ENV CUDA_PKG_VERSION=10-1=10.1.243-1
/bin/sh -c apt-get update && apt-get install -y --no-install-recommends         cuda-cudart-$CUDA_PKG_VERSION cuda-compat-10-1 && ln -s cuda-10.1 /usr/local/cuda &&     rm -rf /var/lib/apt/lists/*
/bin/sh -c echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf &&     echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf
/bin/sh -c #(nop)  ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
/bin/sh -c #(nop)  ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
/bin/sh -c #(nop)  ENV NVIDIA_VISIBLE_DEVICES=all
/bin/sh -c #(nop)  ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
/bin/sh -c #(nop)  ENV NVIDIA_REQUIRE_CUDA=cuda>=10.1 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=396,driver<397 brand=tesla,driver>=410,driver<411

Inspecting a Singularity Container

As previously mentioned, you can inspect a Singularity image for the definition file. The same command with other options extracts more information. For example:

$ singularity inspect -l -r -d -e -t cuda_10_1-base-ubuntu18_04.simg
WARNING: No SIF metadata partition, searching in container...
bootstrap: docker
from: nvidia/cuda:10.1-base-ubuntu18.04
 
 
#!/bin/sh
OCI_ENTRYPOINT=''
OCI_CMD='"/bin/bash"'
CMDLINE_ARGS=""
# prepare command line arguments for evaluation
for arg in "$@"; do
    CMDLINE_ARGS="${CMDLINE_ARGS} \"$arg\""
done
 
 
# ENTRYPOINT only - run entrypoint plus args
if [ -z "$OCI_CMD" ] && [ -n "$OCI_ENTRYPOINT" ]; then
    if [ $# -gt 0 ]; then
        SINGULARITY_OCI_RUN="${OCI_ENTRYPOINT} ${CMDLINE_ARGS}"
    else
        SINGULARITY_OCI_RUN="${OCI_ENTRYPOINT}"
    fi
fi
 
# CMD only - run CMD or override with args
if [ -n "$OCI_CMD" ] && [ -z "$OCI_ENTRYPOINT" ]; then
    if [ $# -gt 0 ]; then
        SINGULARITY_OCI_RUN="${CMDLINE_ARGS}"
    else
        SINGULARITY_OCI_RUN="${OCI_CMD}"
    fi
fi
 
# ENTRYPOINT and CMD - run ENTRYPOINT with CMD as default args
# override with user provided args
if [ $# -gt 0 ]; then
    SINGULARITY_OCI_RUN="${OCI_ENTRYPOINT} ${CMDLINE_ARGS}"
else
    SINGULARITY_OCI_RUN="${OCI_ENTRYPOINT} ${OCI_CMD}"
fi
 
# Evaluate shell expressions first and set arguments accordingly,
# then execute final command as first container process
eval "set ${SINGULARITY_OCI_RUN}"
exec "$@"
 
 
#!/bin/sh
# Custom environment shell code should follow
 
 
org.label-schema.build-date: Saturday_2_May_2020_10:52:41_EDT
org.label-schema.schema-version: 1.0
org.label-schema.usage.singularity.deffile.bootstrap: docker
org.label-schema.usage.singularity.deffile.from: nvidia/cuda:10.1-base-ubuntu18.04
org.label-schema.usage.singularity.version: 3.5.3

The various options applied to the original Singularity image (the one without Octave) are:

  • -d: show the image definition file
  • -e: show the environment settings for the image
  • -l: show the labels for the image
  • -r: show the runscript for the image
  • -t: show the test script for the image
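If you want just one piece of information, pass a single flag. For example, to list only the labels of the image built earlier (matching the labels in the combined output above):

$ singularity inspect -l cuda_10_1-base-ubuntu18_04.simg
org.label-schema.build-date: Saturday_2_May_2020_10:52:41_EDT
org.label-schema.schema-version: 1.0
org.label-schema.usage.singularity.deffile.bootstrap: docker
org.label-schema.usage.singularity.deffile.from: nvidia/cuda:10.1-base-ubuntu18.04
org.label-schema.usage.singularity.version: 3.5.3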

Summary

Containers are evolving and being used more and more. As you experiment, learn, and use containers, I hope these odds and ends, or tips and tricks, will help you take advantage of your containers.

Tags: container, HPC
