Cloud-native storage for Kubernetes with Rook


Simpler Start

The basic requirement before you can try Rook is a running Kubernetes cluster. Rook does not place particularly high demands on the cluster: The configuration only needs to support creating local volumes on the individual cluster nodes with the existing Kubernetes volume manager. If this is not possible on all machines, Rook's pod definitions let you specify explicitly which machines are allowed to provide storage and which are not.
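Such node selection is expressed in the cluster definition itself. The following excerpt is a sketch along the lines of Rook's example manifests; the exact field names and API version vary between Rook releases, and the node and device names here are purely illustrative:

```yaml
# Hypothetical excerpt from a Rook cluster definition.
apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook
  namespace: rook
spec:
  storage:
    useAllNodes: false        # only the nodes listed below provide storage
    useAllDevices: false
    nodes:
    - name: node-a            # hostname as known to Kubernetes
      devices:
      - name: sdb             # raw device handed to an OSD
    - name: node-b
      directories:
      - path: /var/lib/rook-storage
```

Nodes that do not appear in the list simply run no OSD pods and contribute no storage to the cluster.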

To make getting started as easy as possible, Rook itself ships as a set of Kubernetes pods. You can find example files on GitHub [6] that start these pods: The operator namespace contains all the components required for Rook to control Ceph, and the cluster namespace starts the pods that run the Ceph components themselves.
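With the example files from GitHub, rolling out both namespaces comes down to a couple of kubectl calls. The filenames below follow the example repository at the time of writing and may differ in later releases:

```shell
# Start the Rook operator (creates the operator namespace and its pods)
kubectl create -f rook-operator.yaml

# Then declare the Ceph cluster itself (creates the cluster namespace)
kubectl create -f rook-cluster.yaml

# Watch the MON and OSD pods come up
kubectl get pods -n rook -w
```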

Remember that for a Ceph cluster to work, it needs at least the monitoring servers (MONs) and the data silos, the object storage daemons (OSDs). In Ceph, the monitoring servers take care of both enforcing a quorum and ensuring that clients know how to reach the cluster by maintaining two central lists: The MON map lists all existing monitoring servers, and the OSD map lists the available storage devices.

However, the MONs do not act as proxy servers. Clients always need to talk to a MON when they first connect to a Ceph cluster, but as soon as they have local copies of the MON map and the OSD map, they talk directly to the OSDs and to the other MON servers.

Ceph, as controlled by Rook, makes no exceptions to these rules. Accordingly, the cluster namespace from the Rook example also starts corresponding pods that act as MONs and OSDs. If you run the

kubectl get pods -n rook

command after starting the namespaces, you can see this immediately: At least three pods will be running MON servers, along with various pods running OSDs. Additionally, the rook-api pod, which is of fundamental importance for Rook itself, handles communication with the Kubernetes API.

At the end of the day, a new volume type is available in Kubernetes after the Rook rollout. It points to the various Ceph front ends and can be used in pod definitions like any other volume type.
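In practice, a pod definition then references Rook storage much like any other volume. The excerpt below is a hypothetical sketch based on Rook's FlexVolume examples; the driver string, image, and pool names are assumptions and depend on your Rook version and cluster setup:

```yaml
# Hypothetical pod excerpt mounting a Rook-provisioned RBD volume.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /var/lib/data
  volumes:
  - name: data
    flexVolume:
      driver: rook.io/rook     # Rook's FlexVolume driver
      fsType: ext4
      options:
        image: demo-image      # RBD image to map
        pool: replicapool      # Ceph pool backing the volume
```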

Complicated Technology

Rook does far more work in the background than you might think. A good example is its integration into the Kubernetes volumes system: Ceph running in Kubernetes is all very well, but it is useless if the other pods cannot use the volumes created there. The Rook developers therefore tackled the problem and wrote their own volume driver for use on the target systems, which complies with the Kubernetes FlexVolume guidelines.

Additionally, a Rook agent runs on every kubelet node and handles communication with the Ceph cluster. If a RADOS Block Device (RBD) originating from Ceph needs to be connected to a pod on a target system, the agent ensures that the volume is also available to the target container by calling the appropriate commands on that system.
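Under the hood, attaching an RBD to a node boils down to commands of this kind, which the agent effectively issues on the kubelet host. This is a simplified sketch with illustrative pool, image, and path names; the actual mount paths are managed internally by Rook and the kubelet:

```shell
# Map the RBD image into the node's device tree (appears as /dev/rbdX)
rbd map replicapool/demo-image

# Create a filesystem on first use, then mount it where the
# kubelet expects the volume (path shown here is illustrative)
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /var/lib/kubelet/pods/<pod-uid>/volumes/demo
```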

The Full Monty

Ceph currently supports three types of access. The most common variant is to expose Ceph block devices, which can then be integrated into the local system with the rbd kernel module. The Ceph Object Gateway, also known as the RADOS Gateway (Figure 2), provides a RESTful interface to Ceph based on the Swift and S3 protocols. Finally, CephFS, a front end that offers a distributed, POSIX-compatible filesystem with Ceph as its back-end storage, has been approved for production use for some months now.

Figure 2: The RADOS Gateway supports S3-based access to Ceph, as does Rook, in addition to CephFS and RBD.

From an admin point of view, it would probably already have been very useful if Rook were only able to use one of the three front ends adequately: the one for block devices. However, the Rook developers did not want to skimp; instead, they have gone whole hog and integrated support into their project for all three front ends.

If a container wants to use persistent storage from Ceph, you can either create a real Docker volume with a volume directive, organize access credentials for the RADOS Gateway for RESTful access, or mount CephFS locally. The functional range of Rook is quite impressive.
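The CephFS path, for example, starts with a filesystem declaration that Rook turns into the necessary metadata servers and pools. The following is a hypothetical sketch modeled on Rook's example manifests; the kind, API version, and field names may differ between releases:

```yaml
# Hypothetical CephFS definition for Rook.
apiVersion: rook.io/v1alpha1
kind: Filesystem
metadata:
  name: rookfs
  namespace: rook
spec:
  metadataPool:
    replicated:
      size: 3          # three replicas for filesystem metadata
  dataPools:
  - replicated:
      size: 3          # three replicas for file data
  metadataServer:
    activeCount: 1     # one active MDS, Rook adds a standby
```

Once the filesystem pods are running, pods can mount the filesystem the same way they would any other shared volume.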


