Lead Image © sdecoret, 123RF.com

Storage cluster management with LINSTOR

Digital Warehouse

Article from ADMIN 58/2020
LINSTOR is a toolkit for automated cluster management that takes the complexity out of DRBD management and offers a wide range of functions, including provisioning and snapshots.

The open source LINSTOR software manages block storage in large Linux clusters and simplifies the deployment of high availability with Distributed Replicated Block Device version 9 (DRBD 9). It provisions storage dynamically and, through its integration drivers, simplifies storage management for Kubernetes, Docker, OpenStack, OpenNebula, and OpenShift. Alternatively, the software can be used exclusively to provide high availability with DRBD. In this article, I look into the setup and operation of the tool.

At first glance, storage automation appears to be a simple problem: It has been a long while since the tasks of creating, enlarging, and deleting storage volumes were subject to the restrictions imposed by the size of individual storage devices or the size and position of the slices or partitions created on them. Technologies such as the Logical Volume Manager (LVM) or the Zettabyte File System (ZFS) have long made it possible to manage storage pools in which storage volumes of almost any number and size can be created or deleted with equal ease.

At first glance, it might seem easy enough to automate these manual tasks with a few scripts. However, automation requires attention to tiny details and holds many potential pitfalls for the admin. What goes unnoticed in manual administration can quickly lead to problems in automation – for example, when device files do not yet exist in the /dev filesystem, even though the command for creating a volume has already reported completion, or when you want to delete volumes that are no longer mounted but are still in use by the udev subsystem. Add the requirement to manage an entire cluster centrally, rather than just a single system, and it becomes clear that a few rough-and-ready scripts are not up to the task.

Three Components

In these cases, LINSTOR [1] from LINBIT comes into play. It is an open source software bundle comprising several components for storage cluster management. It supports not only the creation, resizing, and deletion of simple individual volumes on one of the storage cluster nodes, but also the replication of individual volumes – or of consistency groups of several volumes – between several cluster nodes. This replication is handled by DRBD. You can also use LINSTOR to configure additional functionality for volumes, such as encryption or data deduplication. Especially with more complex storage technologies like DRBD, the software automatically manages various system resources, including the TCP/IP port numbers needed for the replication links and the "minor numbers" for the device files in the /dev filesystem. Calculating the storage space required for the DRBD metadata is also automated.

LINSTOR has essentially three components:

  1. The LINSTOR controller manages the configuration of the entire cluster and must be able to run on at least one node of the cluster. For reasons of high availability, the controller is normally installed on several nodes.
  2. The LINSTOR satellite runs on each node of the storage cluster, where it performs the steps required to manage the storage space automatically.
  3. LINSTOR clients complete the package. On the one hand, this could simply be the command-line application, known as the LINSTOR client, which can be used to operate LINSTOR both manually and from scripts. On the other hand, it could be a driver that integrates LINSTOR with the storage management of other products, such as virtualization environments like OpenStack, OpenNebula, or Proxmox, or container technologies like Kubernetes.

All of these components are network transparent, which means, for example, that a client does not have to run on the same node as the controller with which it communicates.

Installation

To install LINSTOR, you need at least version 8 of the Java Runtime Environment (JRE), which is normally installed as a dependency when installing the distribution packages. Whether this is OpenJDK, Oracle HotSpot VM, or IBM Java VM is not important.
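On Ubuntu, for example, the components can be installed from LINBIT's package repositories. The repository and package names below are those LINBIT commonly uses, but treat this as a sketch and check the repositories available for your distribution:

add-apt-repository ppa:linbit/linbit-drbd9-stack
apt update
apt install linstor-controller linstor-satellite linstor-client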

The packages install their files into the usual directory trees for each type of data: Configuration data ends up in /etc, variable data such as logs and reports in /var/log, and program libraries in /usr/share/linstor-server/lib. With command-line parameters or entries in the configuration file, LINSTOR also offers the option of installing the components in other paths.

By default, the LINSTOR controller relies on an integrated database, which is automatically initialized the first time the controller is started. For larger installations, you can also configure the controller to use a central SQL database, such as PostgreSQL, MariaDB, or DB2.
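If you choose PostgreSQL, for example, the controller is pointed at the database in its configuration file. The following snippet is only a sketch: It assumes the controller reads its settings from /etc/linstor/linstor.toml and that a database named linstor and a matching database user already exist:

[db]
  user = "linstor"
  password = "linstor"
  connection_url = "jdbc:postgresql://localhost/linstor"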

After installing from the distribution packages, start the components installed on the respective node with systemd:

systemctl start linstor-controller
systemctl start linstor-satellite
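To make sure both services also come back after a reboot, enable them as well:

systemctl enable linstor-controller
systemctl enable linstor-satellite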

Here, I kept the default LINSTOR configuration settings for the network communication of the individual components, the resource pools to be used, TCP/IP port ranges, minor number ranges, DRBD peer slots, and the like. Also, I will not be using SSL-encrypted connections, because the setup steps required to customize all of the options would go beyond the scope of this article.

Configuring the Cluster

Before you start creating your cluster configuration and storage pools in LINSTOR, it is important to understand some basic concepts, including the various objects and the logical dependencies between them. The most important of these objects are the node, network interface, storage pool definition and storage pool, resource definition and resource, and volume definition and volume. Each definition object manages information that is identical throughout the cluster (i.e., on all cluster nodes) for all objects in its category. For example, a volume definition contains information that is the same on all cluster nodes, such as the size of a volume replicated across multiple nodes. This principle results in dependencies between the objects: To be able to create an initial volume, you first need to create the required objects in a sequence that reflects these dependencies, as the sketch below illustrates.
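As a sketch of this sequence – using the hypothetical resource name rsc1 and the storage pool thinpool created later in this article – creating a replicated volume means first creating a resource definition, then a volume definition below it, and finally placing resources on the desired nodes:

resource-definition create rsc1
volume-definition create rsc1 20G
resource create romulus rsc1 --storage-pool thinpool
resource create remus rsc1 --storage-pool thinpool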

You can use the LINSTOR client to manage the controller manually for the initial configuration. To communicate with the active controller, the client needs the hostname or IP address of the cluster nodes on which a controller can run; set the LS_CONTROLLERS environment variable accordingly before you start the LINSTOR client. If you do not specify any further parameters, the client launches in interactive mode:

export LS_CONTROLLERS=192.168.133.11
linstor
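Alternatively, you can pass a complete command directly to the client, which is handy for scripting; for example, to list the registered cluster nodes:

linstor node list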

Typically, you will start the configuration work with the node objects that represent the individual cluster nodes of the storage cluster. When you create a node on the controller, you define the IP address (and the port, if necessary) that the controller uses to communicate with the satellite component on the respective cluster node. You can also define – especially for more clarity in the cluster – whether the cluster node runs a controller, a satellite, or both components. The name under which the cluster node is registered on the controller is, of course, also required. Ideally, this name should match the node name of the respective cluster node. You can determine this name with the uname -n command. The domain name can be omitted if it is not absolutely necessary to differentiate between the individual nodes:

node create --node-type combined romulus 192.168.133.11
node create --node-type satellite remus 192.168.133.12
node create --node-type satellite vulcan 192.168.133.13
node create --node-type satellite kronos 192.168.133.14

In addition to the required network interface, which you created along with the node, you can now add further network interfaces (Figure 1). If you use DRBD, for example, these additional interfaces let you distribute the replication links of different DRBD resources across separate interfaces:

node interface create romulus drbd 192.168.144.11
node interface create remus drbd 192.168.144.12
node interface create vulcan drbd 192.168.144.13
node interface create kronos drbd 192.168.144.14
Figure 1: Overview of all created nodes and network interfaces.
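You can retrieve a listing like the one in Figure 1 at any time from the interactive client:

node list
node interface list romulus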

After you have registered all the nodes, move on to creating the storage pools that are available on the nodes for automatic storage management. The storage pool definition is created automatically along with the first storage pool. Nevertheless, you should be aware that this storage pool definition exists, because deleting the last storage pool does not automatically delete the storage pool definition.

When you define a storage pool, you specify what type of pool it is (e.g., an LVM volume group or a ZFS zpool) and whether thick or thin provisioning is used. Depending on the type of pool, you specify the name of the volume group, the LVM thin pool, or the ZFS pool as a parameter for the LINSTOR storage pool driver. Of course, the storage pool object also has a unique name, which you can choose freely, observing the permitted characters and length restrictions.

The name DfltStorPool (Default Storage Pool) plays a special role, because the controller automatically selects this pool if you do not specify a storage pool in the various definition objects or in the resource or volume object when you create your storage resources later.
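Note that LINSTOR manages the storage pool but does not create the backing volume group or thin pool for you. The following sketch assumes an otherwise unused disk /dev/sdb on each node and creates the LVM thin pool drbdpool/thinpool that the next command refers to; adapt the device name and size to your environment:

pvcreate /dev/sdb
vgcreate drbdpool /dev/sdb
lvcreate -L 500G -T drbdpool/thinpool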

The command

storage-pool create lvmthin romulus thinpool drbdpool/thinpool

creates a storage pool named thinpool on the node romulus, using the LVM thin provisioning (lvmthin) driver and the thin pool thinpool in the volume group drbdpool as its backing storage (Figure 2).
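In a replicated setup, you would typically create the same pool on the remaining nodes as well and then check the result with storage-pool list:

storage-pool create lvmthin remus thinpool drbdpool/thinpool
storage-pool create lvmthin vulcan thinpool drbdpool/thinpool
storage-pool create lvmthin kronos thinpool drbdpool/thinpool
storage-pool list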

Figure 2: List of created storage pools.
