Working with LINSTOR

Now that at least a minimal LINSTOR configuration is in place, you can define the first resources with their volumes and create them on one or more cluster nodes. It is usually a good idea to start with a simple configuration to check that all the components are working properly; therefore, the first example creates only a single local LVM volume. As a first step, create the resource definition, whose volumes will use only the storage layer, and choose LocalData as the name for the resource definition and thus also for its resources:

resource-definition create --layer-list storage LocalData

Now, add a single volume of 150MB to the resource definition named LocalData:

volume-definition create LocalData 150m

Finally, create a resource based on these definitions on the cluster nodes vulcan and kronos, using the thinpool storage pool for the volume of the respective resource:

resource create --storage-pool thinpool vulcan kronos LocalData

If these steps complete without error, the list of LVM logical volumes on cluster nodes vulcan and kronos should now contain an entry for a volume named LocalData_00000 (Listing 1). You can then format and mount this LVM logical volume in the usual way (see the example after Listing 1).

Listing 1

LVM Logical Volumes

vulcan ~ # lvs drbdpool
  LV               VG        Attr        LSize    Pool      Origin Data%  Meta%  Move
  LocalData_00000  drbdpool  Vwi-a-tz--  152.00m  thinpool  0.04
  thinpool         drbdpool  twi-aotz--  300.00m            0.02          10.94
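As mentioned, the LVM logical volume can be formatted and mounted in the usual way. A minimal example, assuming an ext4 filesystem and an arbitrary /mnt/localdata mount point (both are just illustrative choices):

vulcan ~ # mkfs.ext4 /dev/drbdpool/LocalData_00000
vulcan ~ # mkdir -p /mnt/localdata
vulcan ~ # mount /dev/drbdpool/LocalData_00000 /mnt/localdata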

The next example is a bit more sophisticated: This time, the resource has a volume that DRBD replicates with triple redundancy across three of the cluster nodes. When you create the resource definition, you therefore select the drbd and storage layers in the layer list and name the resource SharedData to reflect that its volume is shared across nodes:

resource-definition create --layer-list drbd,storage SharedData

Again, add a single volume of 150MB to this resource definition:

volume-definition create SharedData 150m

This time, leave the selection of the cluster nodes to the LINSTOR controller by specifying only the required redundancy for the --auto-place option:

resource create --storage-pool thinpool --auto-place 3 SharedData

The list of resources and volumes shows which nodes the controller selected for the replicated DRBD resource and which resources it created there automatically. In this case, the cluster nodes kronos, remus, and romulus were selected to provide the required triple redundancy of the DRBD-replicated volume. TCP/IP port 7000 was reserved for the DRBD resource's network communication, and the DRBD volume was assigned minor number 1000 (Figure 3), so the volume appears as the drbd1000 entry in the /dev directory. The actual storage space for the data is again provided by an LVM logical volume, which appears in the output of the lvs drbdpool command as SharedData_00000.

Figure 3: Lists of resources and volumes.
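The lists shown in Figure 3 can be reproduced at any time with the client's list commands, which are useful for inspecting the current cluster state:

resource list
volume list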

Now that storage management is automated, retroactively modifying existing storage resources is very easy. For example, you can migrate one of the existing replicas to another cluster node by first adding a fourth replica and then removing one of the original replicas. Provided you wait for the DRBD resync to finish before deleting, the triple redundancy of the volume is never compromised.

As an example, migrate one replica of the DRBD resource from the kronos node to the vulcan node. First add a replica on vulcan by assigning the LINSTOR resource to that node:

resource create -s thinpool vulcan SharedData
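Whether the resync has completed can be checked at the DRBD level on the node holding the new replica, for example, with the drbdadm tool from drbd-utils (this is a shell command, not a LINSTOR client command):

vulcan ~ # drbdadm status SharedData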

After DRBD has completed the resync, you can delete the replica on kronos :

resource delete kronos SharedData

To remove a resource from all cluster nodes permanently, you can also delete the resource definition of that resource directly; in this case, the resource is first deleted from all cluster nodes on which it was created:

resource-definition delete LocalData

The resource definition, including the volume definitions it contains, is automatically deleted only after all cluster nodes have reported a successful cleanup to the controller.

Working with Snapshots

For volumes located in a storage pool based on thin provisioning (i.e., currently, storage pools with the lvmthin and zfsthin drivers), LINSTOR also provides cluster-wide snapshot functionality, not only for local storage volumes, but also for storage volumes replicated by DRBD.
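A thin storage pool of this kind is registered during the initial configuration of the cluster. For the thinpool pool used in the previous examples, the call would look something like the following sketch, assuming a thin pool named thinpool in the drbdpool volume group (matching the thin pool visible in Listing 1) on node vulcan; adjust the driver and names to your setup:

storage-pool create lvmthin vulcan thinpool drbdpool/thinpool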

To create snapshots of replicated volumes that are identical on all cluster nodes involved, LINSTOR suspends I/O activity on the resource in question at the DRBD level. As a result, the data on the back-end storage volume used by DRBD does not change, so the individual cluster nodes can create their snapshots at slightly different times. I/O is not resumed at the DRBD level until the snapshot has been created on all of the cluster nodes involved.

Creating snapshots is similar to creating resources. However, you do not have to take the snapshot on all cluster nodes on which the resource exists; instead, you select the cluster nodes on which the snapshot will be created when you take it:

snapshot create romulus remus SharedData Snap1
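You can check on which nodes the snapshot was actually created with the client's snapshot list command:

snapshot list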

You can use a snapshot either to create a new resource based on the snapshot's dataset (snapshot resource restore) or to roll back the resource from which you took the snapshot to the snapshot version (snapshot rollback).

However, both actions can only be performed on the cluster nodes on which the snapshot is available. Restoring the snapshot to a new resource is the easier of the two. To do this, first create a resource definition and a matching volume definition for the new resource:

resource-definition create SharedData_Restore
volume-definition create SharedData_Restore 150m

You can then restore the snapshot to the new resource:

snapshot resource restore --from-resource SharedData --from-snapshot Snap1 --to-resource SharedData_Restore romulus remus

When restoring a snapshot, it is again possible to select a subset of the cluster nodes on which the snapshot is available.

Resetting a replicated resource from which a snapshot was taken to the snapshot version is a little more complicated if the snapshot is not available on all cluster nodes on which the resource was created. In this case, you first have to remove the resource from the cluster nodes where no snapshot is available:

resource delete vulcan SharedData

If the quorum tiebreaker feature is enabled, LINSTOR may keep the resource on that node as a diskless client resource without back-end storage so that it can act as a quorum tiebreaker. However, this tiebreaker resource can also get in the way of resetting the dataset. You can disable the tiebreaker feature for this resource by deleting the tiebreaker resource manually; to do so, simply repeat the resource delete command. The resource is then reset to the snapshot version with the command:

snapshot rollback SharedData Snap1

Of course, after resetting the dataset, further replicas of the replicated resource can be added to the cluster by creating the respective resource again on additional cluster nodes – which, as expected, requires a resync of the dataset:

resource create --storage-pool thinpool vulcan kronos SharedData

Snapshots are retained even if the original resource from which they were created is deleted. In the LINSTOR object hierarchy, snapshots are linked to the resource definition (Figure 4); therefore, you cannot delete a resource definition until you have removed all of its snapshots.

Figure 4: Status of resources after a snapshot restore and a resync.
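To remove the SharedData resource completely, you would therefore delete its snapshots first and then the resource definition. With the snapshot from the previous examples, the sequence would look something like this:

snapshot delete SharedData Snap1
resource-definition delete SharedData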

Integration with Virtualization and Container Platforms

More interesting for storage automation than manual operation of the storage cluster with the LINSTOR client is integration with the various platforms that need to provision storage volumes automatically. These include, on the one hand, popular virtualization platforms like OpenStack, OpenNebula, and Proxmox and, on the other, container-based platforms like Kubernetes.

LINSTOR can be integrated into these platforms by means of appropriate drivers so that, for example, when you create a new virtual machine (VM) in OpenNebula, the virtual system disk for this VM is automatically created according to a profile stored in LINSTOR. These profiles are known as resource groups and can be used to specify certain properties, such as which storage pool to use, the replica count for replication with DRBD, or the addition of a data deduplication layer.
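At the LINSTOR level, you can also create and use such a profile manually. The following lines are only a sketch with placeholder names (VmStorage for the resource group, vm1-disk1 for a resource spawned from it), again using the thinpool storage pool and two replicas:

resource-group create --storage-pool thinpool --place-count 2 VmStorage
volume-group create VmStorage
resource-group spawn-resources VmStorage vm1-disk1 10G

In essence, the platform drivers trigger the same kind of spawn operation whenever the platform requests a new volume.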

The respective resource definition, volume definitions, and corresponding resources are created automatically, and various options are set automatically as well. For most platforms, this means the resources can be matched to their consumers in the simplest possible way; for example, the name of the resource definition is chosen to match the name of the respective VM.

The drivers for the respective platforms are available from separate GitHub projects [2]; their names usually start with the prefix linstor- (e.g., linstor-proxmox and linstor-docker-volume).
