Storage innovations in Windows Server 2016

Employee of the Month

Goodbye NTFS

New Technology File System (NTFS) has more or less reached the end of its useful life as a filesystem for storing data on Windows Server 2016 systems. Microsoft has announced the Resilient File System (ReFS) as the new default filesystem for virtualization workloads. By writing and verifying checksums, ReFS protects against logical errors and flipped bits ("bit rot"), even on very large volumes. The filesystem maintains the checksums itself, but you can also check the data on a volume and correct it if necessary. Operations such as checking and repairing, which previously required CHKDSK, now happen on the fly; there is no need to schedule downtime.
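If you want to see whether integrity streams (the checksums ReFS uses for data) are active for a particular file, the Storage module provides the Get-FileIntegrity and Set-FileIntegrity cmdlets. The following sketch assumes a hypothetical ReFS volume mounted at V: with a VHDX file stored on it:

# Check whether integrity streams are enabled for a VHDX on an ReFS volume
Get-FileIntegrity -FileName "V:\VMs\server01.vhdx"

# Enable integrity streams for the file if they are currently off
Set-FileIntegrity -FileName "V:\VMs\server01.vhdx" -Enable $true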

When using ReFS as storage for Hyper-V VMs, you will find some additional improvements. Creating VHDX files with a fixed size now takes only a few seconds. Unlike before, this operation does not first fill the volume with zeros; instead, a metadata operation in the background tags the entire space as occupied in a very short time. When merging Hyper-V checkpoints (known as snapshots in Windows Server 2012), there is no longer any need to move data. Merging checkpoints is now a metadata operation, so even checkpoints several hundred gigabytes in size can be merged very quickly. This dramatically reduces the burden on other VMs on the same volume.
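You can check the effect yourself with the Hyper-V PowerShell module. The following sketch assumes an ReFS volume mounted at V: (a hypothetical path) and simply times the creation of a fixed-size VHDX:

# Time how long it takes to create a 100GB fixed-size VHDX on an ReFS volume
Measure-Command {
    New-VHD -Path "V:\VMs\fixed-disk.vhdx" -SizeBytes 100GB -Fixed
}

On ReFS this completes within seconds; on NTFS the same command first has to zero out the full 100GB.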

Better Storage Management

In Windows Server 2012 R2, IT managers could only restrict individual VHD or VHDX files. This changes fundamentally with Storage Quality of Service (QoS) in Windows Server 2016, which significantly expands the options for measuring and limiting the performance of an environment. Thanks to the use of Hyper-V (usually in the form of a Hyper-V failover cluster with multiple nodes) and a scale-out file server, the entire environment can be monitored and controlled.

By default, the system ensures that a single VM cannot grab all resources for itself and thus paralyze the other VMs (the "noisy neighbor" problem). As soon as a VM is stored on the scale-out file server, logging of its performance begins. You can then retrieve these values with the Get-StorageQosFlow PowerShell cmdlet, which lists all VMs along with the measured values. These values can serve as a basis for adapting the environment, say, to restrict a VM.
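A minimal query might look like the following sketch, which sorts the flows by the IOPS measured on the storage node so the busiest VMs appear first (the exact property names may differ slightly between preview builds):

# List all monitored VM flows on the scale-out file server, busiest first
Get-StorageQosFlow |
    Sort-Object -Property StorageNodeIOPs -Descending |
    Format-Table InitiatorName, InitiatorNodeName, StorageNodeIOPs, Status, FilePath -AutoSize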

In addition to listing the performance of your VMs, you can configure rules that govern the use of resources. These rules regulate either individual VMs or groups of VMs by setting a limit or an IOPS guarantee. If you define, say, a limit of 1,000 IOPS for a group, all of its VMs together cannot exceed this limit; if five of six VMs are consuming virtually no resources, the sixth VM can claim the remaining IOPS for itself. Among other things, this scenario targets hosting providers and large environments that want to assign each customer the same performance or control performance to reflect billing. Within a storage cluster, you can define up to 10,000 rules that ensure the best possible operation of the cluster and avoid bottlenecks.
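Policies of this kind are created with the Storage QoS cmdlets on the scale-out file server cluster and then assigned to the virtual hard disks of the affected VMs. The sketch below assumes a group of tenant VMs whose names start with "Tenant" (a hypothetical naming convention) and uses an aggregated policy, in which all assigned disks share the limit; note that the policy type names have changed between preview builds:

# Create a shared (aggregated) policy: all assigned disks together are
# guaranteed 300 IOPS and may not exceed 1,000 IOPS
$policy = New-StorageQosPolicy -Name "TenantGroup" -PolicyType Aggregated `
    -MinimumIops 300 -MaximumIops 1000

# Assign the policy to every virtual hard disk of the tenant VMs
Get-VM -Name "Tenant*" |
    Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId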

Technically, Storage QoS is based on a Policy Manager in the scale-out file server cluster that is responsible for centrally monitoring storage performance. The Policy Manager can run on one of the cluster nodes; you do not need a separate server. Each node also runs an I/O Scheduler that handles communication with the Hyper-V hosts. A rate limiter also runs on each node; it communicates with the I/O Scheduler, receiving reservations or limits from it and implementing them (Figure 2).

Figure 2: The rate limiter runs on each node and communicates with the I/O scheduler.

Every four seconds, the rate limiters on the Hyper-V and storage hosts are updated, and the QoS rules are adapted if required. IOPS are counted as "normalized IOPS," which means each operation is accounted for in 8KB units: an operation smaller than 8KB still counts as one normalized IOPS, and a 32KB operation therefore counts as 4 IOPS.
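The accounting rule is simply a rounded-up division by 8KB. The following fragment only illustrates that arithmetic; it is not an actual Storage QoS API:

# Illustration only: convert an I/O of a given size to normalized IOPS
function Get-NormalizedIops {
    param([long]$IoSizeBytes)
    # Every started 8KB block counts as one normalized IOPS, minimum 1
    [math]::Max(1, [math]::Ceiling($IoSizeBytes / 8KB))
}

Get-NormalizedIops 4KB    # returns 1
Get-NormalizedIops 32KB   # returns 4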

Because monitoring takes place automatically when Windows Server 2016 is used as a scale-out file server, you can very quickly determine what kind of load your storage is exposed to and what resources each of your VMs requires. If you upgrade your scale-out file servers when the final version of Windows Server 2016 is released next year, this step alone, together with the improvements in the background, will help you optimize your Hyper-V/Scale-Out File Server (SOFS) environment.

If you do not use scale-out file servers and have no plans to change, you can still benefit from the Storage QoS functionality. According to Senthil Rajaram, a Microsoft program manager in the Hyper-V group, the feature is being introduced for all types of CSV volumes. This means you will be able to configure an IOPS limit or reservation even if you use an iSCSI or Fibre Channel (FC) SAN.

Organization is Everything

The currently available version of Storage Spaces offers no way to reorganize data across disks, a capability that would be useful after a disk failure, for example. If one disk fails, either a hot spare disk takes its place or the free space within the pool is used to repair the mirror (which is clearly preferable to one or more hot spare disks). When the defective medium is replaced, there is no way to redistribute the data so that the new disk is filled to the same level as the others.

In Windows Server 2016, you can trigger a reorganization with the Optimize-StoragePool cmdlet. In this process, the data within the specified pool is analyzed and rearranged so that each disk is filled to a similar level once the procedure completes. If another disk fails, all the remaining disks, together with the free storage space, are available for restoring the mirror.
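A minimal invocation might look like the following sketch; the pool name "Pool01" is a placeholder for your own pool, and Get-StorageJob lets you follow the progress of the rebalancing running in the background:

# Rebalance the data across all disks in the pool
Optimize-StoragePool -FriendlyName "Pool01"

# Follow the progress of the background rebalance job
Get-StorageJob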
