Hyper-V with the SMB 3 protocol

Fast Track

RDMA and Hyper-V

Normally, every action in which a service such as Hyper-V sends data over the network – for example, a live migration – generates processor load, because the processor has to assemble and compute the data packets for the network; to do so, in turn, it needs access to the server's RAM. Once a packet is assembled, the processor forwards it to a buffer on the network card, where the packets wait until the card transmits them to the target server or client. The same process takes place in reverse when data packets arrive at the server. For large amounts of data, such as occur in the transfer of a virtual server during live migration, these operations are very time consuming and computationally intensive.

The solution to these problems is Direct Memory Access (DMA). Simply put, system components such as network cards read and write main memory directly, without involving the processor for each transfer, which offloads some of the work from the processor and significantly shortens queues. This approach, in turn, increases the speed of the operating system and of services such as Hyper-V.
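The benefit of avoiding extra copies can be shown in miniature with Python's `memoryview`, which references a buffer without copying it, whereas a normal slice allocates a new one. This is only a loose analogy for DMA, not anything resembling the Windows implementation:

```python
# Illustration only: zero-copy access to a buffer, loosely analogous to
# how DMA lets a device use memory without the CPU copying packets around.
data = bytearray(b"payload-to-transmit" * 1000)

# Copying slice: allocates a new buffer (extra work, like CPU-mediated I/O).
copied = bytes(data[0:7])

# Zero-copy view: references the same memory (like direct memory access).
view = memoryview(data)[0:7]

assert copied == b"payload"
assert view.tobytes() == b"payload"

# A change to the underlying buffer is visible through the view, not the copy.
data[0:7] = b"PAYLOAD"
assert view.tobytes() == b"PAYLOAD"
assert copied == b"payload"
```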

Remote Direct Memory Access (RDMA) extends this technology across the network: a server can send the contents of its RAM to another server, and Windows Server 2012/2012 R2 can directly access the memory of another server. Microsoft already built RDMA into Windows Server 2012 but improved it in Windows Server 2012 R2 with direct integration into Hyper-V.

Windows Server 2012/2012 R2 can automatically use this technology wherever two servers with Windows Server 2012/2012 R2 need to communicate on the network. RDMA significantly increases the data throughput on the network and reduces latency in data transmission, which also plays an important role in live migration.

In Windows Server 2012 R2, a cluster node can access the memory of another cluster node during a live migration and thus move a virtual server on the fly – and extremely quickly – in a live migration. On top of this, Hyper-V in Windows Server 2012 R2 supports live migration with data compression. This new technology and the RDMA function again accelerate Hyper-V on fast networks.

Another interesting feature in Windows Server 2012 R2 is Data Center Bridging (DCB), which implements technologies for controlling traffic on very large networks. If the network adapters support the Converged Network Adapter (CNA) function, access to data on iSCSI disks or via RDMA can be improved – even between different data centers. You can also limit the bandwidth that this traffic uses.
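The idea of capping the bandwidth a traffic class may consume can be sketched with a token bucket. Note this is a generic rate-limiting technique for illustration only; DCB itself relies on priority flow control and enhanced transmission selection in the adapter hardware:

```python
import time

class TokenBucket:
    """Simplified bandwidth limiter (token bucket). Illustrates the idea
    of capping a traffic class's throughput; real DCB enforces this in
    hardware via priority flow control and transmission selection."""
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s      # refill rate
        self.capacity = burst_bytes       # maximum burst size
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def try_send(self, nbytes):
        # Refill tokens according to elapsed time, then spend if possible.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

bucket = TokenBucket(rate_bytes_per_s=1000, burst_bytes=1500)
assert bucket.try_send(1500)       # the burst allowance is available
assert not bucket.try_send(1500)   # immediately afterward, over the limit
```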

For fast communication between servers based on Windows Server 2012 R2 – especially between cluster nodes – the NICs need to support RDMA. It makes sense to use this especially for very large amounts of data – for example, if you are using Windows Server 2012/2012 R2 as a storage server (i.e., as an iSCSI target) and store databases from SQL Server 2012/2014 there. To a limited extent, SQL Server 2008 R2 can also use this function, but Windows Server 2008 R2 and older versions of Microsoft SQL Server cannot.

Optimal Use of Hyper-V in a Network Environment

If you use multiple Hyper-V hosts based on Windows Server 2012 R2 in your environment, these hosts can, as mentioned previously, use the new multichannel function for parallel access to data. This technology is used, for example, if the virtual disks of your virtual servers reside not on the Hyper-V host but on network shares.

The approach speeds up the traffic between Hyper-V hosts and virtual servers and also protects virtualized services against the failure of a single SMB channel. To do this, you do not need to install a role service or change the configuration. All of these benefits are integrated out of the box in Windows Server 2012 R2.
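The gist of the multichannel idea – splitting one transfer across several parallel streams and reassembling it in order – can be sketched as follows. The chunking and the `send_over_channel` stand-in are purely illustrative and assume nothing about the real SMB 3 wire protocol:

```python
from concurrent.futures import ThreadPoolExecutor

def send_over_channel(channel_id, chunk):
    # Stand-in for transmitting one chunk over one SMB channel.
    return (channel_id, chunk)

def multichannel_transfer(payload: bytes, channels: int = 4) -> bytes:
    """Split a payload into per-channel chunks, 'send' them in parallel,
    and reassemble them in order -- the basic idea of SMB Multichannel."""
    size = -(-len(payload) // channels)  # ceiling division
    chunks = [payload[i * size:(i + 1) * size] for i in range(channels)]
    with ThreadPoolExecutor(max_workers=channels) as pool:
        results = list(pool.map(send_over_channel, range(channels), chunks))
    results.sort(key=lambda r: r[0])     # restore original order
    return b"".join(chunk for _, chunk in results)

assert multichannel_transfer(b"virtual-disk-data" * 100) == b"virtual-disk-data" * 100
```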

To make optimal use of these features, the network adapters must be fast enough. Microsoft recommends installing one 10Gbps adapter or using at least two 1Gbps adapters. You can also use the NIC teaming function in Windows Server 2012 R2 for this: Server Manager can group network adapters into teams, even if the drivers do not support this themselves. These teams also carry SMB traffic.

SMB Direct is also enabled automatically between servers running Windows Server 2012 R2. To use this technology between Hyper-V hosts, the installed network adapters must support RDMA and be extremely fast. Your best bets are iWARP, InfiniBand, or RDMA over Converged Ethernet (RoCE) cards.

Storage Migration

In Windows Server 2012 R2, you have the option of changing the storage location of virtual disks on Hyper-V hosts, and you can do this on the fly while the virtual server is running. To do so, right-click in Hyper-V Manager on the virtual server whose disks you want to move and select Move from the menu; the Move Wizard then appears.

In the wizard, choose Move the virtual machine's storage as the move type; then decide whether you want to move only the virtual server's configuration data or the virtual hard disks, too (Figure 1). Finally, select the folder in which you want Hyper-V to store the data for the computer. The virtual server continues to run during the process, and you can see the status of the operation in Hyper-V Manager (Figure 2). While the data is being moved, users are not cut off from the virtual server; the whole process takes place transparently.

Figure 1: Virtual hard disks on servers can be moved with the help of a wizard.
Figure 2: You can check the status when moving virtual disks in Hyper-V Manager.

Besides the configuration data, snapshots, and virtual disks, you can store smart paging files separately. Smart Paging is designed to prevent a virtual server from failing to start because the host's total available memory is already assigned. If you are using Dynamic Memory, the possibility exists that other servers on the host are using the entire memory.

The Smart Paging feature allows virtual servers to use part of the host's hard disk as memory for the reboot. Again, you can move this area separately. After a successful boot, the disk space is released again, and the virtual server obtains its memory through Dynamic Memory.
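The Smart Paging lifecycle described above – cover a temporary memory shortfall at boot with disk space, then release it – can be modeled in a few lines. The `Host` class and its methods are invented for this sketch and correspond to no real Hyper-V API:

```python
import os
import tempfile

class Host:
    """Toy model of Smart Paging: if not enough RAM is free to boot a VM,
    temporarily back the shortfall with a file on disk, then delete it
    once the guest is up and Dynamic Memory can rebalance."""
    def __init__(self, ram_mb):
        self.free_ram_mb = ram_mb
        self.paging_files = {}

    def boot_vm(self, name, needed_mb):
        from_ram = min(self.free_ram_mb, needed_mb)
        shortfall = needed_mb - from_ram
        self.free_ram_mb -= from_ram
        if shortfall > 0:
            # Disk space temporarily stands in for the missing RAM.
            f = tempfile.NamedTemporaryFile(prefix=f"{name}-smartpage-",
                                            delete=False)
            f.truncate(shortfall * 1024 * 1024)
            f.close()
            self.paging_files[name] = f.name

    def boot_finished(self, name):
        # After a successful boot, the paging file is released again.
        path = self.paging_files.pop(name, None)
        if path:
            os.unlink(path)

host = Host(ram_mb=512)
host.boot_vm("vm1", needed_mb=768)   # 256MB shortfall is paged to disk
assert "vm1" in host.paging_files
host.boot_finished("vm1")            # paging file is removed after boot
assert "vm1" not in host.paging_files
```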

