Software-defined networking with Windows Server 2016

Virtual Cables

Faster Live Migration

Normally, the processor manages the control and movement of data on a server. Each time a server service such as Hyper-V sends data across the network (e.g., during live migration), the processor is burdened by the operation, which consumes computing capacity and time; above all, it takes resources away from the services that users access and that run on the virtual servers, because the processor has to assemble and compute the data packets for the network. To do so, it in turn needs access to the server's memory. Once a packet has been fully assembled, the processor directs it to a buffer on the network card, where it waits until the network card transmits it to the destination server or client. The same process takes place in reverse when data packets arrive at the server: When a packet reaches the network card, it is handed to the processor for further processing. These operations are time consuming and require considerable processing power for the large amounts of data incurred, for example, when transferring virtual servers during live migration.

The solution to these problems is called Direct Memory Access (DMA). Simply put, the various system components (e.g., network cards) can access memory directly to store data and perform calculations. This relieves pressure on the processor and significantly shortens queues and operations, which in turn increases the speed of the operating system and of server services such as Hyper-V.

Remote DMA (RDMA) extends this technology to the network: It allows memory content to be transmitted to another server on the network and lets a server access another server's memory directly. Microsoft had already integrated RDMA into Windows Server 2012, improved it in Windows Server 2012 R2, and built it directly into Hyper-V.

The technology is expanded again in Windows Server 2016; above all, its performance is improved. The new Storage Spaces Direct can also use RDMA and even presupposes such network cards (see the "SDN for Storage Spaces Direct" box). Additionally, SET, the new method for teaming network adapters in Hyper-V, supports RDMA. Windows Server 2012/2012 R2 can use the technology automatically when two such servers communicate on the network. RDMA significantly increases data throughput on the network, reduces latency when transmitting data, and also plays an important role in live migration.
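Whether your hardware can actually take advantage of RDMA is easy to check with a few standard PowerShell cmdlets. The following lines are only a quick sketch; the output depends entirely on your drivers and hardware:

# List adapters that are RDMA-capable and show whether RDMA is enabled
Get-NetAdapterRdma

# Check whether SMB sees the interfaces as RDMA-capable and whether existing
# SMB multichannel connections are actually using RDMA (SMB Direct)
Get-SmbClientNetworkInterface
Get-SmbMultichannelConnection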

SDN for Storage Spaces Direct

The SDN functions in Windows Server 2016 are optimized for use with Storage Spaces Direct. With this technology, the local data storage of Windows Server 2016 servers can be combined into a virtual storage pool that is available to all nodes in the cluster and can also be used for storing virtual servers and their data. Communication takes place over the network in this case, so all components should support the new functions and connect to them optimally. To supplement this feature, a new Windows Server 2016 capability, Storage Replica, replicates entire disks to other data centers. It is ideal, for example, for geo-clustering, which also uses the new SDN functions of Windows Server 2016.
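To give a rough idea of how this fits together, the following sketch assumes an existing failover cluster (called HVCLU01 here) whose nodes have suitable local disks; the pool, volume name, and size are purely illustrative:

# Enable Storage Spaces Direct on the current cluster (run on a cluster node)
Enable-ClusterStorageSpacesDirect

# Create a cluster volume in the resulting pool; the pool created by
# Enable-ClusterStorageSpacesDirect is typically named "S2D on <cluster name>"
New-Volume -StoragePoolFriendlyName "S2D on HVCLU01" -FriendlyName "VMStore" -FileSystem CSVFS_ReFS -Size 1TB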

Also interesting is Data Center Bridging (DCB), which controls data traffic in very large networks. If the network adapters used offer Converged Network Adapter (CNA) functionality, traffic to iSCSI disks or over RDMA can be handled more efficiently – even between different data centers. Moreover, you can limit the bandwidth that this technology uses.
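What configuring DCB looks like in practice can be sketched in a few PowerShell lines that reserve bandwidth for SMB Direct (RDMA) traffic. The priority value, the percentage, and the adapter name are assumptions you would adapt to your own switches and NICs:

# Install the DCB feature and tag SMB Direct (NetworkDirect, port 445) traffic with priority 3
Install-WindowsFeature -Name "Data-Center-Bridging"
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable priority flow control for that priority and reserve 50 percent of the bandwidth for it
Enable-NetQosFlowControl -Priority 3
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply the DCB settings to the physical adapter (example name)
Enable-NetAdapterQos -Name "net01"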

For fast communication between servers based on Windows Server 2016 – especially between cluster nodes – the servers' network cards must support RDMA, which is particularly worthwhile for very large amounts of data. In this context, Windows Server 2016 also improves the cooperation between different physical network adapters. In Server 2012 R2, adapters could already be grouped into teams in Server Manager; this is no longer necessary in Windows Server 2016 because, as briefly mentioned earlier, SET lets you assign multiple network adapters when you create a virtual switch. You need SCVMM or PowerShell to do this. In this way, you can combine up to eight uplinks, including dual-port adapters, in a single virtual switch.

In Server 2016, you use RDMA-capable network adapters that are grouped together directly in a Hyper-V switch on the basis of SET. The virtual network adapters of the individual hosts access the virtual switches directly, and the VMs can also access the virtual switch directly and take advantage of the power of the virtual Hyper-V switch with SET (Figure 5).

Figure 5: In Windows Server 2016, you can combine different network adapters into a team within a virtual switch.

SET in Practice

The PowerShell command used to create a SET switch is pretty simple:

New-VMSwitch -Name SETswitch -NetAdapterName "net01","net02" -AllowManagementOS $true

If you don't want the management operating system to get a virtual network adapter on the switch, use the -AllowManagementOS $false option instead. Once you have created the virtual SET switch, you can retrieve its information with the Get-VMSwitch command, which also shows the different adapters that are part of the virtual switch. More detailed information can be found with the

Get-VMSwitchTeam <Name>

command, which displays the virtual switch settings (Figure 6). If you want to delete the switch again, use Remove-VMSwitch in PowerShell. Microsoft provides a guide [10] you can use to test the technique in detail.

Figure 6: Retrieving information about SET in PowerShell.
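If the hosts themselves are to use RDMA through the SET switch, for example for live migration or SMB Direct traffic, you can also add virtual network adapters for the management operating system and enable RDMA on them. The adapter names below are only examples:

# Add two host (management OS) vNICs to the SET switch created above
Add-VMNetworkAdapter -SwitchName "SETswitch" -Name "SMB_1" -ManagementOS
Add-VMNetworkAdapter -SwitchName "SETswitch" -Name "SMB_2" -ManagementOS

# Enable RDMA on the resulting vEthernet adapters
Enable-NetAdapterRdma -Name "vEthernet (SMB_1)","vEthernet (SMB_2)"

# Optionally map each vNIC to one physical team member
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB_1" -PhysicalNetAdapterName "net01"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB_2" -PhysicalNetAdapterName "net02"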

Because SET supports RDMA, the technology is also cluster-capable and significantly increases the performance of Hyper-V on the network. The new switches support RDMA, as well as Server Message Block (SMB) Multichannel and SMB Direct, which is activated automatically between servers running Windows Server 2016. For this technology to be used between Hyper-V hosts, the built-in network adapters – and, of course, the teams and virtual switches – need to support RDMA. The network needs to be extremely fast for these operations to work optimally. Adapters of the iWARP, InfiniBand, and RDMA over Converged Ethernet (RoCE) types are best suited.

Hyper-V can make even better use of the SMB protocol in Windows Server 2016 and can, in this way, offload data from the virtual servers onto the network. The idea behind the technology is that companies don't store the virtual disks directly on the Hyper-V host or on storage media from third-party manufacturers, but instead on a network share of a server running Windows Server 2016, possibly also with Storage Spaces Direct. Hyper-V can then access this share very quickly with SMB Multichannel, SMB Direct, and Hyper-V over SMB. High-availability features such as live migration also use this technology. The shared cluster disk no longer needs to reside in an expensive SAN; instead, a server – or better, a cluster – with Windows Server 2016 and sufficient disk space is enough. The configuration files of the virtual servers and any existing snapshots can also be stored on this server or cluster.
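As a small illustration of Hyper-V over SMB, assume a file server share \\SMBSRV\VMs on which the Hyper-V hosts have full control. A new virtual machine, including its virtual disk, can then be created directly on the share; names and sizes here are examples:

# Create a VM whose configuration files and VHDX live on the SMB share
New-VM -Name "VM01" -Generation 2 -MemoryStartupBytes 4GB -Path "\\SMBSRV\VMs" -NewVHDPath "\\SMBSRV\VMs\VM01\VM01.vhdx" -NewVHDSizeBytes 80GB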

Conclusions

Microsoft now provides a tool for virtualizing network functions, in addition to servers, storage, and clients. The standard Network Controller in Windows Server 2016 allows numerous configurations that would otherwise require paid third-party tools and promises to become a powerful service for the centralized management of networks. However, the service is only useful if your network is based completely on Windows Server 2016, including Hyper-V. Setting up the service and connecting the devices certainly won't be simple, but companies that meet this requirement will benefit from the new service, including the parallel use of IPAM, which can also be connected to the Network Controller. Together with Hyper-V network virtualization and SET, networks can be designed much more efficiently and clearly in Hyper-V. Additionally, the performance of virtual servers increases, as does the speed of transfers during live migration.

System Center 2016 is required so that SCVMM and Operations Manager can make full use of and monitor these functions. However, this doesn't make network management easier, because the Network Controller and System Center also need to be mastered and maintained.
