Lead Image © scyther5, 123RF.com

Hyper-V containers with Windows Server 2016

High Demand

Article from ADMIN 36/2016
The release of Windows Server 2016 also heralds a new version of Hyper-V, with improved cloud security, flexible virtual hardware, rolling upgrades of Hyper-V clusters, and production checkpoints.

Gone are the days when Hyper-V was trying to catch up with the basic functionality of VMware. With the release of Windows Server 2012 R2, Microsoft's hypervisor drew level with vSphere for most applications. The new version of Hyper-V in Windows Server 2016 therefore mainly offers improvements for demanding environments, such as improved snapshots (production checkpoints) and better security in the cloud.

Initially, Microsoft intended to release Windows Server 2016 simultaneously with Windows 10, but the company dropped this plan during the beta phase and moved Server 2016 to an unspecified date in the second half of 2016. The tests for this article were conducted before the final release and are based on Technical Preview 4.

New File Format for VMs

A few things quickly catch the eye when you look at the new Hyper-V: The Hyper-V role in Windows Server 2016 can only be activated if the server's processors support second-level address translation (SLAT). Previously, this function was only required for the client version of Hyper-V; now it is also required on the server, so older machines might have to be retired.
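Whether a host qualifies is easy to check before installing the role. The following commands are only a minimal sketch; the exact wording of the systeminfo output can vary between Windows builds, and Coreinfo is a separate Sysinternals download:

# Look for "Second Level Address Translation: Yes" under Hyper-V Requirements
systeminfo.exe | Select-String "Second Level Address Translation"

# Alternatively, Sysinternals Coreinfo shows EPT (Intel) or NPT (AMD) support
# coreinfo.exe -v

Note that once the Hyper-V role is active, systeminfo only reports that a hypervisor has been detected and omits the requirements section.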

When you create new virtual machines (VMs) in Hyper-V, the new file format becomes apparent. Thus far, Hyper-V has used XML files to store VM configurations, but the system now relies on binary files, which eliminates a convenient way of quickly reviewing a VM's configuration in the XML source. The official message from Microsoft is that the new format is much faster to process than XML files, which have to be parsed in a computationally expensive way in large environments. Rumor has it that support considerations were also decisive: All too often, customers manipulated the XML files directly, creating problems that were difficult for Microsoft's support teams to fix. With binary files, this problem is a thing of the past.
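Because the binary files can no longer be read in a text editor, PowerShell remains the scripted way to review a VM's settings. A brief sketch (the VM name is an example):

Get-VM -Name "SRV-APP01" | Select-Object Name, Version, Path
Get-VMMemory -VMName "SRV-APP01"
Get-VMNetworkAdapter -VMName "SRV-APP01" | Format-Table Name, SwitchName, MacAddress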

If administrators move existing VMs from a Hyper-V host running Windows Server 2012 R2 to a Windows Server 2016 host, the VMs initially remain unchanged. Previously, an implicit conversion always took place in such a scenario: If you moved a VM with Export and Import or by live migration from a Windows Server 2012 host to 2012 R2, Hyper-V quietly updated the configuration to the newer version, thus cutting off the way back to its original state.

The new Hyper-V version performs such adjustments only if the administrator explicitly requests it, allowing for "rolling upgrades" in cluster environments in particular: If you run a Hyper-V cluster with Windows Server 2012 R2, you can add new hosts directly with Windows Server 2016 to the cluster. The cluster then works in mixed mode, which allows all the VMs to be migrated back and forth between the cluster nodes (Figure 1). IT managers can thus plan an upgrade of their cluster environments in peace, without causing disruption to the users. Once all the legacy hosts have been replaced (or reinstalled), the cluster can be switched to native mode and the VMs updated to the new format. Although Microsoft indicates in some places on the web that the interim mode is not intended to be operational for more than four weeks, this is not a hard technical limit.

Figure 1: The new version of Hyper-V can operate multiple parallel virtual hardware versions, as the second column under Virtual Machines shows. This allows gradual cluster upgrades.
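The final steps of such a rolling upgrade can be scripted. The following is a rough sketch with an example VM name; Update-VMVersion is a one-way operation and requires the VM to be shut down:

# Which configuration versions does the new host support, and what do the VMs use?
Get-VMHostSupportedVersion
Get-VM | Format-Table Name, Version

# After the last legacy node has left the cluster, switch to native mode ...
Update-ClusterFunctionalLevel

# ... and then update the VMs explicitly
Get-VM -Name "SRV-APP01" | Update-VMVersion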

Flexible Virtual Hardware

Hyper-V in Windows Server 2016 brings some interesting innovations for the VM hardware. For example, it will be possible to add virtual network adapters or memory to a virtual machine during operation and to remove these resources again without interruption (Figure 2). Previously, Hyper-V could only take memory away from an active VM when the Dynamic Memory feature was enabled; the new release also makes this possible for VMs with statically assigned memory.

Figure 2: RAM and network cards can be added to a VM during operation, as well as removed. The diagram on the right shows how memory usage first shrinks and then grows again – the load rises and falls suddenly.
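In PowerShell, hot-adding and removing these resources might look like the following sketch (the VM, switch, and adapter names as well as the memory sizes are examples; adding and removing network adapters at runtime requires a Generation 2 VM):

Add-VMNetworkAdapter -VMName "SRV-APP01" -SwitchName "vSwitch-LAN" -Name "Backup"
Set-VMMemory -VMName "SRV-APP01" -StartupBytes 6GB

# Removing the resources again, without a reboot
Remove-VMNetworkAdapter -VMName "SRV-APP01" -Name "Backup"
Set-VMMemory -VMName "SRV-APP01" -StartupBytes 4GB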

Caution is nevertheless advisable: An application can crash if it suddenly has less memory. Hyper-V does not simply take away RAM that holds actively used data, but an application can still react badly if its developer never considered a scenario in which less RAM is suddenly available. Conversely, a program might not benefit from suddenly having more RAM: Exchange, for example, only checks how much memory is available when its services start and ignores any later changes.

A new and still experimental feature is actually quite old from a technology point of view: Discrete Device Assignment passes certain hardware devices of the host through to a VM, which then manages them directly. Although the basics of this technology existed in the first Hyper-V release from 2008, it only becomes directly usable with Windows Server 2016. However, you should not overestimate its abilities: Discrete Device Assignment does not bind just any old host hardware to a VM; rather, it lets a VM communicate directly with certain Peripheral Component Interconnect Express (PCIe) devices. For this to work, many components need to cooperate, including not just the drivers, but also the hardware itself.

Single Root I/O Virtualization (SR-IOV) network cards, which Hyper-V has supported since Windows Server 2012, were the first device class to use this technique. These special adapters can be split into multiple virtual network interface cards (NICs) at the hardware level, which can then be assigned exclusively to individual VMs. The new Non-Volatile Memory Express (NVMe) storage drives, a class of SSDs for enterprise use, are the second use case for which Microsoft is making the feature more widely available. High-end graphics cards will also become eligible in the future. In a blog post, the developers also write of (partly successful, partly failed) experiments with other devices [1].
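Assigning such a device is done in PowerShell. The following is only a rough sketch with example names; the device, its driver, and the platform firmware all have to cooperate, and a misstep can take the host down:

# Find the device and its PCIe location path
$dev = Get-PnpDevice -FriendlyName "Example NVMe Controller"
$loc = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# Detach it from the host and hand it to the VM
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $loc -Force
Add-VMAssignableDevice -LocationPath $loc -VMName "SRV-APP01"

# Remove-VMAssignableDevice and Mount-VMHostAssignableDevice reverse the process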

New Options in Virtual Networks

At the network level, the future Hyper-V comes with several detail improvements. vNIC Consistent Naming, for example, is more of a convenience feature: A virtual network adapter can be given a (meaningful) name at the host level, and it then shows up under that name in the guest operating system. This makes mapping virtual networks far easier in complex environments.
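Enabling the feature is a matter of one switch on the virtual adapter; inside the guest, the name appears as an advanced property of the NIC (the exact property label may vary by driver). A short sketch with example names:

# On the host: add an adapter with a meaningful name and device naming enabled
Add-VMNetworkAdapter -VMName "SRV-APP01" -SwitchName "vSwitch-Storage" -Name "Storage-A" -DeviceNaming On

# Inside the guest: read the name back
Get-NetAdapterAdvancedProperty -DisplayName "Hyper-V Network Adapter Name"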

Additions to the extensible virtual switch are more complex. This virtual switch has offered an interface for modules that expand its functionality since the previous generation. One new feature is primarily designed for the container technology (more on that later), but it is also useful in many other situations: A virtual switch can now serve as a network address translation (NAT) device. The new NAT module extends the virtual switch from pure Layer 2 handling to partial Layer 3 integration. With the new feature, the switch not only forwards network packets to the appropriate VM, it can also handle the IP address translation that is otherwise performed by routers or firewalls.

Through the integration of NAT, you can hide a VM's real IP address from the outside world. This also offers a simple way to achieve a degree of network isolation (e.g., for laboratory and test networks) without cutting off external traffic. Desktop virtualizers such as VMware Workstation and Oracle VirtualBox have long offered such functions. Because Hyper-V is also included in Windows 10 as a client, the new feature will come in handy there, too.
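The syntax for setting this up shifted between the preview builds. One variant that works in more recent builds combines an internal switch with the new NetNat module; the switch name and IP addresses below are merely examples:

New-VMSwitch -Name "NAT-Lab" -SwitchType Internal
New-NetIPAddress -IPAddress 192.168.254.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NAT-Lab)"
New-NetNat -Name "NAT-Lab" -InternalIPInterfaceAddressPrefix "192.168.254.0/24"

VMs attached to this switch then use 192.168.254.1 as their default gateway and reach external networks through the host's address.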

Another network innovation can be found at the other end of the virtual switch: its connection to the host hardware. Switch-Embedded Teaming (SET) lets you team multiple network cards directly at the level of the virtual switch. In contrast to earlier versions, you do not need to create the team on the host operating system first, making it far easier to automate such setups.

SET offers a direct advantage for environments with high demands on network throughput, because a SET team can also contain remote direct memory access (RDMA) network cards. These adapters allow a very fast exchange of large amounts of data and cannot be added to a legacy team. There are, however, disadvantages compared with the older technique; for example, all network cards in a SET team must be identical, which is not necessary for host-based teaming.
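Creating a SET team is a single step when the virtual switch is created. A brief sketch, with example adapter and switch names:

New-VMSwitch -Name "SET-vSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true

# Show team members and the load-balancing settings
Get-VMSwitchTeam -Name "SET-vSwitch"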
