Power Budgeting in the Virtual Data Center

The virtualization revolution was partly driven by a need to save on energy costs, but further improvements won't happen unless system vendors give up the old paradigms for power budgeting.

One of the forces driving the trend toward data center virtualization is power usage. When IBM unveiled a server consolidation strategy in 2007, it chose the name “Project Big Green” to underscore the theme of environmental friendliness through power savings. And energy savings are still a popular argument for virtualization. Wouldn’t you rather pay the electric bill for a single server running several virtual systems instead of a whole old-fashioned server room?

Despite the green promises, however, the real savings from virtualization fall short of the potential. One reason is that many of the power-saving strategies on today’s server systems are still designed for conventional, hardware-based operations, without accounting for the benefits and challenges of virtualization.

According to a paper presented at the 2011 Usenix Annual Technical Conference, the solution is to develop a system for power management at the virtual machine level, tailored to the nuances of the virtual environment, rather than extending conventional techniques that focus on the hardware. In their paper “Power Budgeting for Virtualized Data Centers,” authors Harold Lim, Aman Kansal, and Jie Liu argue that “… current power budgeting methods enforce capacity limits in hardware and are not well suited for virtualized servers because the hardware is shared among multiple applications.”

Power budgeting techniques used in data centers, such as Dynamic Voltage and Frequency Scaling (DVFS), act on the server itself or, at most, on groups of processors within it. In today’s distributed environment, however, the VMs associated with a single application might run on multiple servers. The authors call for a solution that supports application-level and VM-tier-level granularity.
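To see why a server-level cap can’t express an application-level budget, consider a toy placement in Python (all server names, application names, and wattages below are illustrative assumptions, not figures from the paper):

```python
# Which VMs run where, and how much power each draws.
# A server cap (e.g., via DVFS) limits a whole box, but one
# application's VMs span servers that also host other applications.
placement = {
    "server1": [("app_A", 90), ("app_B", 60)],   # (application, watts)
    "server2": [("app_A", 110), ("app_C", 80)],
}

# Hardware-level caps apply per server...
server_caps = {"server1": 140, "server2": 170}

# ...but app_A's total draw spans both servers, so no single
# server cap corresponds to a budget for app_A alone.
app_power = {}
for server, vms in placement.items():
    for app, watts in vms:
        app_power[app] = app_power.get(app, 0) + watts

print(app_power["app_A"])  # 200 W, split across two capped servers
```

Capping server1 at 140 W throttles app_A and app_B together, even if only one of them is over budget; this is the granularity mismatch the authors point to.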

This technique, which the authors call Virtualized Power Shifting (VPS), is designed for data centers in which a single large application might run on hundreds of virtual machines. The VPS concept defines a multitiered approach to data center power management, with power control occurring at three different levels:

  • A data-center-level controller controls the total power usage for the data center.
  • Application-level controllers manage power usage for the applications running within the data center (each of which might include many VMs).
  • Multiple tier-level controllers manage the VMs within an application tier. A tier might contain a portion of the VMs allocated to a specific distributed application.

Fine-tuning the power settings at each of these levels gives the data center a wide range of options for controlling and optimizing power usage.
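The hierarchy can be sketched in a few dozen lines of Python. This is only a minimal illustration of the three-level idea: the class names, wattages, and the simple proportional-to-demand sharing policy are my assumptions, not the paper’s design (the actual VPS controllers are considerably more sophisticated):

```python
class TierController:
    """Caps the VMs within one application tier."""
    def __init__(self, name, demands):
        self.name = name
        self.demands = demands  # estimated demand (watts) per VM

    def apply_budget(self, budget):
        """Split the tier's budget across its VMs, scaled to demand."""
        total = sum(self.demands)
        if total <= budget:
            return list(self.demands)  # every VM gets what it asked for
        return [budget * d / total for d in self.demands]

class ApplicationController:
    """Shifts power among the tiers of one application."""
    def __init__(self, name, tiers):
        self.name = name
        self.tiers = tiers

    def demand(self):
        return sum(sum(t.demands) for t in self.tiers)

    def apply_budget(self, budget):
        total = self.demand()
        return {
            t.name: t.apply_budget(budget * sum(t.demands) / total if total else 0)
            for t in self.tiers
        }

class DataCenterController:
    """Divides the facility-wide budget across applications."""
    def __init__(self, budget, apps):
        self.budget = budget
        self.apps = apps

    def apply_budget(self):
        total = sum(app.demand() for app in self.apps)
        return {
            app.name: app.apply_budget(self.budget * app.demand() / total if total else 0)
            for app in self.apps
        }

# Example: a 1000 W facility, two applications, demand of 1210 W.
web = ApplicationController("web", [
    TierController("frontend", [120, 120, 120]),
    TierController("db", [200, 200]),
])
batch = ApplicationController("batch", [TierController("workers", [150, 150, 150])])
plan = DataCenterController(1000.0, [web, batch]).apply_budget()
```

Because demand (1210 W) exceeds the facility budget, every VM’s cap is scaled down proportionally and the per-VM caps sum back to exactly 1000 W; shifting unused headroom from an idle tier to a busy one would happen at the application-controller level.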

Techniques such as VPS are still at the research stage, but some big players are already involved in the quest for better power management on virtual systems. (Two of the authors of the VPS paper work for Microsoft.) You can expect to see power-saving technologies such as VPS reach real products in the next few years.

For more on power budgeting with VPS, see the full text of “Power Budgeting for Virtualized Data Centers” at the Usenix site: http://www.usenix.org/events/atc11/tech/final_files/Lim.pdf