5.3. Virtualization and Power Management

Benefits of Virtualized Data Center Deployments 1/3

To motivate the role of virtualization in large-scale data centers, we consider the benefits that virtualization provides over managing purely physical infrastructures. In non-virtualized environments, the mapping of workloads to physical servers is predominantly a static optimization: each application is provisioned with enough capacity for its own peak demand. Even if idle resources can be placed into very low-power states, the costs of the hardware, as well as the costs of the power and cooling infrastructures it requires, are not recovered by productive computation. If, instead, applications are consolidated so that capacity is provisioned for the peak of their combined resource usage, rather than for the sum of their individual peaks, the overall capacity required, and with it the energy consumed, can be reduced.
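
The capacity argument can be illustrated with a small sketch. The demand traces below are hypothetical, invented for illustration; the point is only that the peak of the summed demand is typically lower than the sum of the individual peaks.

```python
# Two hypothetical application demand traces (e.g., % of one server's capacity
# sampled over time). Static provisioning sizes each application for its own
# peak; consolidated provisioning sizes for the peak of the combined demand.
app_a = [20, 55, 30, 10, 40]
app_b = [50, 15, 35, 60, 25]

sum_of_peaks = max(app_a) + max(app_b)                   # static provisioning
peak_of_sum = max(a + b for a, b in zip(app_a, app_b))   # consolidated

print(sum_of_peaks)  # 115
print(peak_of_sum)   # 70
```

Here consolidation needs 70 units of capacity instead of 115, because the two applications do not peak at the same time.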

To do this, however, we must have a means of treating data center capacity as a fungible pool of resources that can be dynamically allocated to applications at run time based on demand. Attaining true resource fungibility at data center scale is a challenging problem. Virtualization technologies provide a significant first step toward realizing this vision. Specifically, virtualization allows data center management systems to dynamically adapt resource allocations to guest virtual machines and move workloads between physical servers when necessary.

While similar functionality can be achieved without virtualization support, doing so requires substantial changes to existing application and operating system software. Therefore, by incorporating virtualization into data centers, we start to achieve the agility necessary for resource fungibility while preserving compatibility and support for existing software stacks. Observing the expanding demand for virtualization, multiple solutions have been developed and received significant adoption, including the open source Xen system, VMware's ESX hypervisor, and Microsoft's Hyper-V software.

Power Management Requirements for Virtualized Systems 2/3

Managing energy in server environments is often geared toward one of two goals:

  • To optimize the active power consumption of physical servers while maintaining acceptable quality of service for the hosted application; a typical approach is to scale processor voltage/frequency states based on processor utilization.
  • To treat power as a constraint via server power budgets. Such power capping can be used to provision additional physical servers within the limited power capacity of a data center. Here, the goal is to avoid stranding power by provisioning it based on typical (average) power consumption rather than on per-server peak usage. This type of power overcommitting can be performed safely as long as power-capping mechanisms are available to enforce consumption limits when necessary.
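
The overcommitting arithmetic can be sketched as follows. All of the wattages and the data-center budget below are assumed example values, not figures from the text.

```python
# Hypothetical sizing: how many servers fit under a fixed power budget when
# provisioning by peak draw versus by typical (average) draw. Overcommitting
# by typical draw fits more servers, but then a per-server power cap is needed
# to keep the aggregate within budget if many servers peak at once.
datacenter_budget_w = 100_000   # total power capacity (assumed)
server_peak_w = 500             # per-server peak draw (assumed)
server_typical_w = 300          # per-server typical draw (assumed)

servers_peak_provisioned = datacenter_budget_w // server_peak_w     # 200
servers_overcommitted = datacenter_budget_w // server_typical_w     # 333

# Per-server cap that makes the overcommitted deployment safe even in the
# worst case where every server tries to draw its maximum simultaneously.
safe_cap_w = datacenter_budget_w / servers_overcommitted
```

In this sketch, overcommitting hosts 333 servers instead of 200, at the cost of capping each server to roughly 300 W when the budget is contended.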

Both of these goals influence platform-level as well as distributed management policies.

At the platform level, whether optimizing power consumption during the execution of an application or enforcing power caps on a server, the system must carefully balance the requirements of the application with the performance/power characteristics of underlying power management states.

For example, considering dynamic voltage and frequency scaling of processors, policies may carefully toggle processor performance states to optimize energy consumption while meeting quality of service in terms of real-time execution constraints.
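
A minimal utilization-driven DVFS policy might look like the sketch below. The P-state table and the 80% headroom threshold are assumptions chosen for illustration, not values from the text.

```python
# Hypothetical P-state table (MHz) and headroom target: pick the lowest
# frequency at which the workload's demand stays below the headroom bound,
# so work still completes within its real-time constraint.
P_STATES_MHZ = [800, 1600, 2400, 3200]
F_MAX = max(P_STATES_MHZ)
HEADROOM = 0.8   # keep utilization at the chosen frequency below 80%

def pick_frequency(utilization_at_fmax: float) -> int:
    """utilization_at_fmax: fraction of F_MAX the workload currently needs."""
    required_mhz = utilization_at_fmax * F_MAX / HEADROOM
    for f in P_STATES_MHZ:            # states in ascending order: lowest wins
        if f >= required_mhz:
            return f
    return F_MAX                      # saturated: run at full speed

print(pick_frequency(0.10))  # -> 800
print(pick_frequency(0.55))  # -> 2400
```

Real governors (e.g., Linux's ondemand or schedutil) follow the same basic shape, with additional smoothing to avoid oscillating between states.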

In the case of physical servers, there is an implicit assignment of a single application to a platform, thereby allowing policies that are tuned for the application and hardware to drive the underlying power management performed by the operating system.

In virtualized systems, however, there is no direct way to apply this type of application-specific policy feedback, since each physical server hosts multiple, possibly heterogeneous, applications with varying power/performance trade-offs. Moreover, the management of physical hardware states is controlled by the hypervisor and management partition, which prevents guest virtual machines from directly toggling power states.

Managing power across distributed servers requires coordination across the many localized management entities on different platforms. For efficient distribution of power resources, this requires some level of interaction and coordination between servers so that, for example, when one system is not using its allocated power capacity, that capacity can be provisioned to others.

In the case of virtualized servers, each physical server hosts a set of virtual machines, adding an additional level of hierarchy to the system. Hence, where before distributed policies may have intelligently managed resources between physical machines, there must now be some additional level of awareness that allows for management across both physical platforms and the virtual instances they host. Realizing this goal requires coordination across the virtualization layers controlling each of the distributed physical platforms.
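
The two-tier coordination described above can be sketched as a pair of proportional allocations: a global policy splits the data-center budget across servers, and each server's virtualization layer splits its share across the VMs it hosts. All wattages below are invented example values.

```python
# Hierarchical power-budget coordination sketch (hypothetical demands, watts):
# tier 1 divides the data-center budget among servers in proportion to their
# aggregate VM demand; tier 2 divides each server's share among its VMs.
def proportional_share(budget, demands):
    total = sum(demands)
    if total == 0:
        return [0.0] * len(demands)
    return [budget * d / total for d in demands]

vm_demands = {"srv1": [120, 80], "srv2": [60, 40, 100], "srv3": [50]}
server_demand = {s: sum(vms) for s, vms in vm_demands.items()}

# Tier 1: data-center budget (360 W assumed) across servers.
server_budget = dict(zip(server_demand,
                         proportional_share(360.0, list(server_demand.values()))))

# Tier 2: each server's budget across its VMs.
vm_budget = {s: proportional_share(server_budget[s], vm_demands[s])
             for s in vm_demands}
```

A server whose VMs are underusing their allocation would report a lower demand in the next round, and the freed capacity flows to the other servers automatically; this is the kind of cross-layer awareness the text calls for.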

Bibliography 3/3

[1] Wu-chun Feng (Editor): The Green Computing Book: Tackling Energy Efficiency at Large Scale. Virginia Polytechnic Institute and State University, Blacksburg, USA.




The project "Cloud Computing – new technologies in the educational offer of Politechnika Wrocławska" (UDA.POKL.04.03.00-00-135/12) is implemented under the Human Capital Operational Programme, Priority IV: Higher education and science, Measure 4.3: Strengthening the teaching potential of universities in key areas in the context of the Europe 2020 Strategy goals, co-financed by the European Social Fund and the state budget.