Policy-Based Power Management
Imagine this scenario: Your data center architecture has been
deliberately oversized to exceed periodic peak loads, but most servers
never approach their theoretical power and cooling requirements.
Those that do likely run hot at periodic intervals, while rows of
underutilized servers draw only a fraction of their allocated resources.
Virtualization has helped mitigate this inefficiency by reducing the
number of physical servers required in the data center. However,
significant additional savings are possible through optimization
techniques that reduce energy consumption by better balancing
resource requirements against workloads, while also addressing
environmental concerns about greenhouse gas emissions. After all,
power accounts for roughly 25 percent of typical data center
operating costs, and McKinsey estimates that, combined, today's data
centers emit as much carbon dioxide as all of Argentina.[12] For large, highly
virtualized data centers, policy-based power management schemes
can pay off quickly.
Five Approaches to Policy-Based Power Management
The following five usage scenarios optimize productivity per watt to reduce TCO in highly virtualized data centers. Together they monitor
and cap power in real time at the server, rack, zone, and data center levels, managing aggregated power consumption and migrating load
according to the available power and cooling resources.
1. Real-time server power monitoring
   Provides insight into how much power is actually consumed, so
   HVAC output can be scaled to the specific heat load rather than
   cooling to a theoretical maximum. Virtual machines (VMs) can be
   relocated from power-constrained systems to unconstrained systems
   within the cluster or across different clusters.
   Benefits: Manage data center hotspots; reduce the chance of
   hardware failure.
2. Rack density
   Maximize available compute resources for increased server density
   within the same overall power envelope per rack.
   Benefits: Reduce the tendency to overprovision power; optimize
   rack utilization in hosting data centers with per-customer power
   allocations.
3. Power load balancing
   Dynamically balance resources by building workload profiles and
   setting performance loss targets, so that actual performance can
   be matched against service level agreements (SLAs).
   Benefits: Utilize existing power more efficiently; meet SLA
   requirements more precisely.
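The scenarios above share a common control loop: measure power draw in real time, compare it against a policy cap, and act, for example by relocating VMs away from power-constrained hosts. The following Python sketch illustrates that loop with simulated readings in place of a real telemetry interface; the server names, wattages, and `rack_policy` function are illustrative assumptions, not a vendor API.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    power_w: float   # current measured draw, in watts
    cap_w: float     # per-server policy cap, in watts

def rack_policy(servers, rack_cap_w):
    """Evaluate a rack-level power policy.

    Returns (constrained, headroom_w): the servers exceeding their
    individual caps (candidates for VM relocation or throttling),
    and the remaining power headroom under the rack-level cap.
    """
    total_w = sum(s.power_w for s in servers)
    constrained = [s.name for s in servers if s.power_w > s.cap_w]
    headroom_w = rack_cap_w - total_w
    return constrained, headroom_w

# Simulated readings; a real deployment would poll node telemetry
# (e.g. via a management interface) on each control-loop iteration.
rack = [
    Server("node1", power_w=310.0, cap_w=300.0),  # over its cap
    Server("node2", power_w=180.0, cap_w=300.0),
    Server("node3", power_w=240.0, cap_w=300.0),
]

constrained, headroom_w = rack_policy(rack, rack_cap_w=900.0)
print(constrained)  # ['node1'] -> relocate VMs to node2 or node3
print(headroom_w)   # 170.0 watts of rack headroom remaining
```

In practice the same evaluation can be aggregated upward, with zone and data center caps applied to the summed rack totals, which is how a single policy engine manages power at all four levels described above.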
[12] Kaplan, James M., et al. "Revolutionizing Data Center Energy Efficiency," McKinsey & Company (July 2008).