
Saving energy one computer at a time


Associate Professor Xiaorui Wang
Fueled by the increasing popularity of online services—from online banking to video sharing—and cloud computing, the number and size of computer data centers continue to grow. As the number of data centers increases, so does the amount of energy needed to power them. In addition to high electricity bills and significant environmental implications, increased power consumption may lead to system failures caused by power capacity overload or overheating. With the cost to power data centers now outpacing the cost to equip them, Xiaorui Wang, associate professor of electrical and computer engineering, sees an enormous opportunity to improve both the efficiency and the reliability of data centers.

Consisting of thousands to hundreds of thousands of servers, data centers accounted for roughly two percent of the total energy consumed nationwide in 2010, according to an analysis by Jonathan Koomey, a research fellow at Stanford University. In the race to build these centers, speed and functionality have taken precedence over efficiency.

“US data centers had an annual energy bill of $8.4 billion in 2011. We hope to cut it by half, or more,” says Wang. “That’s a big potential impact.”

Wang and his research team focus on power and thermal management for all classes of computers—from desktops and laptops to smartphones to data centers—in order to make them much more energy efficient and reliable.

Data centers are a particularly good example of just how much room there is for improvement. Servers consume too much power, even while idling. Cooling accounts for 40 to 50 percent of a data center's power, and temperature set points must be kept extra low because there is no good mechanism to monitor and correct hot spots. In addition, 7 to 18 percent of energy is lost during power conversion.
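
As a rough back-of-envelope illustration of those figures, the sketch below splits a facility's annual energy use into cooling, conversion losses, and the remainder that reaches the IT equipment. The midpoint percentages and the 10 GWh total are illustrative assumptions, not measured data.

```python
# Back-of-envelope split of a data center's annual energy, using midpoints
# of the percentages quoted above. All numbers are illustrative assumptions.

def energy_breakdown(total_kwh, cooling_frac=0.45, conversion_loss_frac=0.12):
    """Split total facility energy into cooling, power-conversion losses,
    and the remainder that actually reaches the IT equipment."""
    cooling = total_kwh * cooling_frac
    conversion = total_kwh * conversion_loss_frac
    it_load = total_kwh - cooling - conversion
    return {"cooling": cooling, "conversion loss": conversion, "IT load": it_load}

if __name__ == "__main__":
    # 10 GWh/year is an arbitrary illustrative figure for a mid-size facility.
    for component, kwh in energy_breakdown(10_000_000).items():
        print(f"{component:>15}: {kwh:,.0f} kWh")
```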

“The three major components in data centers are all wasting too much energy, so we are looking at each of them to make the entire data center consume much less power,” says Wang.

Wang first became inspired to conduct research in this area during a summer internship at IBM Austin Research Lab in Texas.

“Together with my mentors at IBM, we designed a power-control algorithm that has made a real impact on IBM’s server business,” explains Wang. “I was inspired by that real impact, on a real industry, and decided that this area is something very important that I can definitely contribute to.”

The server-level control loop that Wang developed that summer was adopted as the basis of the power-capping feature used in the IBM Active Energy Manager product. It provides power control for 13 IBM server models and is now shipped in more than a billion dollars’ worth of IBM servers every year.

Since then, Wang’s group has developed a complete hierarchical power-control framework for data centers that addresses each layer, from an individual microprocessor, to a server, to a server rack, to the entire data center.

“The power control framework is the most important research outcome of our lab,” he explains.
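
To give a flavor of what feedback-based power capping involves, here is a minimal, generic sketch: a proportional controller nudges a server's CPU frequency so that its power stays near a budget. This is not Wang's or IBM's actual algorithm; the toy power model, gain, and budget below are illustrative assumptions standing in for real sensors and platform DVFS interfaces.

```python
# Generic sketch of a proportional power-capping loop. All constants and the
# power model are illustrative assumptions, not any vendor's real algorithm.

POWER_BUDGET_W = 250.0           # per-server power cap (example value)
KP = 0.005                       # proportional gain: GHz adjusted per watt of error
FREQ_MIN, FREQ_MAX = 1.2, 3.0    # allowed CPU frequency range in GHz

def measured_power(freq_ghz: float, utilization: float) -> float:
    """Toy power model: idle power plus a dynamic term that grows with
    frequency and load. A real controller would read a power sensor instead."""
    return 120.0 + 80.0 * utilization * freq_ghz

def power_cap_step(freq_ghz: float, utilization: float) -> float:
    """One control period: compare measured power with the budget and nudge
    the frequency proportionally to the error, clamped to the legal range."""
    error = POWER_BUDGET_W - measured_power(freq_ghz, utilization)
    return min(FREQ_MAX, max(FREQ_MIN, freq_ghz + KP * error))

if __name__ == "__main__":
    freq = FREQ_MAX
    for period in range(10):     # simulate ten control periods at 90% load
        freq = power_cap_step(freq, utilization=0.9)
        watts = measured_power(freq, 0.9)
        print(f"period {period}: freq={freq:.2f} GHz, power={watts:.1f} W")
```

In a hierarchical framework of the kind described above, a similar budget-tracking idea is applied at each level, with a higher-level controller dividing its budget among the units below it.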

Today, Wang and his team of 10 PhD students are working on multiple research projects funded by the National Science Foundation (NSF), the Office of Naval Research (ONR), Microsoft Research, and IBM. He is the principal investigator for five ongoing NSF projects to design and develop power/thermal control and energy optimization algorithms at different levels. The levels range from multi-core microprocessors, to virtualized servers, to a single data center, up to a network of cloud-scale data centers deployed across the world. Various applications are also being considered, such as multi-tier web services, real-time task scheduling, networked embedded applications, and database management systems. Wang also manages two ONR-funded projects to develop energy-efficient embedded systems and real-time sensor applications.

Wang’s research achievements have been widely recognized. His paper on hierarchical power control for large-scale data centers was featured as the spotlight paper of the January 2012 issue of IEEE Transactions on Parallel and Distributed Systems. Wang also received the ONR Young Investigator Award in 2011, the NSF CAREER Award in 2009, the Power-Aware Computing Award from Microsoft Research in 2008, and the IBM Real-Time Innovation Award in 2007.

Improving the energy efficiency and thermal management of smartphones is another of Wang’s key research areas. Better batteries will not be a miracle cure for the energy bottleneck of smartphones, he says. Considering energy and thermal management early in the design, however, will be key.

“We hope to help make the battery lifetime of smartphones at least as long as that of traditional cell phones, or even longer,” says Wang.

This article is reprinted from the Electrical & Computer Engineering 2011-2012 Annual Report

Categories: Faculty, Research