Spring 2008 [Number 240]

Energy Efficient Equipment in the NIH Data Center

What does energy efficiency mean in a data center? It means saving costs, saving space, and saving power. That's it in a nutshell. But it's actually not that simple, because there are two parts to this equation. On the one hand, customers can increase their energy efficiency by using virtual servers instead of dedicated servers and blade servers instead of stand-alone servers. On the other hand, such measures can, paradoxically, also increase the demand on the power system because of the cooling issues involved.

One of a data center's main energy drains is the need to keep the servers consistently cool. This task is harder today because the newer servers, although more energy efficient, concentrate more heat in less space. Compact blade servers take up less room, but they pack a great deal of heat into a small area, which calls for greater cooling effort. Reducing the number of physical servers through virtualization means the remaining servers work harder and run hotter, which puts more demand on the cooling units. Striking the right balance between power density and cooling requires careful design and constant monitoring.

So the NIH Data Center, which is operated by CIT, is doing its part by upgrading the air conditioning systems and by installing sophisticated monitoring software that keeps a constant check on temperature, humidity, generators, leak detection equipment, power usage levels, hydrogen levels, and fire detection panels. And to make the humans as efficient as the equipment, these monitors can be read from any device that has a web connection.

Blade servers

Let’s look at the customer side of the equation. Customers can save power, space, and costs by using blade servers and virtual servers. Blade servers are small servers that are stacked into one cabinet. A single cabinet might hold up to 66 blades, depending on design and manufacturer. That obviously saves space, but it also saves power.

For example, an individual server will have at least two cables connecting it to the power supply. Sixty-six servers will have at least 132 cables. Those cables will run under the computer room floor and interfere with air flow, resulting in a hotter environment. With blades, however, the entire cabinet of blades will have only two cables, leaving more area for air flow under the floor. Better circulation of air means less power expended in cooling.

Also, today's blades are more efficient than the older models. Manufacturers have been working to make better cooling fans, resulting in blades that use less power. Hewlett Packard (HP), for example, has produced an efficient C-class blade to replace its older P-class blade. CIT is now procuring the more efficient C-class blades, and customer workloads will soon be deployed on these new servers, which will save power and space, resulting in lower costs.

Virtual servers

Virtual servers offer an even more significant savings in power and space. A virtual server is an operating system instance that runs on a physical server alongside several other such instances. Each operating system runs in isolation, allowing several systems to share the resources of the same hardware. The advantage is that a host server running virtual systems can operate at much higher utilization, unlike dedicated servers, which often use only a fraction of their capacity.

The Windows group of the CIT Hosting Services Branch (HSB) is already offering virtualization on Windows servers, using VMWare as the middleman between the hardware and the operating system. (See “Introducing CIT’s Windows Virtual Server Service” in issue 238 of Interface).

With VMWare, 96 virtual servers can run on the hardware that would normally house only four dedicated servers. The NIH Data Center itself took advantage of this virtualization opportunity and moved its own application, Aperture, onto a virtual server. For more information about VMWare, see the Data Center website's VMWare page.
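To see why those consolidation ratios matter, consider a rough back-of-the-envelope comparison. The short Python sketch below uses the 96-to-4 ratio from the example above; the wattage and cooling-overhead figures are assumptions chosen only for illustration, not measurements from the NIH Data Center.

# Rough, illustrative comparison of dedicated servers versus virtualization
# hosts. All figures are assumed round numbers, not NIH Data Center data.

DEDICATED_SERVERS = 96      # workloads that would each need their own machine
HOST_SERVERS = 4            # virtualization hosts carrying the same workloads
WATTS_PER_DEDICATED = 300   # assumed average draw of a lightly used server
WATTS_PER_HOST = 600        # assumed draw of a heavily used host
COOLING_OVERHEAD = 0.5      # assume 0.5 watt of cooling per watt of IT load

def total_power(servers, watts_each):
    """Return the IT load plus the cooling needed to remove that heat, in watts."""
    it_load = servers * watts_each
    return it_load * (1 + COOLING_OVERHEAD)

dedicated = total_power(DEDICATED_SERVERS, WATTS_PER_DEDICATED)
virtualized = total_power(HOST_SERVERS, WATTS_PER_HOST)

print(f"Dedicated servers: {dedicated / 1000:.1f} kW")
print(f"Virtualized hosts: {virtualized / 1000:.1f} kW")
print(f"Estimated savings: {1 - virtualized / dedicated:.0%}")

Under these assumptions the virtualized configuration draws only a few kilowatts where the dedicated servers would draw more than forty, which is why consolidation pays off in power as well as space.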

The Unix group of HSB is beginning an evaluation of a virtualized Solaris offering. Once the evaluation is complete, they will be contacting those customers who may be able to realize savings with the virtual servers. A future project will be to evaluate virtualized HP servers using an HP Itanium VM product.

Environmental improvements

Savings in power translate to a greener NIH Data Center. Selecting reduced-power solutions is the customer's contribution to energy efficiency. On the NIH Data Center's side of the equation, however, the more efficient and compact blade servers generate more heat in a smaller space, so the NIH Data Center has had to upgrade its cooling systems. To determine the best way to do this, the Data Center Operations Branch (DCOB) asked WFT Engineering, Inc., to evaluate all of its mechanical systems and make recommendations. Some of the changes that have been put in place as a result are:

Rearranging the servers:
Previously, server placement in the racks ignored which way the servers faced. Today, servers are placed in neatly ordered aisles with the backs of servers facing each other. Why? To help the Data Center’s air conditioning units (ACUs) work more efficiently.

Servers maintain workable temperatures by taking in cold air through the front and exhausting hot air through vents in the back. Placing the servers so that hot-air exhausts face each other (and cold-air intakes face each other) creates separate, alternating aisles of hot and cold air. This keeps the hot and cold air from mixing, establishes a better air flow (the hot air rises while the cold air stays low), and prevents the cold air from heating up before it reaches the servers' front air intakes. Areas of racks that don't have equipment are blocked off with blank plates to prevent leakage of hot air into the cold aisles.

Rearranging the floor tiles:
Perforated or vented tiles are placed in the cold aisles to channel the cooled air up through the floor and directly to the server intakes. To avoid wasting cold air, no vented tiles are used in the hot aisles where the servers exhaust hot air. Because the holes in the floor tiles that let cables reach the servers also act as unintentional vents, the NIH Data Center staff has carefully blocked all such holes in the raised floor by placing custom-shaped foam blocks around the protruding cables. This prevents cooling inefficiencies, since the cold air can't seep out into the hot aisles.

Upgrading the air conditioning units (ACUs):
Vintage ACUs from the 1970s were reaching the end of their life cycles and had been superseded by today's more efficient models. The NIH Data Center is now replacing the old units with 20-ton Computer Room Air Conditioning (CRAC) units from Liebert.

The new units work in conjunction with the server and floor tile arrangement, as shown in illustration 1 below.

Cold air travels underneath the raised floor and up through vents in the cold aisles. Hot air is vented from the back of the servers into the hot aisles, where it rises and circulates back to the CRAC units to be cooled.

Illustration 1: The CRAC units work with the hot aisle/cold aisle design to cool the Data Center efficiently

Environmental monitoring:
To make sure that the CRACs and the chilled water system are keeping the environment at optimal temperature and humidity, the NIH Data Center monitors them constantly. Using WebCTRL, created by OEMCtrl and licensed to Lee Technologies, the NIH Data Center gets continuous readouts from sensors placed at strategic locations in rooms 1100 and 2200, the customer service areas, and the penthouse electrical room.


Illustration 2: Main Data Center 1st floor: WebCTRL color-codes the UPS and CRAC units to show whether they are operating properly, are in alarm, or have lost communication. It also displays the temperature at each monitor and the status of the three generators.

WebCTRL also monitors the generators, the leak detection equipment, the power usage levels, the hydrogen levels, and the fire detection panels.

Several monitors in the NIH Data Center display the current environmental conditions. In addition, operators can check the conditions from a remote computer, a Blackberry, or any other device that has web access. If one of the sensors detects a faulty condition, such as a leak or a temperature outside the acceptable range, the system sends out an alarm.
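Behind the scenes, the alarm logic amounts to simple threshold checks on each sensor reading. The sketch below illustrates the general idea in Python; it is not the WebCTRL product or its interface, and the sensor names and acceptable ranges are assumptions made only for illustration.

# Generic illustration of out-of-range alarms on environmental sensors.
# This is not WebCTRL or its interface; the names and limits are assumed.

ACCEPTABLE_RANGES = {
    "temperature_f": (64.0, 80.0),   # assumed acceptable computer-room range
    "humidity_pct": (40.0, 60.0),    # assumed acceptable relative humidity
}

def check_reading(sensor, value):
    """Return an alarm message if a reading falls outside its range, else None."""
    low, high = ACCEPTABLE_RANGES[sensor]
    if value < low or value > high:
        return f"ALARM: {sensor} reading {value} is outside {low}-{high}"
    return None

# Example readings from two hypothetical sensors.
for sensor, value in {"temperature_f": 87.2, "humidity_pct": 45.0}.items():
    alarm = check_reading(sensor, value)
    if alarm:
        print(alarm)   # a real system would also page the operators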


Illustration 3: Generator 1: WebCTRL tracks 21 conditions for each generator, including oil pressure, coolant level, battery voltage, fuel level, and water temperature.

Federally mandated requirements

It is expected that the federal government will mandate greener equipment within the next few years. The government has already created the Electronic Product Environmental Assessment Tool (EPEAT) to guide consumers in selecting environmentally friendly equipment (see http://www.epeat.net). EPEAT helps purchasers evaluate computer equipment based on its environmental attributes. It provides 51 performance criteria for the design of products, divided into eight categories, including materials selection, design for end of life, longevity, energy conservation, corporate performance, and packaging. Using the EPEAT criteria, customers will be able to make intelligent purchase decisions based on exactly how green a product is.
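For a sense of how those criteria translate into a purchase decision, EPEAT divides its criteria into required and optional items and registers products at Bronze, Silver, or Gold: Bronze products meet all of the required criteria, Silver products also meet at least half of the optional criteria, and Gold products meet at least three quarters of them. The short Python sketch below encodes that tiering rule; the product figures in the example are made up.

# EPEAT tiering: Bronze = all required criteria met; Silver = Bronze plus at
# least 50% of the optional criteria; Gold = Bronze plus at least 75%.

def epeat_tier(meets_all_required, optional_met, optional_total):
    """Return the EPEAT tier for a product, or note that it cannot register."""
    if not meets_all_required:
        return "Not eligible for registration"
    share = optional_met / optional_total
    if share >= 0.75:
        return "Gold"
    if share >= 0.50:
        return "Silver"
    return "Bronze"

# A hypothetical desktop meeting all required criteria and 17 of 28 optional
# criteria (about 61%) would register as Silver.
print(epeat_tier(True, 17, 28))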

More information

You can find more information on energy conservation and federal guidelines through the NIH Environmental Management System (http://www.nems.nih.gov), or visit the Environmental Protection Agency's best practices website and data center partnership page to get started.

You may also want to join the NIH Greenserve list at GREENSERVE-L@LIST.NIH.GOV (see https://list.nih.gov/archives/greenserve-l.html for the archives). To join, go to https://list.nih.gov/cgi-bin/wa?SUBED1=greenserve-l&A=1 and register.

 