General guidelines for data centers

Use these general guidelines to set up your data center.

Refer to the latest ASHRAE publication, "Thermal Guidelines for Data Processing Environments", dated 2011. This document can be purchased online at https://www.ashrae.org/. A dedicated section outlines a detailed procedure for assessing the overall cooling health of the data center and optimizing for maximum cooling.

System and storage considerations

Most systems and storage products are designed to pull chilled air through the front of the system and exhaust hot air out of the back. The most important requirement is to ensure that the inlet air temperature at the front of the equipment does not exceed IBM® environmental specifications. See the environmental requirements in the system specifications or hardware specification sheets. Make sure that the air inlet and exit areas are not blocked by paper, cables, or other obstructions. When upgrading or repairing your system, do not exceed the maximum allowed time, if one is specified, for running the unit with the cover removed. After your work is completed, reinstall all fans, heat sinks, air baffles, and other devices according to the documentation.

Manufacturers, including IBM, report heat loads in a format suggested by the ASHRAE publication, "Thermal Guidelines for Data Processing Environments", dated 2011. Although this data is meant to be used for heat load balancing, care is required when using it to balance cooling supply and demand, because many applications are transient and do not dissipate heat at a constant rate. A thorough understanding of how the equipment and application behave with regard to heat load, including considerations for future growth, is required.

Room considerations

Data centers designed and built in the last 10 years are typically capable of cooling up to 3 kW of heat load per cabinet. These designs often involve raised-floor air distribution plenums 18 to 24 inches in height, room ceiling heights of 8 to 9 feet, and Computer Room Air Conditioning (CRAC) units distributed around the perimeter of the room. IT equipment occupies roughly 30-35% of the total data center space. The remaining space is taken up by white space (for example, access aisles and service clearances), power distribution units (PDUs), and CRAC units. Until recently, little attention has been given to heat load assessments, equipment layout and air delivery paths, heat load distribution, and floor tile placement and openings.

Assessing the total heat load of your installation

A total heat load assessment should be conducted to determine your overall environment balance point. The purpose of the assessment is to see if you have enough sensible cooling, including redundancy, to handle the heat load that you plan to install or have installed. There are several ways to perform this assessment, but the most common is to review the heat load and cooling in logical sections defined by I-beams, airflow blockages, or CRAC unit locations.
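
The following sketch (in Python) illustrates one way to tabulate such a section-by-section balance check. The section names, CRAC capacities, and heat loads are hypothetical examples, and the N+1 redundancy assumption is for illustration only.

```python
# Sketch: section-by-section balance of installed heat load against
# sensible cooling capacity. All names and numbers are hypothetical.

sections = {
    # section name: (installed heat load in kW, CRAC sensible capacities in kW)
    "northwest quadrant": (85.0, [105.0, 105.0]),
    "southeast quadrant": (140.0, [105.0, 105.0]),
}

N_REDUNDANT_UNITS = 1  # assume N+1: one CRAC per section may be offline

for name, (heat_load_kw, crac_kw) in sections.items():
    # Usable capacity excludes the largest unit(s) to preserve redundancy.
    usable_kw = sum(crac_kw) - N_REDUNDANT_UNITS * max(crac_kw)
    status = "OK" if usable_kw >= heat_load_kw else "UNDERCOOLED"
    print(f"{name}: load {heat_load_kw:.0f} kW, "
          f"usable cooling {usable_kw:.0f} kW -> {status}")
```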

Equipment layout and air delivery paths

Use the hot-aisle and cold-aisle arrangement that is explained in the ASHRAE publication, "Thermal Guidelines for Data Processing Environments", dated 2011. In the following figure, racks within the data center are arranged so that there are cold aisles and hot aisles. A cold aisle consists of perforated floor tiles separating two rows of racks. Chilled air exhausts from the perforated tiles and is drawn into the fronts of the racks. The inlets of each rack (the front of each rack) face the cold aisle. This arrangement allows the hot air exhausting from the rear of the racks to return to the CRAC units, minimizing the amount of hot exhaust air that circulates back into the rack inlets. CRAC units are placed at the ends of the hot aisles to facilitate the return of hot air to the CRAC unit and to maximize static pressure to the cold aisle.

Figure 1. Hot aisle and cold aisle arrangement

The key to heat load management of the data center is to provide inlet air temperatures to the rack that meet the manufacturer's specifications. Because the chilled air exhausting from the perforated tiles in the cold aisle may not satisfy the total chilled airflow required by the rack, additional flow will be drawn from other areas of the raised floor and may not be chilled. See the following figure. In many cases, the airflow drawn into the top of the rack, after the bottom of the rack has been satisfied, is a mixture of hot air from the rear of the system and air from other areas. For racks at the ends of a row, hot air that exhausts from the rear of the rack can migrate around the sides of the rack to the front. These flow patterns have been observed in actual data centers and in flow modeling.

Figure 2. Possible rack airflow patterns
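
As a rough illustration of this mixing effect, the following sketch estimates a rack's top-of-rack inlet temperature as a flow-weighted average of chilled tile air and recirculated exhaust. The function and all numbers are hypothetical, and the calculation ignores air density differences.

```python
# Sketch: estimate the inlet temperature at the top of a rack when the
# perforated tiles cannot supply all of the airflow the rack demands.
# The shortfall is assumed to be recirculated hot exhaust.

def mixed_inlet_temp_c(chilled_cfm: float, demand_cfm: float,
                       chilled_temp_c: float, exhaust_temp_c: float) -> float:
    """Flow-weighted mix of chilled tile air and recirculated exhaust."""
    chilled_fraction = min(chilled_cfm / demand_cfm, 1.0)
    return (chilled_fraction * chilled_temp_c
            + (1.0 - chilled_fraction) * exhaust_temp_c)

# A rack demanding 1,600 cfm fed by tiles delivering only 1,000 cfm:
print(mixed_inlet_temp_c(1000, 1600, 15.0, 35.0))  # ~22.5 C at the top
```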

For a data center that does not have optimal chilled-airflow distribution, the following figure gives guidance in providing adequate chilled airflow for a specific heat load. The chart takes into account worst-case locations in a data center and shows the requirements for meeting the maximum temperature specifications of most IBM high-end equipment. Altitude corrections are noted in the lower portion of the chart.

Figure 3. High-end equipment chilled airflow and temperature requirements
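
Figure 3 itself is specific to IBM high-end equipment, but the underlying sensible-heat arithmetic can be sketched generically. The following example uses the common approximation Q[BTU/hr] = 1.08 × CFM × ΔT[°F] together with a standard-atmosphere density correction for altitude; it is a rough estimate, not a reproduction of the chart.

```python
# Sketch: generic sensible-heat airflow estimate (not the exact curves
# in Figure 3). Thinner air at altitude carries less heat per cubic
# foot, so the required volumetric flow rises with elevation.

def required_cfm(heat_load_kw: float, delta_t_f: float,
                 altitude_ft: float = 0.0) -> float:
    btu_per_hr = heat_load_kw * 1000.0 * 3.412   # kW -> BTU/hr
    sea_level_cfm = btu_per_hr / (1.08 * delta_t_f)
    # Standard-atmosphere density ratio relative to sea level.
    density_ratio = (1.0 - 6.8754e-6 * altitude_ft) ** 4.2561
    return sea_level_cfm / density_ratio

# A 10 kW rack with a 20 F air temperature rise, installed at 5,000 ft:
print(round(required_cfm(10.0, 20.0, 5000.0)))  # ~1,830 cfm
```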

Heat load distribution

Increased performance capabilities and the accompanying heat load demands have caused data centers to have hot spots in the vicinity of heat loads that exceed 3 kW. Facility owners are discovering that it is becoming increasingly difficult to plan cooling schemes for large-scale deployments of high-heat-load equipment. Essentially, two different approaches can be undertaken for a large-scale, high-end system or storage deployment:

  1. Provide ample cooling for maximum heat load requirements across the entire data center.
  2. Provide an average amount of cooling across the data center with the capability to increase cooling in limited, local areas.

Option 1 is very expensive and is better suited to new construction. For option 2, a number of things can be done to optimize cooling in existing data centers and possibly raise the cooling capability in limited sections.

One recommendation is to place floor tiles with high percent-open and flow ratings in front of the high-end racks. Another recommendation is to provide special means for removing hot exhaust air from the backs of the high-end racks immediately, before it has a chance to migrate back to the air intakes on racks in other parts of the room. This could be accomplished by installing special baffling or direct ducting back to the air returns on the CRAC units. Careful engineering is required to ensure that any recommendation does not have an adverse effect on the dynamics of the underfloor static pressure and airflow distribution.

In centers where floor space is not an issue, it is most practical to design the entire raised floor to a constant level of cooling and to depopulate racks or leave greater distance between racks in order to meet the per-cabinet capability of the floor. For example, a 12 kW rack on a floor designed for 3 kW per cabinet consumes the cooling budget of four cabinet positions.

Floor tile placement and openings

Perforated tiles should be placed exclusively in the cold aisles, aligned with the intakes of the equipment. No perforated tiles should be placed in the hot aisles, no matter how uncomfortably hot; hot aisles are, by design, supposed to be hot. Placing open tiles in the hot aisle artificially decreases the return air temperature to the CRAC units, reducing their efficiency and available capacity. This phenomenon contributes to hot spot problems in the data center. Perforated tiles should also not be placed in close proximity to the CRAC units. In areas under the raised floor where air velocities exceed about 530 feet per minute, usually within about six tiles of the unit discharges, a Venturi effect can be created in which room air is pulled downward into the raised floor, the opposite of the preferred upward delivery of chilled air.
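
As a back-of-the-envelope illustration of this velocity check, the following sketch divides a CRAC unit's discharge airflow by the underfloor cross-section it flows through. The plenum dimensions and flow rate are hypothetical.

```python
# Sketch: check underfloor air velocity near a CRAC discharge against
# the ~530 ft/min threshold mentioned above. All numbers are examples.

CRAC_DISCHARGE_CFM = 12000.0   # airflow from one CRAC unit
PLENUM_DEPTH_FT = 1.5          # 18-inch raised floor
PATH_WIDTH_FT = 8.0            # width of underfloor path carrying the flow

VENTURI_THRESHOLD_FPM = 530.0

velocity_fpm = CRAC_DISCHARGE_CFM / (PLENUM_DEPTH_FT * PATH_WIDTH_FT)
if velocity_fpm > VENTURI_THRESHOLD_FPM:
    print(f"{velocity_fpm:.0f} ft/min: too fast -- keep perforated tiles "
          "farther from the CRAC discharge")
else:
    print(f"{velocity_fpm:.0f} ft/min: below the Venturi threshold")
```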

The volumetric flow capabilities of floor tiles with various percent-open ratings are shown in the following figure.

Figure 4. Volumetric flow capabilities of various raised floor tiles

Floor tiles in typical data centers deliver between 100 and 300 cfm. By optimizing the flow characteristics, it might be possible to realize flows as high as 500 cfm. Flow rates as high as 700-800 cfm per tile are possible with the highest percent-open tiles. Floor tiles must be aligned in the cold aisles with the intake locations on the equipment.
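
To translate these per-tile flow rates into tile counts, a simple division suffices, as the following sketch shows. The rack demand figure is a hypothetical example.

```python
import math

# Sketch: number of perforated tiles implied by a rack's airflow demand,
# using the per-tile delivery ranges quoted above.

def tiles_needed(rack_demand_cfm: float, per_tile_cfm: float) -> int:
    return math.ceil(rack_demand_cfm / per_tile_cfm)

# A rack demanding 1,600 cfm with typical 300 cfm tiles vs. optimized
# 500 cfm tiles:
print(tiles_needed(1600, 300))  # 6 tiles
print(tiles_needed(1600, 500))  # 4 tiles
```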

Openings in the raised floor that are not intended to deliver chilled air directly to the equipment in the data center should be completely sealed with brush assemblies or other cable-opening material (for example, foam sheeting or fire pillows). Other openings that must be sealed include holes in the data center perimeter walls, under the floor, and in the ceiling. Sealing all openings helps maximize under-floor static pressure, ensures optimal airflow to the cold aisles where it is needed, and eliminates short-circuiting of unused air to the CRAC unit returns.