General guidelines for data centers

Use these general guidelines to set up your data center.

Refer to the latest ASHRAE publication, "Thermal Guidelines for Data Processing Environments", dated January 2004. This document can be purchased online at ashrae.org. A dedicated section outlines a detailed procedure for assessing the overall cooling health of the data center and optimizing it for maximum cooling.

Server and storage considerations

Most IBM® servers and storage products are designed to pull chilled air through the front of the server and exhaust hot air out of the back. The most important requirement is to ensure that the inlet air temperature at the front of the equipment does not exceed IBM environmental specifications. See the environmental requirements in the server specifications or hardware specification sheets. Make sure that the air inlet and exit areas are not blocked by paper, cables, or other obstructions. When upgrading or repairing your server, do not exceed the maximum allowed time, if one is specified, for running the unit with the cover removed. After your work is completed, reinstall all fans, heat sinks, air baffles, and other devices per IBM documentation.

Manufacturers, including IBM, report heat loads in the format suggested by the ASHRAE publication, "Thermal Guidelines for Data Processing Environments", dated January 2004. Although this data is meant to be used for heat load balancing, care is required when using it to balance cooling supply and demand, because many applications are transient and do not dissipate heat at a constant rate. A thorough understanding of how the equipment and application behave with regard to heat load, including considerations for future growth, is required.
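
As a simple illustration of that caution, the following Python sketch contrasts average and peak heat load for a hypothetical rack. Every number in it is invented for illustration and is not taken from any ASHRAE table or IBM specification sheet.

    # Hypothetical sketch: why transient heat loads matter when balancing
    # cooling supply and demand. All values are illustrative assumptions.

    # Measured heat load samples (kW) for one rack over a workload cycle.
    samples_kw = [2.1, 2.3, 4.8, 4.9, 2.2, 2.0, 4.7, 2.1]

    average_kw = sum(samples_kw) / len(samples_kw)
    peak_kw = max(samples_kw)

    print(f"Average heat load: {average_kw:.1f} kW")
    print(f"Peak heat load:    {peak_kw:.1f} kW")

    # Sizing cooling to the average would undersupply this rack whenever
    # the application runs at peak; plan to the peak plus growth headroom.
    growth_factor = 1.2  # assumed 20% allowance for future growth
    print(f"Planning value:    {peak_kw * growth_factor:.1f} kW")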

Rack or cabinet considerations

Note: Rack is used throughout this section to also mean cabinet, frame, and any other commonly used term for the unit that houses rack-mounted equipment.

IBM Enterprise 19-inch racks are designed to allow maximum airflow through the equipment installed in the rack. Chilled air is pulled through the front and exhausted through the rear by the fans in the rack-mounted equipment. Most IBM racks come with a perforated rear door and an optional perforated front door. Some racks have optional acoustical treatment to reduce noise emissions from the rack. If non-IBM racks are used, solid doors or doors with significant amounts of decorative glass are not recommended, because they do not allow sufficient air to flow into and out of the rack.

Recirculation of hot air exiting the back of the rack into the front of the rack must be eliminated. Several actions can be taken to prevent air recirculation. First, filler or blanking panels must fill all rack space that is not occupied by equipment. 1U and 3U filler panels are used to block air recirculation within the rack. If you do not have filler panels installed in your rack, they are available from IBM.

External components, such as a stand-alone USB DVD drive, can be placed outside of the system rack, either on an open design shelf or on an open design cart nearby. An open design means that the cart or shelf does not have any solid sides, which can impact the overall airflow near the external component, the system rack, or both.

Figure 1. 1U and 3U filler panel figure and part numbers
Table 1. Parts

Index number   FRU part number   Units per assembly   Description
1              97H9754           As needed            1U Filler snap (black)
               62X3443           As needed            1U Filler snap (white)
2              97H9755           As needed            3U Filler snap (black)
               62X3444           As needed            3U Filler snap (white)
3              12J4072           As needed            1U Filler snap (black)
4              12J4073           As needed            3U Filler snap (black)
5              74F1823           2 per Item 3         M5 Nut clip
               74F1823           4 per Item 4         M5 Nut clip
6              1624779           2 per Item 3         M5 X 14 Hex flange
               1624779           4 per Item 4         M5 X 14 Hex flange

Second, allow proper operating clearance around all racks. See the clearance requirements in the server specifications or hardware specification sheets. The floor layout should not allow the hot air exhaust from the back of one rack to enter the front air inlet of another rack.

Finally, proper cable management is another important element of maximizing the airflow through the rack. Cables must be routed and tied down in such a way that they do not impede the movement of air into or out of the rack. Such impedance could significantly reduce the volumetric flow of air through the equipment.

Use a fan-assisted rack or cabinet with caution. Depending upon how much equipment is installed in the cabinet, the air movers in the cabinet may limit the amount of flow to less than what is required by the equipment.
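
As a rough illustration, the sketch below compares an assumed cabinet air-mover rating against the combined airflow demand of the installed equipment. Both figures are hypothetical placeholders for values from your own specification sheets.

    # Quick sanity check for a fan-assisted cabinet: hypothetical numbers
    # comparing what the cabinet air movers can deliver against what the
    # installed equipment needs to pull through on its own.

    equipment_demand_cfm = [180, 180, 220, 400, 400]  # assumed per-device flows
    cabinet_fan_capacity_cfm = 1200                   # assumed air-mover rating

    demand = sum(equipment_demand_cfm)
    if cabinet_fan_capacity_cfm < demand:
        print(f"Cabinet fans ({cabinet_fan_capacity_cfm} cfm) may restrict the "
              f"{demand} cfm the equipment requires.")
    else:
        print("Cabinet air movers meet the installed equipment's demand.")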

Room considerations

Data centers designed and built in the last 10 years are typically capable of cooling up to 3 kW of heat load per cabinet. These designs often involve raised floor air distribution plenums 18 to 24 inches in height, room ceiling heights of 8 to 9 feet, and Computer Room Air Conditioning (CRAC) units distributed around the perimeter of the room. IT equipment occupies roughly 30-35% of the total data center space. The remaining space is white space (for example, access aisles, service clearances), power distribution units (PDUs), and CRAC units. Until recently, little attention has been given to heat load assessments, equipment layout and air delivery paths, heat load distribution, and floor tile placement and openings.

Assessing the total heat load of your installation

A total heat load assessment should be conducted to determine your overall environment balance point. The purpose of the assessment is to see if you have enough sensible cooling, including redundancy, to handle the heat load that you plan to install or have installed. There are several ways to perform this assessment, but the most common is to review the heat load and cooling in logical sections defined by I-beams, airflow blockages, or CRAC unit locations.
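
The following Python sketch shows one way such a per-section assessment might be organized. The zone boundaries, rack loads, CRAC capacities, and the N+1 redundancy assumption are all illustrative examples, not values from any IBM or ASHRAE source.

    # Hypothetical per-zone heat load assessment. Substitute the measured
    # values for your own installation.

    zones = {
        "zone_A": {"rack_loads_kw": [3.2, 2.8, 4.5, 3.9],
                   "crac_sensible_kw": [35.0, 35.0]},
        "zone_B": {"rack_loads_kw": [6.1, 5.8, 6.4],
                   "crac_sensible_kw": [35.0]},
    }

    for name, zone in zones.items():
        demand = sum(zone["rack_loads_kw"])
        units = sorted(zone["crac_sensible_kw"])
        total_supply = sum(units)
        # N+1 redundancy check: assume the largest CRAC unit is out of
        # service and see whether the remaining units carry the load.
        redundant_supply = total_supply - units[-1]
        status = "OK" if demand <= redundant_supply else "SHORTFALL"
        print(f"{name}: demand {demand:.1f} kW, supply {total_supply:.1f} kW, "
              f"N+1 supply {redundant_supply:.1f} kW -> {status}")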

Equipment layout and air delivery paths

The hot-aisle, cold-aisle arrangement that is explained in the ASHRAE publication, "Thermal Guidelines for Data Processing Environments", dated January 2004, should be used. In the following figure, racks within the data center are arranged such that there are cold aisles and hot aisles. The cold aisle consists of perforated floor tiles separating two rows of racks. Chilled air exhausts from the perforated floor tiles and is drawn into the fronts of the racks. The inlets of each rack (the front of each rack) face the cold aisle. This arrangement allows the hot air exhausted from the rear of the racks to return to the CRAC units, thus minimizing recirculation of hot exhaust air back into the rack inlets. CRAC units are placed at the ends of the hot aisles to facilitate the return of hot air to the CRAC units and to maximize static pressure in the cold aisle.

Figure 2. Hot aisle and cold aisle arrangement

The key to heat load management of the data center is to provide inlet air temperatures to the rack that meet the manufacturer's specifications. Because the chilled air exhausting from the perforated tiles in the cold aisle may not satisfy the total chilled airflow required by the rack, additional flow is drawn from other areas of the raised floor and may not be chilled. See the following figure. In many cases, the airflow drawn into the top of the rack, after the bottom of the rack has been satisfied, is a mixture of hot air from the rear of the system and air from other areas. For racks at the ends of a row, the hot air that exhausts from the rear of the rack can migrate to the front around the sides of the rack. These flow patterns have been observed in actual data centers and in flow modeling.

Figure 3. Possible rack airflow patterns

For a data center that may not have the best chilled-airflow distribution, the following figure gives guidance for providing adequate chilled airflow at a specific heat load. The chart takes into account worst-case locations in a data center and reflects the requirements to meet the maximum temperature specifications of most IBM high-end equipment. Altitude corrections are noted on the bottom portion of the chart.

Figure 4. High-end equipment chilled airflow and temperature requirements
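
Charts like the one above rest on the standard sensible-heat relationship between heat load, airflow, and air temperature rise. The following sketch applies that relationship (using the familiar 1.085 sea-level factor) with a U.S. standard atmosphere altitude correction; it is a generic approximation, not a reproduction of the IBM chart.

    # A minimal sketch of the sensible-heat airflow calculation. The 1.085
    # factor assumes sea-level air density; the standard-atmosphere density
    # ratio below is a common approximation, not taken from the IBM chart.

    def required_cfm(heat_load_kw: float, delta_t_f: float,
                     altitude_ft: float = 0.0) -> float:
        """Chilled airflow (cfm) needed to absorb a sensible heat load."""
        heat_btu_hr = heat_load_kw * 3412.0          # kW -> BTU/hr
        sea_level_cfm = heat_btu_hr / (1.085 * delta_t_f)
        # Thinner air at altitude carries less heat per cubic foot, so more
        # volume is required (U.S. standard atmosphere density ratio).
        density_ratio = (1.0 - 6.8754e-6 * altitude_ft) ** 4.2559
        return sea_level_cfm / density_ratio

    # Example: a 10 kW rack with a 20 F air temperature rise.
    print(f"Sea level:  {required_cfm(10, 20):.0f} cfm")
    print(f"At 5000 ft: {required_cfm(10, 20, 5000):.0f} cfm")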

The most common methods for delivering supply air to the racks can be found in System air distribution.

Heat load distribution

Increased performance capabilities and the accompanying heat load demands have caused data centers to have hot spots in the vicinity of heat loads that exceed 3 kW. Facility owners are discovering that it is becoming increasingly difficult to plan cooling schemes for large-scale deployments of high-heat-load equipment. Essentially, two different approaches can be undertaken for a large-scale, high-end server or storage deployment:

  1. Provide ample cooling for maximum heat load requirements across the entire data center.
  2. Provide an average amount of cooling across the data center with the capability to increase cooling in limited, local areas.

Option 1 is very expensive and more conducive to new construction. For option 2, a number of things can be done to optimize cooling in existing data centers and possibly raise the cooling capability in limited sections.

One recommendation is to place floor tiles with high percent-open and flow ratings in front of the high-end racks. Another recommendation is to provide special means for removing hot exhaust air from the backs of the high-end racks immediately, before it has a chance to migrate back to the air intakes on racks in other parts of the room. This could be accomplished by installing special baffling or direct ducting back to the air returns on the CRAC units. Careful engineering is required to ensure that any recommendation does not have an adverse effect on the dynamics of the underfloor static pressure and airflow distribution.

In centers where floor space is not an issue, it is most practical to design the entire raised floor to a constant level of cooling and to depopulate racks or increase the spacing between racks in order to stay within the per-cabinet capability of the floor.
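
The following sketch illustrates that trade-off with assumed numbers: for a floor designed to a uniform cooling density, giving each rack a larger share of floor area raises the per-cabinet heat load the floor can support.

    # Illustrative trade-off for a uniformly cooled floor. The design
    # density and rack footprint are assumptions, not IBM figures.

    floor_capability_w_per_sqft = 100.0   # assumed uniform design point
    rack_footprint_sqft = 10.0            # rack plus its share of the aisles

    for extra_spacing_sqft in (0, 10, 20):
        area = rack_footprint_sqft + extra_spacing_sqft
        per_rack_kw = floor_capability_w_per_sqft * area / 1000.0
        print(f"{area:>4.0f} sq ft per rack -> {per_rack_kw:.1f} kW supported")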

Floor tile placement and openings

Perforated tiles should be placed exclusively in the cold aisles, aligned with the intakes of the equipment. No perforated tiles should be placed in the hot aisles, no matter how uncomfortably hot those aisles become. Hot aisles are, by design, supposed to be hot. Placing open tiles in the hot aisle artificially decreases the return air temperature to the CRAC units, thereby reducing their efficiency and available capacity. This phenomenon contributes to hot spot problems in the data center. Perforated tiles should also not be placed in close proximity to the CRAC units. In areas under the raised floor where air velocities exceed about 530 feet per minute, usually within about six tiles of the unit discharges, a Venturi effect can be created in which room air is sucked downward into the raised floor, the opposite of the desired upward chilled air delivery.
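
That velocity can be estimated from the CRAC discharge flow and the plenum cross-section, as in this hypothetical sketch; the geometry and flow rate are made-up examples, and real layouts warrant measurement or CFD analysis.

    # Illustrative check against the ~530 feet-per-minute underfloor
    # velocity threshold described above. All inputs are assumptions.

    def underfloor_velocity_fpm(crac_cfm: float, plenum_depth_in: float,
                                discharge_width_ft: float) -> float:
        """Approximate plenum air velocity (ft/min) near a CRAC discharge."""
        cross_section_sqft = (plenum_depth_in / 12.0) * discharge_width_ft
        return crac_cfm / cross_section_sqft

    v = underfloor_velocity_fpm(crac_cfm=12000, plenum_depth_in=18,
                                discharge_width_ft=12)
    print(f"Velocity: {v:.0f} fpm")
    if v > 530:
        print("Risk of Venturi effect: keep perforated tiles farther away.")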

The volumetric flow capabilities of floor tiles with various percent-open ratings are shown in the following figure.

Figure 5. Volumetric flow capabilities of various raised floor tiles

Floor tiles in typical data centers deliver between 100 and 300 cfm. By optimizing the flow utilizing some of the guidelines set forth in this document, it may be possible to realize flows as high as 500 cfm. Flow rates as high as 700-800 cfm per tile are possible with tiles with the highest percent-open rating. Floor tiles must be aligned in the cold aisles with the intake locations on the equipment.
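
Using the per-tile delivery figures above, a cold aisle can be roughly sized as in the following sketch. The rack demands and the 300 cfm per-tile figure are assumptions to be replaced with your own measured values.

    # Hypothetical sizing example: how many perforated tiles a cold aisle
    # needs for a given set of rack airflow demands.

    import math

    rack_demands_cfm = [1100, 1400, 900, 1600]  # assumed per-rack airflow
    tile_delivery_cfm = 300                      # well-tuned standard tile

    total_demand = sum(rack_demands_cfm)
    tiles_needed = math.ceil(total_demand / tile_delivery_cfm)
    print(f"Total demand: {total_demand} cfm -> {tiles_needed} tiles")

    # If the aisle cannot physically hold that many tiles, higher
    # percent-open tiles (up to roughly 700-800 cfm each) or lower rack
    # loading are the alternatives.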

Openings in the raised floor that do not serve to deliver chilled air directly to the equipment in the data center should be completely sealed with brush assemblies or other cable-opening material (for example, foam sheeting or fire pillows). Other openings that must be sealed are holes in the data center perimeter walls, under the floor, and in the ceiling. Sealing all openings helps maximize under-floor static pressure, ensure optimal airflow to the cold aisles where it is needed, and eliminate short-circuiting of unused air to the CRAC unit returns.