Use these general guidelines to set up your data center.
See the latest ASHRAE publication, Thermal Guidelines for Data Processing Environments, dated January 2004. This document can be purchased online at ashrae.org. A dedicated section outlines a detailed procedure for assessing the overall cooling health of the data center and optimizing for maximum cooling.
Most IBM® systems and storage products are designed to pull chilled air through the front of the system and exhaust hot air out of the back. The most important requirement is to ensure that the inlet air temperature at the front of the equipment does not exceed IBM environmental specifications. See the environmental requirements in the system specifications or hardware specification sheets. Make sure that the air inlet and exit areas are not blocked by paper, cables, or other obstructions. When upgrading or repairing your system, be sure not to exceed the maximum allowed time, if specified, for running the unit with the cover removed. After your work is completed, be sure to reinstall all fans, heat sinks, air baffles, and other devices per IBM documentation.
Manufacturers, including IBM, report heat loads in the format suggested by the ASHRAE publication, Thermal Guidelines for Data Processing Environments, dated January 2004. Although this data is meant to be used for heat load balancing, care is required when using it to balance cooling supply and demand because many applications are transient and do not dissipate heat at a constant rate. A thorough understanding of how the equipment and application behave with regard to heat load, including considerations for future growth, is required.
Data centers designed and built in the last 10 years are typically capable of cooling up to 3 kW of heat load per cabinet. These designs often involve raised-floor air distribution plenums 18 - 24 inches in height, room ceiling heights of 8 - 9 feet, and Computer Room Air Conditioning (CRAC) units distributed around the perimeter of the room. IT equipment occupies roughly 30 - 35% of the total data center space. The remaining space is occupied by white space (for example, access aisles and service clearances), power distribution units (PDUs), and CRAC units. Until recently, little attention was given to heat load assessments, equipment layout and air delivery paths, heat load distribution, and floor tile placement and openings.
A total heat load assessment should be conducted to determine your overall environment balance point. The purpose of the assessment is to determine whether you have enough sensible cooling, including redundancy, to handle the heat load that you plan to install or have already installed. There are several ways to conduct this assessment, but the most common is to review the heat load and cooling in logical sections defined by I-beams, airflow blockages, or CRAC unit locations.
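The section-by-section balance check can be sketched in a few lines of code. This is an illustrative sketch only: the per-rack loads, CRAC sensible capacities, and the N+1 redundancy assumption below are hypothetical examples, not values from any IBM specification.

```python
# Illustrative cooling-balance check for one logical section of the floor.
# Rack loads and CRAC capacities are hypothetical example values.

def cooling_balance(rack_loads_kw, crac_sensible_kw, redundant_units=1):
    """Compare installed heat load against usable sensible cooling capacity.

    Redundant units are held in reserve: the largest unit(s) are excluded
    so the section still balances if the biggest CRAC unit fails (N+1).
    """
    total_load = sum(rack_loads_kw)
    usable = sorted(crac_sensible_kw)[:len(crac_sensible_kw) - redundant_units]
    usable_cooling = sum(usable)
    return total_load, usable_cooling, usable_cooling >= total_load

load, cooling, ok = cooling_balance(
    rack_loads_kw=[2.5, 3.0, 1.8, 2.2],   # per-rack heat loads in this section
    crac_sensible_kw=[35.0, 35.0, 35.0],  # sensible capacity of each CRAC unit
    redundant_units=1,                    # hold one unit in reserve (N+1)
)
print(load, cooling, ok)
```

A real assessment would repeat this per section and also account for airflow paths, not just total capacity.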
The hot-aisle, cold-aisle arrangement that is explained in the ASHRAE publication, Thermal Guidelines for Data Processing Environments, dated January 2004, should be used. In Figure 1, racks within the data center are arranged such that there are cold aisles and hot aisles. The cold aisle consists of perforated floor tiles that separate two rows of racks. Chilled air exhausts from the perforated floor tiles and is drawn into the fronts of the racks. The inlets of each rack (the front of each rack) face the cold aisle. This arrangement allows the hot air exhausted from the rear of the racks to return to the CRAC units, minimizing recirculation of hot exhaust air back into the rack inlets. CRAC units are placed at the ends of the hot aisles to facilitate the return of the hot air to the CRAC units and to maximize static pressure to the cold aisle.

The key to heat load management of the data center is to provide inlet air temperatures to the rack that meet the specifications set by the manufacturer. Because the chilled air that exhausts from the perforated tiles in the cold aisle might not satisfy the total chilled airflow required by the rack, additional airflow can be drawn from other areas of the raised floor and might not be chilled. See Figure 2. In many cases, the airflow drawn into the top of the rack, after the bottom of the rack is satisfied, is a mixture of hot air from the rear of the system and air from other areas. For racks at the ends of a row, hot air that exhausts from the rear of the rack can migrate to the front around the sides of the rack. These flow patterns have been observed in actual data centers and in flow modeling.

For a data center that might not have the best chilled-airflow distribution, Figure 3 gives guidance for providing adequate chilled airflow for a specific heat load. The chart takes into account worst-case locations in a data center and represents the requirements for meeting the maximum temperature specifications of most IBM high-end equipment. Altitude corrections are noted on the chart.
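The relationship behind charts of this kind can be approximated with the standard sensible-heat airflow relation for air, CFM ≈ 3.16 × watts / ΔT(°F) at sea level, derated for thinner air at altitude. The 3.16 constant, the 20°F temperature-rise assumption, and the standard-atmosphere density model below are generic engineering approximations, not values taken from the IBM chart itself.

```python
# Sketch of the sensible-heat airflow estimate behind charts like Figure 3.
# Constants are generic engineering approximations (assumptions, not IBM data).

def required_cfm(heat_load_watts, delta_t_f=20.0, altitude_ft=0.0):
    """Chilled airflow needed to absorb a sensible heat load.

    delta_t_f   : air temperature rise front-to-back across the rack (deg F)
    altitude_ft : site elevation; thinner air carries less heat per cfm
    """
    sea_level_cfm = 3.16 * heat_load_watts / delta_t_f
    # Standard-atmosphere density ratio: air density falls with elevation,
    # so more volumetric flow is needed to carry the same heat.
    density_ratio = (1 - 6.875e-6 * altitude_ft) ** 5.2559
    return sea_level_cfm / density_ratio

print(round(required_cfm(3000)))                    # 3 kW rack at sea level -> 474 cfm
print(round(required_cfm(3000, altitude_ft=5000)))  # same rack needs more cfm at 5000 ft
```

Always defer to the manufacturer's chart or specification where one exists; this sketch only shows why the altitude correction matters.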

The most common methods for delivering supply air to the racks can be found in System air distribution.
Increased performance capabilities and the accompanying heat load demands cause data centers to have hot spots in the vicinity of heat loads that exceed 3 kW. Facility owners are discovering that it is increasingly difficult to plan cooling schemes for large-scale deployments of high-heat-load equipment. Essentially, two different approaches can be taken for a large-scale, high-end system or storage deployment:
Option 1 is expensive and more conducive to new construction. For option 2, a number of things can be done to optimize cooling in existing data centers and possibly raise the cooling capability in limited sections.
One recommendation is to place floor tiles with high percent-open and flow ratings in front of the high-end racks. Another recommendation is to provide special means for removing hot exhaust air from the backs of the high-end racks immediately, before it has a chance to migrate back to the air intakes on racks in other parts of the room. This can be accomplished by installing special baffling or direct ducting back to the air returns on the CRAC units. Careful engineering is required to ensure that any such measure does not adversely affect the dynamics of under-floor static pressure and airflow distribution.
In centers where floor space is not an issue, the most practical approach is to design the entire raised floor to a constant level of cooling and to depopulate racks or increase the spacing between racks to meet the per-cabinet capability of the floor.
Perforated tiles should be placed exclusively in the cold aisles, aligned with the intakes of the equipment. No perforated tiles should be placed in the hot aisles, no matter how uncomfortably hot they are. Hot aisles are, by design, supposed to be hot. Placing open tiles in the hot aisle artificially decreases the return air temperature to the CRAC units, reducing their efficiency and available capacity; this phenomenon contributes to hot spot problems in the data center. Perforated tiles should also not be placed too close to the CRAC units. In areas under the raised floor where air velocities exceed about 530 feet per minute, usually within about six tiles of a unit's discharge, a Venturi effect might be created in which room air is pulled downward into the raised floor, the opposite of the desired upward delivery of chilled air.
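The 530 feet-per-minute threshold can be checked with a simple flow-over-area estimate. The CRAC discharge flow and plenum dimensions below are hypothetical examples; a real under-floor plenum has obstructions and non-uniform flow, so this is only a first-order screen.

```python
# Rough check of under-floor air velocity near a CRAC discharge against the
# ~530 ft/min Venturi threshold. Plenum dimensions and CRAC flow are
# hypothetical example values.

VENTURI_THRESHOLD_FPM = 530  # above this, nearby tiles may pull room air downward

def underfloor_velocity_fpm(crac_cfm, plenum_height_ft, plenum_width_ft):
    """Average velocity = volumetric flow / plenum cross-sectional area."""
    area_sqft = plenum_height_ft * plenum_width_ft
    return crac_cfm / area_sqft

v = underfloor_velocity_fpm(crac_cfm=12000, plenum_height_ft=1.5, plenum_width_ft=12.0)
print(round(v), v > VENTURI_THRESHOLD_FPM)  # 667 fpm: too fast for nearby tiles
```

A result above the threshold suggests keeping perforated tiles several tile positions away from that unit's discharge.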
The volumetric flow capabilities of floor tiles with various percent-open ratings are shown in Figure 4.

Floor tiles in typical data centers deliver 100 - 300 cfm. By optimizing the flow using some of the guidelines presented in this document, it might be possible to realize flows as high as 500 cfm. Flow rates as high as 700 - 800 cfm per tile are possible with tiles that have the highest percent-open rating. Floor tiles must be aligned in the cold aisles with the intake locations on the equipment.
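These per-tile flow rates translate directly into how many cold-aisle tiles a rack needs. The 3 kW rack and 20°F temperature-rise figures below are illustrative assumptions (3.16 × 3000 / 20 ≈ 474 cfm by the standard sensible-heat relation); the per-tile rates restate the ranges above.

```python
# Back-of-the-envelope tile count for a rack's chilled-air demand.
# The rack demand and temperature rise are illustrative assumptions.
import math

def tiles_needed(rack_cfm, cfm_per_tile):
    """Whole perforated tiles required to satisfy a rack's airflow demand."""
    return math.ceil(rack_cfm / cfm_per_tile)

rack_cfm = 474  # e.g., a 3 kW rack at a 20 degF air temperature rise
print(tiles_needed(rack_cfm, 300))  # typical tile (100-300 cfm) -> 2 tiles
print(tiles_needed(rack_cfm, 700))  # high percent-open tile -> 1 tile
```

This is why high percent-open tiles are recommended in front of high-end racks: one well-placed tile can do the work of two or three typical tiles.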
Openings in the raised floor that are not intended to deliver chilled air directly to the equipment in the data center space should be sealed with brush assemblies or other cable-opening material (for example, foam sheeting or fire pillows). Other openings that must be sealed include holes in data center perimeter walls, under the floor, and in the ceiling. Sealing all openings helps to maximize under-floor static pressure, ensure optimal airflow to the cold aisles where it is needed, and eliminate short-circuiting of unused air to the CRAC unit returns.