Author: ASHRAE
Prior to the 2004 publication of the first edition of Thermal Guidelines for Data Processing Environments, there was no single source in the data center industry for information technology equipment (ITE) temperature and humidity requirements. This book established groundbreaking common design points endorsed by the major information technology original equipment manufacturers (IT OEMs). The second edition, published in 2008, set a new precedent by expanding the recommended temperature and humidity ranges. The third edition (2012) broke new ground by adding new data center environmental classes that enable near-full-time use of free-cooling techniques in most of the world's climates. This development also brought increased complexity and trade-offs that require careful evaluation because of the potential impact on the ITE being supported. The fourth edition (2015) took further steps to increase the energy efficiency of data centers by reducing the requirements for humidification. ASHRAE funded the Electromagnetic Compatibility (EMC) Laboratory at the Missouri University of Science and Technology from 2011 to 2014 to investigate the risk of upsets or damage to electronics from electrostatic discharge (ESD). The study found that the concerns raised beforehand about increased ESD-induced risk at reduced humidity were not justified (Pommerenke et al. 2014).
This fifth edition of Thermal Guidelines is primarily focused on two major changes: the first is a result of the ASHRAE-funded research project RP-1755 (Zhang et al. 2019a) on the effects of high relative humidity (RH) and gaseous pollutants on corrosion of ITE, and the second is the addition of a new environmental class for high-density equipment. ASHRAE funded the Syracuse University Mechanical and Aerospace Engineering Department from 2015 to 2018 to investigate the risk of operating data centers at higher moisture levels when high levels of gaseous pollutants exist. The objective was to evaluate whether the recommended moisture level could be raised to help reduce the energy required by data centers. The changes made to the recommended envelope based on this research are shown in Chapter 2, with the basis for these changes detailed in Appendix E. A new environmental class for high-density server equipment has also been added to accommodate high-performance equipment that cannot meet the requirements of the current environmental classes A1 through A4. The fifth edition also renames the liquid cooling classes to represent maximum facility water temperatures.
A cornerstone idea carried over from previous editions of Thermal Guidelines is that inlet temperature is the only temperature that matters to ITE. Although there are reasons to consider the impact of equipment outlet temperature on the hot aisle, it does not affect the reliability or performance of the ITE. Also, each manufacturer balances design and performance requirements when determining its equipment design temperature rise. Data center operators should understand the equipment inlet temperature distribution throughout their data centers and take steps to monitor these conditions. A facility designed to maximize efficiency by aggressively applying new operating ranges and techniques requires a complex, multivariable optimization performed by an experienced data center architect. Although the vast majority of data centers are air cooled at the IT load, liquid cooling is becoming more commonplace and will likely be adopted to a greater extent because of its enhanced operational efficiency, potential for increased density, and opportunity for heat recovery. Consequently, the fourth and fifth editions of Thermal Guidelines for Data Processing Environments include definitions of liquid-cooled environmental classes and descriptions of their applications. Even a primarily liquid-cooled data center may contain air-cooled ITE; as a result, a combination of air-cooled and liquid-cooled classes will typically be specified for a given data center.
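To make the monitoring recommendation concrete, the following is a minimal Python sketch, not drawn from this book: it summarizes a set of inlet dry-bulb readings against the widely published recommended envelope for air-cooled classes A1 through A4 (18°C to 27°C dry bulb) and flags readings that fall outside it. The sensor values and function names are hypothetical; a real deployment would pull readings from the facility's own sensor network and apply the class limits specified for its equipment.

# Minimal sketch (assumptions noted above): summarize ITE inlet temperatures
# against the recommended envelope of 18-27 degrees C for classes A1-A4.
from statistics import mean, median

RECOMMENDED_LOW_C = 18.0    # recommended envelope lower bound, classes A1-A4
RECOMMENDED_HIGH_C = 27.0   # recommended envelope upper bound, classes A1-A4

def summarize_inlet_temps(temps_c):
    """Summarize inlet dry-bulb readings (degrees C) and flag any that
    fall outside the recommended envelope."""
    out_of_envelope = [t for t in temps_c
                       if not RECOMMENDED_LOW_C <= t <= RECOMMENDED_HIGH_C]
    return {
        "mean_c": round(mean(temps_c), 1),
        "median_c": round(median(temps_c), 1),
        "min_c": min(temps_c),
        "max_c": max(temps_c),
        "out_of_envelope_c": out_of_envelope,
    }

if __name__ == "__main__":
    # Hypothetical per-rack inlet readings; real values would come from
    # the data center's monitoring system.
    readings = [21.4, 22.8, 24.1, 26.5, 27.9, 19.7, 23.3, 25.6, 22.1, 20.8]
    print(summarize_inlet_temps(readings))

Even this simple distribution check reflects the chapter's point: what matters is the spread of inlet temperatures across the facility, not outlet or hot-aisle conditions.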