By now, if you have not already seen the dog days of summer heat and humidity take a toll on your data center's cooling systems, you most likely have a data center that is perfectly designed to meet worst-case ambient conditions with balanced airflow and adequate cooling capacity, or you have a heavily overcooled site with plenty of excess cooling capacity (or perhaps you are simply located in Utopia).

While I am not trying to make this a commentary on global warming, it would seem that weather patterns are changing and more areas are experiencing "unforeseen" extreme weather events, such as Hurricane Sandy, that can impact data centers now and in the future.

Of course, some sites are lucky enough to have a cooling system designed to handle the full data center design heat load even when external temperatures exceed 100°F, but not every site is designed for those ambient conditions (as I write this in mid-July, Phoenix has just gone through a full week of days above 100°F and has seen a record high of 119°F). In some cases, data centers located in areas that do not normally expect these extreme temperatures have run into problems, especially if their design specifications were based on sub-100°F requirements (in most cases to save costs). While data centers in Phoenix should expect to operate well above 100°F, last summer the New York City area saw several consecutive days over 100°F and may yet set new high-temperature records by the time you read this.

Almost every summer, I see some older (and some not so old) sites that are impacted by temperature and humidity conditions beyond their original design specifications, as well as others that are just getting by and are worried about the next, perhaps hotter, heat wave. Most cooling systems' external heat rejection performance falls off as ambient temperatures rise, which de-rates the available system cooling capacity. However, designing for extremes outside the "normal" local temperature range is an added expense that is sometimes avoided, and that decision normally has no impact during most of the year (until those infrequent yet critically high temperatures occur).

However, the problem can begin to manifest itself even at "normal" summer temperatures of 85°F to 90°F. Most air-cooled system capacity ratings are listed at 95°F or 105°F ambient, so you would not expect to see any capacity de-rating problems on an 85°F day. Unfortunately, even though the official weather temperature is 85°F, the units are typically located on a flat rooftop surrounded by dark roofing asphalt that is being heated by the sun (this is very common in areas with cold winters). I have seen 125°F to 135°F on rooftop surfaces many times, even on an 85°F day. This means the air actually entering the rooftop cooling system (or ground-level units sitting on blacktop) may be 110°F to 120°F or higher on a "normal" day and could reach 135°F or more on a 100°F+ day. Moreover, in a dedicated data center with a dark flat roof, the solar heat load, which can add up to 100 watts per square foot at peak, must also be accounted for in the total heat load (note: while roof-mounted PV solar energy can only produce a small fraction of a data center's energy requirements, a sometimes overlooked indirect benefit is the reduction of that peak solar heat load).
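To make the effect concrete, here is a minimal, illustrative sketch of how that rooftop temperature rise erodes nameplate condenser capacity. The 25°F rooftop air rise, the 95°F rating point, and the roughly 1%-per-degree de-rate slope are all assumed placeholder values, not manufacturer data; real selections should always use the OEM's published de-rate curves.

```python
# Illustrative only: rough estimate of how rooftop heat gain erodes the
# nameplate capacity of an air-cooled condenser. The rooftop air-temperature
# rise and the de-rate slope are assumed placeholders, not manufacturer data.

def effective_intake_temp_f(weather_temp_f, rooftop_rise_f=25.0):
    """Temperature of the air actually entering a roof-mounted unit (deg F)."""
    return weather_temp_f + rooftop_rise_f

def derated_capacity_kw(rated_kw, intake_temp_f,
                        rated_ambient_f=95.0, derate_per_deg_f=0.01):
    """Simple linear de-rate above the rated ambient (assumed ~1% per deg F)."""
    excess = max(0.0, intake_temp_f - rated_ambient_f)
    return rated_kw * max(0.0, 1.0 - derate_per_deg_f * excess)

for weather_f in (85, 95, 105):
    intake_f = effective_intake_temp_f(weather_f)
    print(f"{weather_f} deg F weather -> {intake_f:.0f} deg F at the coil, "
          f"{derated_capacity_kw(500.0, intake_f):.0f} kW of a 500 kW rating")
```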

Of course, humidity comes into play as well, affecting some types of systems more than others. In particular, systems based on evaporative cooling lose some of their effectiveness and net capacity in high external humidity. The exterior components of air-cooled systems (e.g., a CRAC with an external refrigerant condenser, glycol-based dry coolers, or air-cooled chillers) are not affected by humidity, but they are directly impacted by the actual temperature of the air.

In addition, humidity has a performance impact on virtually all systems inside the data center, especially in a traditional closed-air-loop cooling system (i.e., CRAC/CRAH), as high humidity begins to infiltrate into the data center. It takes quite a bit more total cooling capacity to dehumidify (i.e., "latent" cooling to remove the moisture) than to cool the actual IT heat load alone (the sensible-heat Btu load).
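As a rough illustration of how much capacity dehumidification can consume, here is a short sketch using the common rule-of-thumb psychrometric constants for standard air (1.08 for sensible Btu/h per CFM per °F, and 0.68 for latent Btu/h per CFM per grain of moisture). The airflow and moisture figures below are assumed for illustration only.

```python
# A minimal sketch of why humidity eats into cooling capacity, using the
# common rule-of-thumb psychrometric constants for standard air.
# The airflow and moisture numbers below are illustrative assumptions.

def sensible_btuh(cfm, delta_t_f):
    """Sensible (dry-bulb) cooling load in Btu/h."""
    return 1.08 * cfm * delta_t_f

def latent_btuh(cfm, delta_w_grains):
    """Latent (moisture-removal) cooling load in Btu/h."""
    return 0.68 * cfm * delta_w_grains

cfm = 20_000                       # assumed CRAH airflow
sens = sensible_btuh(cfm, 20)      # 20 deg F air-side delta-T
lat = latent_btuh(cfm, 15)         # 15 grains/lb of infiltrated moisture
print(f"sensible: {sens:,.0f} Btu/h, latent: {lat:,.0f} Btu/h "
      f"({lat / (sens + lat):.0%} of total capacity spent on dehumidification)")
```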

Of course, this humidity-induced "extra" load occurs while the cooling system's total capacity has already been reduced by the extreme external conditions, leaving even less available cooling capacity for the direct IT heat load. The result? Data center internal temperatures rise even when the compressors remain on 100% of the time during the hottest or most humid hours of the day. Very high humidity has an even greater impact on "free cooling" economizer systems, both airside and waterside, and typically forces a switch back to "back-up" mechanical cooling, especially in the case of fresh-air, free-cooling systems.

Traditionally, most cooling systems in conservatively designed major data centers have been oversized to compensate for the de-rating required to be able to operate in these extreme ambient conditions. However, this traditional practice of total system overcapacity can impact energy efficiency under normal conditions, as well as add to initial cost.

Going forward, I would urge that we design the cooling systems in new data centers to allow for higher external ambient ranges, but not by simply "oversizing" the entire mechanical cooling capacity. Ideally, for a chiller-based system, the primary mechanical cooling compressors could be modular and staged so they can operate efficiently to meet the actual internal heat load; consider "oversizing" only the external heat-exchange components (and make them modular and staged as well). In a CRAC-based solution, you would only need to upsize the external heat rejection system and use variable-speed fans, not the rating of the CRAC compressor (for CRACs with closed-loop glycol systems, only the fluid coolers need to be oversized). This will only modestly increase initial costs, yet it still allows energy-efficient operation during the typically lower temperatures of the rest of the year, while still being able to handle the highest external temperature and humidity conditions.
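Here is a minimal sizing sketch of that approach, under the same assumed linear de-rate as above: keep the compressors sized to the actual IT load and oversize only the external heat-rejection nameplate so that its de-rated output at the extreme design intake temperature still covers the load. The 95°F rating point and 1%-per-degree slope are placeholders; real equipment selections come from the manufacturer's curves.

```python
# Sizing sketch: how large the external heat-rejection nameplate must be so
# that its de-rated capacity at the extreme design intake temperature still
# meets the IT load. De-rate slope and temperatures are assumed placeholders.

def required_heat_rejection_kw(it_load_kw, design_intake_f,
                               rated_ambient_f=95.0, derate_per_deg_f=0.01):
    """Nameplate capacity (at the rated ambient) needed to cover it_load_kw
    once de-rated to the design intake temperature."""
    excess = max(0.0, design_intake_f - rated_ambient_f)
    derate_factor = 1.0 - derate_per_deg_f * excess
    if derate_factor <= 0:
        raise ValueError("design intake temperature exceeds usable range")
    return it_load_kw / derate_factor

# e.g., a 500 kW IT load with condenser intake air at 130 deg F
print(f"{required_heat_rejection_kw(500.0, 130.0):.0f} kW nameplate required")
```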

In an emergency, some sites keep lawn sprinklers ready to wet down the coils of "dry" fan-deck systems. Of course, this should be done sparingly, since it accelerates corrosion of the coils. Some third-party vendors are even offering evaporative add-on kits for air-cooled systems to allow units to operate during high temperatures (and also to help them operate more efficiently during the warmer summer season). As mentioned previously, this can cause corrosive damage to standard coils, so consider that before installing such kits or using them extensively. More recently, even some of the cooling system OEMs are beginning to offer these as enhancements (with coils that have a protective coating).

Evaporative-based cooling systems that rely primarily on large volumes of water are being re-evaluated by some data center designers, since in the long term (and in some cases, the near term) water is becoming a more expensive and, in some areas, a constrained resource. Water has an energy cost and an energy value as well. This was finally recognized in 2011 by The Green Grid, which created the water usage effectiveness (WUE) metric.
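For reference, The Green Grid defines WUE as annual site water usage divided by IT equipment energy, expressed in liters per kilowatt-hour; a one-line sketch with made-up figures is below.

```python
# WUE = annual site water usage (liters) / annual IT equipment energy (kWh)
def wue(annual_water_liters, annual_it_energy_kwh):
    """Water usage effectiveness, in liters per kWh of IT energy."""
    return annual_water_liters / annual_it_energy_kwh

print(f"{wue(60_000_000, 40_000_000):.2f} L/kWh")  # illustrative figures only
```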

This information may be useful for your next design. However, if you are currently stuck in a marginally cooled data center with no easy options to upgrade your cooling system and are just looking to avoid a total meltdown, the 12 summer tips I posted last year might help: http://www.missioncriticalmagazine.com/blogs/14-the-mission-critical-blog/post/85078-hot-aisle-insight—12-summer-cooling-tips-for-your-data-center.

THE BOTTOM LINE

So while I am a very vocal advocate of taking advantage of the higher allowable temperatures in ASHRAE's 2011 "Expanded Thermal Guidelines for IT Equipment" in the data center, when faced with the choice between saving some initial cost on a lower-rated cooling system based on "expected" ambient temperatures and the cost of emergency critical load shedding or a total loss of cooling from thermal overload and compressor overheating, the old adage comes to mind: "pay a little more now or pay a lot more later." However, as discussed, consider adding the reserve to the high-temperature performance of the exterior heat rejection systems, without necessarily oversizing everything.

At present, there is no off-the-shelf or easy solution to the disparity between the actual solar-heated intake air temperatures and the presumed "ambient" temperature. Experienced cooling system designers are aware of these issues and typically oversize the external heat transfer capacity to compensate for the de-rated capacity at these elevated temperatures. As the data center industry slowly becomes more aware of these issues, and hopefully more motivated to take a holistic approach to these challenges, we will be able to improve our energy efficiency and also avoid singing "There ain't no cure for the summertime blues" (with credit and apologies to Eddie Cochran's "Summertime Blues").