There are dozens of data center cooling methods; however, multi-tenant colocation data center operator Involta LLC believes its HVAC design team has developed one of the industry’s most efficient concepts.

Involta Northpointe, a recently opened, 40,000-sq-ft data center in the Northpointe Industrial Park, Freeport, PA, is already recording an impressive power usage effectiveness (PUE) of 1.3, which places it in the top 5% of efficient multi-tenant data centers nationwide. The performance statistics haven’t gone unnoticed: Involta recently signed one of the nation’s top health care providers, University of Pittsburgh Medical Center (UPMC), as Northpointe’s anchor tenant.
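For context, PUE is the ratio of total facility energy to the energy delivered to the IT equipment itself; the quick reading below is a generic illustration of the metric, not Involta’s published calculation.

\[
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}}
\]

A PUE of 1.3 therefore means that for every 1.0 kW consumed by the IT gear, only about 0.3 kW goes to cooling, power conversion, lighting, and other facility overhead.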

Uptime Institute, the industry benchmark for certifying data centers for design, construction, management, and operations, issued Northpointe a Tier III Certification, which includes the HVAC capability to cool 725 kW of critical heat load even during a power interruption.

Involta has continually strived for higher efficiencies. For example, Northpointe’s HVAC design is 53% more efficient than, and uses half the energy of, Involta’s first colocation facility, opened in 2008.

Northpointe’s performance didn’t happen overnight, however. It is the product of a series of progressive HVAC design modifications Involta’s design team has made while constructing and retrofitting its 12 other colocation facilities in Arizona, Pennsylvania, Ohio, Minnesota, Iowa, and Idaho, which together comprise 300,000 sq ft. Innovations include data center-specific air dispersion, variable-frequency drives (VFDs) on cooling systems, and supply/return air plenum designs.

The Involta team includes in-house designers: chief security officer Jeff Thorsteinson and director of data center operations Lucas Mistelske. It also includes outside consultants, architects, and engineers: Jason Lindquist, P.E., associate at consulting engineering firm Erikson Ellison & Associates (EEA), New Brighton, MN; Scott Friauf, president of general contractor Rinderknecht & Associates, Cedar Rapids, IA; fabric air dispersion manufacturer DuctSox Corp., Dubuque, IA; and Solum Lang Architects, Cedar Rapids, IA.

Northpointe features a common industry methodology: computer room air conditioners (CRACs) supplying displacement ductwork runs centered above the electronic racks’ cold aisles. However, that’s where the similarities stop.

“The data center industry has come to realize that strategic air dispersion, not more cooling volume, is the secret to effective rack cooling, facility efficiency, and minimal equipment failures,” said Thorsteinson.

Traditional metal ductwork in earlier Involta locations, whether recessed in ceilings or exposed over cold aisles, fell short of delivering efficient and effective cooling, even though CRAC capacity was sufficient and room temperatures met the recommendations of American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 90.4, “Energy Standard for Data Centers,” and TC 9.9, “Data Center Power Equipment Thermal Guidelines and Best Practices.” The main shortcoming was metal duct’s inherently high velocities, which created turbulence that prevented electronic equipment fans from drawing cooling air into the racks. The high velocities of 800 ft/min (FPM) and beyond also made return air strategies inefficient.

Consequently, Involta collaborated with fabric duct manufacturer DuctSox to develop DataSox, an air dispersion duct that’s specifically aimed at solving air distribution challenges unique to data centers.

The design solved velocity, volume, and turbulent air dispersion issues. At Northpointe, it’s positioned over the cold aisle in double 36-in.-diameter, 36-ft-long runs. The majority of the air is distributed through the fabric’s porosity: micro-perforations located on the bottom half of the round, static-free fabric.
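As a rough, hypothetical illustration of why the large-diameter porous runs keep discharge velocities low (the airflow value below is assumed for the arithmetic, not taken from Northpointe’s design documents): duct velocity is simply airflow divided by cross-sectional area.

\[
V = \frac{Q}{A}, \qquad A = \pi\left(\frac{D}{2}\right)^{2} = \pi\,(1.5\ \text{ft})^{2} \approx 7.1\ \text{ft}^{2}
\]

At an assumed 4,000 CFM per run, the average velocity in a 36-in.-diameter duct works out to roughly 4,000 ÷ 7.1 ≈ 565 FPM, well under the 800-plus FPM cited for the earlier metal-duct installations, and the porous fabric and nozzles spread that flow across the entire duct surface rather than concentrating it at a few registers.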

There are also field-adjustable, directional nozzles running linearly down both sides at the 5:30 and 6:30 o’clock positions, which allow higher airflow concentrations at hot spots. Lindquist also specified dampers for duct take-offs in the event a duct run is removed for commercial laundering, reconfiguration, or adjustment. Generally, data center-specific fabric air dispersion is factory-designed for a particular project’s specifications. In the field, however, the nozzles can be throttled and redirected, eliminating the damper balancing commonly required in conventional ductwork projects. “This unique approach that the Involta team innovated in its recent data centers is very impressive, and according to our tests, has outperformed a lot of other HVAC concepts we’ve looked at,” said Lindquist, who has designed more than 12 data center mechanical systems.

 

ENERGY-SAVING STATS

The CRACs discharge 64°F air, and the racks generally draw in 64°F to 67°F air. Return air temperatures to the CRACs’ return plenum range from 82°F to 95°F.

Cold aisle temperatures in conventionally designed data center air distribution systems can vary by more than 10°F. Northpointe’s design, by contrast, records a cold aisle differential of only two degrees from top to bottom.
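The standard-air sensible heat relationship below shows why the wide supply-to-return temperature spread matters; the airflow figure is hypothetical and used only to illustrate the arithmetic, not a measured Northpointe value.

\[
q_{\text{sensible}} \approx 1.08 \times \text{CFM} \times \Delta T\ (^{\circ}\text{F}) \quad [\text{Btu/h}]
\]

At an assumed 10,000 CFM with a 25°F rise (64°F supply, 89°F return), the air stream removes roughly 1.08 × 10,000 × 25 ≈ 270,000 Btu/h, or about 79 kW. A design that mixes hot and cold air and returns at only 74°F (a 10°F rise) would need 2.5 times the airflow, and the fan energy that goes with it, to remove the same load.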

A precursor to this design, Involta’s Marion, Iowa-based colocation facility, was retrofitted from metal duct/conventional air handlers to data center-specific DataSox and CRACs with VFDs and other enhancements. The HVAC retrofit reduced energy usage by 80,000 kWh per month.

Northpointe’s mechanical room configuration, designed by Involta’s longtime general contractor, Rinderknecht, and installed by Pittsburgh-based mechanical contractor McKamish, innovatively splits the data center into 200-rack and 180-rack halls. In the centrally located mechanical room, each bank of ten 24-ton DA085 upflow CRACs by Vertiv, Columbus, OH, is positioned along the wall of the hall it supplies. For example, the 200-rack hall is anchored by 10 CRACs supplying approximately 13,000 total CFM, controlled by VFDs. Each CRAC offers redundant refrigerant circuits and fans. The CRACs’ two-stage scroll compressors switch to free cooling when outdoor ambient temperatures drop to 54°F or below. The CRACs reject heat to high-efficiency micro-channel condensers on the roof.

The VFDs operate the CRACs at 20% to 40% capacity; however, the i-Vu building automation system (BAS) by Carrier Corp., Syracuse, NY, can call for more capacity in high-humidity situations. “Running at these lower fan speeds obviously saves us a lot of energy,” said Thorsteinson.
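The fan affinity laws explain the magnitude of those savings; the percentages below are a generic cube-law illustration, not measured Northpointe data.

\[
\frac{P_{2}}{P_{1}} = \left(\frac{N_{2}}{N_{1}}\right)^{3}
\]

In the ideal case, a fan running at 40% speed draws roughly 0.4³ ≈ 6% of its full-speed power, and at 20% speed roughly 0.2³ ≈ 1%, before accounting for motor and VFD losses and minimum-speed limits.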

Rinderknecht’s energy-efficient building envelope consists of structural steel and metal stud-framed front-end construction for offices, storage, and other non-data rooms, which are served by Carrier Voyager Series rooftop HVAC systems. The data halls are constructed of tornado-proof, 12-in.-thick, precast concrete cores. Roof insulation averages approximately R-36, far surpassing ASHRAE 90.1 building energy code requirements and adding to the facility’s total energy savings.

Rinderknecht also designed a supply plenum and a separate return air plenum that connect to each data hall’s bank of CRACs. Collecting the rising warm air and delivering it to a return plenum shared by the hall’s CRACs is a Rinderknecht innovation.

 

UPTIME INSTITUTE CERTIFICATION

The HVAC section of Uptime certification wasn’t easy. Uptime Institute certifiers had never before seen such a plenum arrangement and air delivery system. Therefore, they required unusual data from EEA, such as calculations of mechanical spine pressurization and modeling of worst-case scenarios of extreme pressurization, airflow, and temperature events. “They were initially quite skeptical of our HVAC approach and required test data that was well beyond typical certification requirements, but ultimately we proved the energy efficiency, airflow uniformity, and performance claims,” said Thorsteinson.

Rinderknecht was also proactive in helping Involta obtain utility rebates for LED lighting, lighting controls, BAS controls, uninterruptible power supplies (UPS), static transfer switches, direct current circuits, Energy Star-rated transformers, and a host of other gear.

Besides Uptime Institute certification and energy efficiency, potential customers are wowed by the visual impact of the unique air dispersion when touring an Involta facility. “Their (DataSox) unique appearance always prompts questions, which is always a good thing,” said Thorsteinson. “Afterward, they typically view them as innovative and smart.”

DATA CENTER IoT-BASED THERMAL OPTIMIZATION

FIGURE 1. Typical IoT architecture.

 

The Internet of Things (IoT) is gaining traction across many industries, including data centers. vXchnge, a provider of carrier-neutral colocation services, actively leverages innovative solutions and engineering best practices to achieve efficient and sustainable operations. Within that focus, vXchnge and Vigilent, a provider of dynamic cooling management systems, engaged in a pilot project at vXchnge’s Chappaqua, NY, facility to improve thermal management and reduce energy spend. This case study describes how an IoT approach was used to manage the existing, multi-vendor cooling infrastructure within the North American colocation data center and highlights the results.

 

CHALLENGE

Data centers across the globe are struggling to free up resources to meet client cooling demands. Doing so requires more control and better access to data for airflow optimization. Legacy design standards provide more cooling than needed, and airflow complexity and IT variability make it difficult to optimize manually as conditions change. The result is wasted energy, lost capacity, and hidden thermal risks. The challenge was to improve efficiency in legacy data centers with mixed, multi-generational equipment and to optimize operations through data-driven insights tied to a data center infrastructure management (DCIM) system. vXchnge’s Chappaqua, NY, facility was selected for the pilot because it houses legacy equipment from varied vendors.

 

GOALS

The main goal was to optimize operations. To achieve the overall goal, a smaller subset of goals was established. These include: adding a layer of intelligence and automated control to reduce manual intervention, enhancing the ability to deliver a guaranteed 100% uptime for SLAs, achieving quantifiable efficiency gains and an attractive ROI, and providing real-time visibility throughout the cooling infrastructure.

 

SOLUTION

vXchnge and Vigilent deployed an IoT-based approach to automatically optimize cooling for energy reduction and improved thermal management. The solution was ultimately selected to improve cooling capabilities within the data center. Vigilent’s solution utilizes machine learning, allowing the system to continuously get smarter. The system exceeded expectations by including a “guard mode” that activates cooling in the event of a system failure or when temperatures exceed a set threshold. Guard mode adds protection to the data center until the mechanical system returns to normal operation, all driven by the automation and learning algorithms.

The IoT system is composed of a wireless mesh network of hundreds of sensors and controllers driven by machine learning software. The Vigilent system leverages the sensor network to automatically create a real-time model of the facility’s thermal environment, mapping airflow and determining the precise cooling influence of every unit, both individually and collectively, at every spot across the data center. The system then takes dynamic control of the cooling units, turning them on and off and ramping fan speeds up and down, to meet pre-specified temperature SLAs in the most efficient manner possible.

Automatic thermal optimization through predictive control measures heat load and cooling equipment efficiency and models the cooling airflow influence of each unit. Using machine learning algorithms, the IoT-based system learns the effects of its control actions and manipulates the cooling equipment on its own, without staff intervention.
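The sketch below is a deliberately simplified, hypothetical illustration of this style of influence-based, closed-loop control. It is not Vigilent’s actual algorithm, and every name, gain, and threshold in it is an assumption made for the example.

```python
import numpy as np

# Hypothetical setpoints (not from the vXchnge/Vigilent deployment)
SLA_TEMP_F = 80.0      # cold-aisle temperature SLA
GUARD_TEMP_F = 85.0    # threshold that triggers "guard mode"

class SimpleCoolingController:
    """Toy influence-based controller: each cooling unit's fan speed is
    nudged in proportion to how strongly it influences the sensors that
    are running warm. A real system would learn the influence matrix
    and unit efficiencies continuously from sensor data."""

    def __init__(self, influence, min_speed=0.2, max_speed=1.0, gain=0.02):
        # influence[i, j] = estimated cooling effect of unit j on sensor i
        self.influence = np.asarray(influence, dtype=float)
        self.min_speed = min_speed
        self.max_speed = max_speed
        self.gain = gain

    def step(self, sensor_temps_f, speeds):
        temps = np.asarray(sensor_temps_f, dtype=float)
        speeds = np.asarray(speeds, dtype=float)

        # Guard mode: any sensor far over SLA -> run every unit at full speed
        if np.any(temps >= GUARD_TEMP_F):
            return np.full_like(speeds, self.max_speed)

        # Positive error = sensor above SLA (needs more cooling);
        # negative error = sensor overcooled (cooling can be trimmed).
        error = temps - SLA_TEMP_F

        # Weight each unit's adjustment by its influence on the erring sensors.
        adjustment = self.gain * (self.influence.T @ error)
        return np.clip(speeds + adjustment, self.min_speed, self.max_speed)

# Example: three sensors, two cooling units
controller = SimpleCoolingController(influence=[[0.8, 0.2],
                                                [0.5, 0.5],
                                                [0.1, 0.9]])
speeds = np.array([0.4, 0.4])
speeds = controller.step(sensor_temps_f=[81.5, 79.0, 78.0], speeds=speeds)
print(speeds)  # unit 0 ramps up slightly; unit 1 trims back
```

In practice, the influence matrix and control gains would be learned and re-learned from the sensor mesh as IT loads move and equipment is added, which is the role the machine-learning layer described above plays.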

 

TESTING AND RESULTS

Testing at the Chappaqua facility was conducted over a six-month timeframe. Actual savings exceeded the projected numbers at each site, and the savings continued even as the environment changed, with intelligent cooling control across equipment of mixed brands and different generations of technology. The testing also uncovered specific insights into the thermal management systems, including exactly where cooling was delivered across the white space, where new IT capacity could be deployed, where additional cooling might be needed to provide sufficient capacity and redundancy, and which cooling units might be underperforming.

The testing delivered some unexpected advantages in the overall thermal management system as well. Dynamically matching cooling capacity to the actual load eliminated the need for manual tuning and adjustments. Analytics make it easier to identify potential issues and automatically resolve hot and cold spots, directly improving facility staff productivity. With automatic closed-loop thermal control, staff no longer manage the thermal environment manually, giving them more time for proactive management and customer service.

Furthermore, non-energy-related benefits were also discovered, including easier capacity planning, increased visibility, and a more stable environment. Because of the Chappaqua facility’s success, the IoT-based approach to thermal management optimization was implemented in two additional vXchnge data center facilities with similar results.

FIGURE 2. Chappaqua project results.

FIGURE 3. Chart of similar results in two additional locations.

 

Ali Marashi, senior vice president of engineering and chief technology officer at vXchnge, is responsible for all engineering, construction, network, and information technology functions for the company. He brings more than 20 years of experience in the development and support of engineering and IT systems.

Prior to joining vXchnge, Ali served as vice president of IBX Ops Engineering for Equinix, where he led the design of data center architecture and all critical infrastructure and control systems for North America. Before Equinix, Ali served as the chief information officer for Switch & Data, where he directed all engineering and IT organizations. Before Switch & Data, Ali served as the chief technology officer for Internap, where he helped define the corporate vision and led the technology engineering and product strategy for the company.

 

Since its founding in 2008, Involta LLC, Cedar Rapids, IA, has continually looked for the most efficient and effective method of cooling for its network of data centers.

For example, its Marion, IA, data center was built with conventional rooftop air handlers and metal ductwork that dispersed air from ceiling registers. A first retrofit switched to metal drops supplying traditional porous fabric ductwork, the type typically seen in open architectural applications such as retail stores and gymnasiums.

A second retrofit laid the foundation for the methods Involta uses in new construction and retrofits today, such as VFDs and the data center-specific DataSox air dispersion Involta collaboratively designed with fabric duct manufacturer DuctSox, Dubuque, IA. The newest location, Involta Northpointe, Freeport, PA, uses DataSox and VFD-equipped CRACs with free cooling options and micro-channel condensers. The CRACs are connected to separate supply air and return air plenums.

The result is a 1.3 PUE, which puts the Northpointe location in the top 5% of efficient multi-tenant colocation data centers nationwide.
