In 2018, former Gartner analyst Dave Cappuccio wrote a blog post titled "The Data Center Is Dead," predicting that 80% of organizations would shut down their traditional data centers by the mid-2020s.

At the time, that was a reasonable call. More organizations were moving their workloads to the cloud and choosing SaaS solutions over on-premises applications. Fast-forward to the present, and many of those same workloads are moving back on-site in reaction to soaring cloud costs, stringent regulatory requirements, and the need for greater visibility and control. Instead of shrinking, many data centers are seeing growing demand.

But building new data centers or expanding existing ones calls for a new approach. Design considerations have changed significantly in the last few years. The adoption of high-performance computing (HPC) and AI applications translates into greater power consumption, and that requires a rethink of cooling and power management. What's more, it's increasingly difficult to predict future capacity requirements.

Into the zone

It no longer makes sense to think of the data center as a monolithic facility housing uniform racks of IT equipment. As data centers become more complex, with diverse workloads requiring different design strategies, they’re increasingly being divided into zones optimized for specific workloads.

For example, HPC and AI deployments tend to be denser than general compute and storage. Rather than building out the entire data center to meet the power and cooling demands of HPC and AI, organizations are grouping those workloads in specific areas of their facilities. Similarly, zones can be configured based on uptime requirements.

Modular data center infrastructure can help facilitate zone-based deployments. Many people think of modular data centers as those deployed in ISO shipping containers, but that's only one type. There are also skid-mounted systems and preconfigured enclosures. Preconfigured enclosures can be shells or self-contained units with built-in power, cooling, fire suppression, and physical security.

The ABCs of ESG

Environmental, social, and governance (ESG) initiatives are also having a significant impact on data center design. Organizations are concerned about the environmental impacts of their business, and data centers are major power hogs, consuming as much as 2% of the world's electricity by some estimates.

Cooling is another primary concern. Traditional cooling systems, such as computer room air conditioning (CRAC) units, consume a lot of power, so some large data centers are moving to water cooling. However, water usage becomes problematic at scale: in 2021, the average Google data center used 450,000 gallons of water daily. Free cooling is an environmentally friendly alternative. Data centers can filter and humidify naturally cool outside air and use mechanical refrigeration only when the ambient air is too hot.
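The free-cooling decision described above can be sketched in a few lines of control logic. This is a minimal illustration, not a real building-management system: the temperature and humidity thresholds, the function name, and the three-mode scheme are all assumptions for the sake of the example.

```python
# Illustrative sketch of an air-side economizer ("free cooling") decision.
# All thresholds here are hypothetical, not vendor or ASHRAE specifications.

def cooling_mode(outside_temp_c: float, outside_rh_pct: float,
                 max_temp_c: float = 24.0, max_rh_pct: float = 60.0) -> str:
    """Pick a cooling mode from current ambient conditions.

    Returns "free" when filtered, humidity-controlled outside air alone
    can cool the facility, "partial" when outside air helps but mechanical
    refrigeration must trim the remainder, and "mechanical" when the
    ambient air is simply too hot to be useful.
    """
    if outside_temp_c <= max_temp_c and outside_rh_pct <= max_rh_pct:
        return "free"        # outside air only: no compressor runtime
    if outside_temp_c <= max_temp_c + 5.0:
        return "partial"     # mix outside air with mechanical cooling
    return "mechanical"      # refrigeration only

print(cooling_mode(18.0, 45.0))   # cool, dry day: free cooling
print(cooling_mode(31.0, 50.0))   # hot day: mechanical only
```

In practice the decision also weighs air quality, dew point, and economizer hardware limits, but the shape is the same: mechanical refrigeration becomes the fallback rather than the default.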

Whether building out a new data center or expanding an existing one, organizations should choose sustainable materials. With smart choices, future data centers will be self-sufficient, carbon- and water-neutral, and have minimal impact on the local environment.

Planning is key

These challenges have upped the ante for data center design planning. It’s no longer advisable to build out a simple shell with a raised floor and start adding infrastructure. Your facility must have the necessary power capacity, redundancy, and security to meet your business needs. Anything less than that and you’re inviting trouble later.

Organizations need to develop a detailed plan before any work begins. Without proper planning, developers are bound to hit obstacles when acquiring land and obtaining zoning permits, which only leads to delays and money wasted on mapping and site elevations. As the famed UCLA basketball coach John Wooden told his players, failing to prepare is preparing to fail.