At its core, a data center is an aggregation of standard components, implemented against a defined set of criteria, that delivers the power and cooling needed to support the varied applications running above its raised floor. The build-out of a facility is a replicable process that benefits from a high level of standardization to maximize the efficiencies of its components, much like the building of a server or even a car. Naturally, this raises the question: “When it comes to data centers, why do people keep trying to reinvent the wheel?” The best way to answer it is to address some mistaken assumptions that lead companies to start data center projects with a blank slate.

1. “We need ‘x’ watts per square foot”

If a company expresses its data-center power requirements in terms of watts per square foot, it is already starting down the wrong path. Square footage, or watts per square foot, is not the way to determine data-center power requirements. Kilowatts (kW) of IT load is the most accurate unit of measurement for data center capacity, and understanding why is essential for any company that intends to expand its operations over time. kW of IT load is the amount of electrical power that must be delivered to drive the facility’s computing devices (servers, for example). As a measure of power and capacity, kW translates most directly into the environment required to support the volume of MIPS and terabytes that the data center is to deliver, both now and in the future. Put plainly, computers do not run on oil or wind; they run on electricity, both today and 20 years from now. There is no functional obsolescence of kW of IT load: as each successive generation of computing and storage gets more efficient per kilowatt, the data center gets more MIPS and terabytes out of the same level of kW of IT load.

The kW-of-IT-load method of determining and expressing data center needs forces a company to view its power requirements based on the footprint of all the components that will reside above the raised floor. It also counterbalances the tendency to overestimate power needs that comes from assuming every component requires the same level of power; a patch panel obviously does not draw what a server does. Using kW of IT load as the standard unit of quantification lets both the end-user company and its supplier make decisions on a common cost-per-kW basis.
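To make the arithmetic concrete, here is a minimal sketch of the kW-of-IT-load tally described above. The device counts, per-device power draws, and annual facility cost are hypothetical assumptions chosen purely for illustration, not figures from this article.

```python
# Illustrative sketch: expressing capacity as kW of IT load rather than
# watts per square foot. All counts and power figures are assumptions.

# (device, count, per-device draw in watts)
inventory = [
    ("1U server",      400, 350),   # compute dominates the IT load
    ("storage array",   20, 1200),
    ("network switch",  40, 150),
    ("patch panel",     60, 0),     # passive gear draws no power at all
]

# Total IT load in kW: the sum of every raised-floor component's draw.
it_load_kw = sum(count * watts for _, count, watts in inventory) / 1000.0
print(f"Total IT load: {it_load_kw:.1f} kW")

# A common basis for comparing options: annual cost divided by kW of IT load.
annual_facility_cost = 2_500_000  # hypothetical annual cost, in dollars
print(f"Cost basis: ${annual_facility_cost / it_load_kw:,.0f} per kW per year")
```

Note how the patch panels occupy floor space but contribute no load: sizing by square footage would count them, while sizing by kW of IT load does not.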

2. “I know the best way to build my data centers”

Actually, companies usually don’t. Assuming that the average life of a data center is 15 to 20 years, even the biggest companies build a new data center only every three to five years. Innovations in areas like power efficiency, equipment, cooling methodologies, and design and construction practices appear on a faster schedule than once every half-decade, so it is difficult, if not impossible, for occasional builders of data centers to stay current. Compounding the problem, personnel in most companies focus their efforts on raised-floor applications and therefore have little experience in areas like site acquisition, design and construction, or operations. Thus, most companies can’t build a data center as quickly, effectively, or efficiently as a third-party provider that builds out megawatt-sized data centers every year. The key is to have a business person leading the team of experts in the required supporting disciplines.

[Figure: The timeline for data center construction can be long and drawn out; delays in site selection and design can add more than a year to the expected timeline.]

3. “My data center requirements are unique”

Are they? While there are certainly those who require items like walls as thick as a missile-hardened silo’s, the vast majority of companies seeking new data center facilities require the same basic deliverable. It is not a company’s data-center infrastructure requirements that are unique; it is the applications the company intends to run. The degree of mission criticality of those applications (and today, what application isn’t mission critical?) is what should drive the requirements for power and cooling. Assessing that criticality helps a company determine its risk tolerance, which in turn sets the parameters for defining and implementing its cooling and power requirements.

4. “Building my data center will require ‘special’ processes”

Unless someone has developed a new way to weld a pipe or tighten a lug nut, this probably isn’t the case. In fact, the biggest drawback for companies that opt to “do it themselves” is that they have no documented standards. The lack of formal standards means that every aspect of the design, construction, and ongoing operations is plagued by delays from never-ending engineering and design-cost decisions. Digital Realty Trust has built hundreds of data centers all over the world, so all of our processes have been field-tested and fully documented. Thus, Digital Realty Trust is able to use them consistently across projects to both accelerate delivery (our average time from commencement of a project to customer turnover is 26 weeks) and improve cost effectiveness. Compare that with the 18 to 24 months an individual firm typically requires to build out a data center.

5. “Building a data center requires me to address a lot of technical questions”

Although data centers are a standard infrastructure component, no one disputes that there are a number of technical considerations to address during the facility’s design and construction, or that operational issues will follow its completion. However, if a company views the build-out of its new data center as the end result of a myriad of technical choices, it is viewing the entire project from the wrong perspective. While there are a variety of decisions to make during the endeavor, they are not technical in nature; they are all actually business decisions. A failure to view data center-related decisions from a business perspective may result in a site that is poorly planned, over-designed, over budget, and/or inefficiently operated.

Achieving key data center operational objectives requires understanding the relationship among the elements of design, cost, operations, and risk. Digital Realty Trust calls these the “Special Forces” of the data center, and the need to balance them is the guiding principle for the design, construction, and operation of the resulting facility. Balancing them begins with determining the level of risk aversion a company is comfortable with and then identifying the most cost-efficient solution to incorporate into the data center. Determining the acceptable price of risk mitigation comes down to each company’s answers to five questions:
  • What risks are we concerned with, and how likely are they?
  • How can we design our facility to mitigate those risks?
  • How can facility operations mitigate them?
  • What are the capital and operating costs of the various approaches?
  • What investment is the business prepared to make to mitigate the risk?
After quantifying its level of risk aversion, a company is better able to weigh the business-related trade-offs involved in ensuring the maximum acceptable level of uptime for its data centers. Whether the final solution is construction-based (permanent placement of a back-up generator, for example) or operational (keeping a spare contract unit that is wheeled in as necessary), seemingly “technical” decisions actually reflect and address the company’s business requirements.
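To show how those five questions reduce to a business calculation, here is a minimal sketch that compares the annualized cost of two mitigation options against the expected annual loss from the risk itself. Every probability and dollar figure below is a hypothetical assumption for illustration, not data from any actual facility.

```python
# Illustrative sketch: pricing risk mitigation as a business decision.
# All probabilities and dollar figures are hypothetical assumptions.

outage_probability = 0.05     # assumed chance per year of an extended outage
outage_impact = 4_000_000     # assumed business loss if the outage occurs ($)
expected_annual_loss = outage_probability * outage_impact

# Annualized cost of each mitigation approach.
options = {
    # construction-based: permanent back-up generator (capex spread over life)
    "permanent generator": 1_200_000 / 15 + 30_000,  # 15-yr life plus upkeep
    # operational: contract for a spare unit wheeled in as necessary
    "spare contract unit": 60_000,                   # annual contract fee
}

print(f"Expected annual loss, unmitigated: ${expected_annual_loss:,.0f}")
for name, annual_cost in sorted(options.items(), key=lambda kv: kv[1]):
    verdict = "worth it" if annual_cost < expected_annual_loss else "too costly"
    print(f"  {name}: ${annual_cost:,.0f}/year -> {verdict}")

# A fuller model would also weight each option by the residual risk it
# leaves behind; the point here is only that the comparison is financial.
```

Under these assumed numbers, both options cost less per year than the expected loss they guard against, and choosing between them is a capital-versus-operating decision rather than a technical one.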

Summary

Data center capabilities are an increasingly important element of companies’ business strategies. Unfortunately, this escalation in importance continues to foster a proprietary mindset within many firms, which rely on arcane and outdated logic to justify the need to build their own “unique” data centers. A data center is a standard infrastructure component. As such, it can typically be designed, built, and operated more expeditiously, efficiently, and cost-effectively by an experienced third-party provider that focuses exclusively on the delivery and operation of the facility. This division of labor allows end users to focus on what is most important to them: the administration of their mission-critical applications.
