Kevin Heslin, editor of Mission Critical, elicited some interesting opinions about the state of the data center industry from Cyrus Izzo in this short interview. Cyrus is the co-president of SH Group, Inc. and of Syska Hennessy Group, Inc. He is the vice president of the National 7x24 Exchange and sits on its board of directors. He also currently serves on the boards of the ACEC Metro NYC Chapter and the New York Building Congress, and is a member of the IEEE and CoreNet Global.

KH: What do you see as the key trends driving the industry?

CI: Everyone is trying to do more with less. That is the reality of today's business environment, and I cannot think of a market sector that has not been affected by the new economic belt-tightening. On the operational side, that translates into organizations functioning with fewer IT, operational, and facilities staff members. They are also focused on improving their data center efficiency, so there are trends toward optimizing power usage effectiveness (PUE), increasing critical load density, and reducing resource consumption, i.e., environmental responsibility.
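For reference, PUE is total facility energy divided by the energy delivered to the IT equipment, so a value closer to 1.0 means less overhead. Here is a minimal sketch of that calculation in Python; the meter readings are hypothetical and only illustrate the ratio.

```python
# Minimal sketch: computing PUE from metered energy, using illustrative numbers.
# PUE = total facility energy / IT equipment energy (dimensionless, >= 1.0).

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness over a given metering period."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual readings for a single facility:
total_kwh = 5_000_000   # everything: IT load, cooling, power distribution, lighting
it_kwh    = 3_300_000   # IT equipment only
print(f"PUE = {pue(total_kwh, it_kwh):.2f}")   # ~1.52; lower is better, 1.0 is ideal
```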

The most important trend we see coming out of this economic climate is an increased awareness of TCO [total cost of ownership]. Whereas in the past most markets focused on the front-end CAPEX [capital expenditure] of data centers and mission-critical projects, now there is much more discussion about OPEX [operational expenditure] over the long term.

 

KH: TCO is not a particularly new concept. Why do you think it is suddenly becoming more important?

CI: You are right. TCO is not a new concept. But it has become a critical consideration for people wrestling with the challenge of managing their operations within restricted budgets. Owners genuinely understand that if they manage OPEX closely and balance it with smart CAPEX, the long-term financial impact will be optimized.

In today's regulatory environment, we anticipate that the economy will remain pretty much in its current state for at least two more years. Those of us in the industry need to face the fact that this is the new 'normal' our business clients have to function in. So the challenge they face becomes, "How do you scale back your team and operations while maintaining a robust platform focused on your customers?"

As pressure mounts on all sides, from customers, managers, and boards, to cut out any costs that are not strictly necessary, organizations are getting creative. They are demanding flexibility and modularity so they can expand their facilities incrementally in response to market conditions. That means revisiting location impacts [site selection], energy costs, and manpower costs with a strategic eye on the bottom line, so they can make informed decisions based on all of those TCO factors.

 

KH: How does the trend to lower TCO affect IT operations?

CI: The trend toward lower TCO is pushing CIOs and CTOs to consider every option on the table for IT operational support, so that they in turn can make the right decisions about investing in their IT infrastructures going forward. In fact, that is the conversation going on right now at the highest levels of almost every organization: "Should we build, operate, and own our own facility, or does it make more sense to outsource and colocate our data center?" The largest data center capital expenditure is wrapped up in site infrastructure. When you add that to operating expenses and energy costs, it makes up about 70 percent of the total cost, with the balance being the actual IT equipment.
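To make that split concrete, here is a rough sketch in Python; every dollar figure is hypothetical and serves only to illustrate the roughly 70/30 breakdown between site infrastructure plus operating and energy costs on one side and IT equipment on the other.

```python
# Rough TCO split sketch using hypothetical figures, illustrating the ~70/30
# breakdown described above. All dollar amounts are invented for illustration.

tco_components = {
    "site_infrastructure_capex": 40_000_000,  # building, power, cooling plant
    "operations_opex":           15_000_000,  # staffing and maintenance over the life cycle
    "energy_opex":               15_000_000,  # utility costs over the life cycle
    "it_equipment_capex":        30_000_000,  # servers, storage, network
}

total = sum(tco_components.values())
non_it = total - tco_components["it_equipment_capex"]
print(f"Non-IT share of TCO: {non_it / total:.0%}")                               # ~70% here
print(f"IT equipment share:  {tco_components['it_equipment_capex'] / total:.0%}")  # ~30% here
```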

More and more organizations are recognizing the value of colocation. It is a tactic that frees a limited IT staff to focus on project-specific work rather than operational support of the data center. Logistically, outsourcing is a very different business model, one that takes an organization's data center investment off the balance sheet.

Other organizations will rely on cloud computing; when their business model outstrips their ability to provide capital outlay for new infrastructure, they can leverage cloud platforms and services to maintain their business momentum. Again, IT staff can focus on product and services with infrastructure effectively outsourced to a third party.

CIOs and CTOs, in mapping out IT operations, have to forecast their need for expanding capacity over the whole life cycle of the facility. To the extent that any forecast can be accurate, it helps benchmark the costs of the various cloud, colocation, and data center ownership scenarios.
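As an illustration of that kind of benchmarking, the sketch below compares three scenarios against an assumed capacity forecast. The growth rate, unit costs, and the cost models themselves are all invented for the example; a real study would use quoted colocation rates, actual cloud pricing, and a proper build estimate.

```python
# Sketch of benchmarking build-own, colocation, and cloud scenarios against a
# capacity forecast. Every number and cost model here is hypothetical; the point
# is that a life-cycle forecast lets the options be compared on TCO.

YEARS = 10
capacity_kw = [200 * (1.15 ** y) for y in range(YEARS)]   # assumed 15% annual IT load growth

def build_own(kw_by_year):
    capex = 12_000 * max(kw_by_year)              # size the facility for peak load up front
    opex = sum(1_200 * kw for kw in kw_by_year)   # staffing, maintenance, energy per kW-year
    return capex + opex

def colocation(kw_by_year):
    return sum(2_500 * kw for kw in kw_by_year)   # pay roughly per kW-year as you grow

def cloud(kw_by_year):
    return sum(3_200 * kw for kw in kw_by_year)   # higher unit rate, no capital outlay

for name, model in [("build-own", build_own), ("colocation", colocation), ("cloud", cloud)]:
    print(f"{name:>10}: ${model(capacity_kw):,.0f} over {YEARS} years")
```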

From the IT side, we are seeing that industries are following a variety of strategies, based on their experience and the demands of their operations. For example, the financial industry has deep roots in the build-own-operate model; they are simply scratching the surface of outsourcing.

We have all seen that the health-care industry has been under pressure from EMR (electronic medical records) regulations, so these organizations are relative newcomers to the IT game. Their internal staff may not yet have the skills necessary to back up their data and run a data center efficiently and effectively. Given the criticality of their IT needs, it makes sense in that case to review the outsource model closely.

On the other hand, telecommunications firms have owned and operated their own data centers from day one. They would be hard pressed to find a good reason to outsource or colocate when their business model and business culture have been so reliant on autonomy.

There has recently been a fair amount of discussion about a potential shortage of colocation facilities. Data published by Ted Ritter at Nemertes Research indicate that available colocation space will more than double (113 percent) over the next several years, but demand will outstrip it unless colocation service providers accelerate their current rate of expansion.

 

KH: How do lower TCO and its effect on overall IT operations affect the design of modern data centers?

CI: Savvy data center owners understand that optimizing TCO begins with a detailed site selection search at the outset, when they factor in all of the local site conditions that could be leveraged during design. Choosing the right site, one that offers optimal climatic or environmental conditions, means more design options as the project gets under way and, ultimately, greater potential to realize significant TCO savings.

Once design is under way, exploring all options, so that design choices start with passive systems before moving to the active strategies that are available, allows us to design systems that embrace the environment. Given what we in the industry know about energy-efficient design and the modeling tools now available to us, it is really a matter of data center professionals committing the time and resources necessary to realize all of the OPEX opportunities.

The future state, if we do our job well on the design side, will come when we can remove chillers from facilities entirely by leveraging the free-cooling options available to us. We are moving toward that future state fairly rapidly, as tools and knowledge drive innovation.
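As a simple illustration of the screening an energy model supports, the sketch below estimates how many hours a year outside air is cool enough for airside economization. The 24 °C threshold and the synthetic climate data are assumptions; an actual study would use the site's hourly weather file and the full psychrometric limits.

```python
# Minimal sketch: estimating annual free-cooling (economizer) hours.
# The threshold and the synthetic temperature data are stand-in assumptions.

import random

random.seed(0)
hourly_drybulb_c = [random.gauss(mu=12, sigma=9) for _ in range(8760)]  # stand-in climate data

FREE_COOLING_LIMIT_C = 24.0
free_hours = sum(1 for t in hourly_drybulb_c if t <= FREE_COOLING_LIMIT_C)
print(f"Estimated free-cooling hours: {free_hours} of 8760 ({free_hours / 8760:.0%})")
```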

 

KH: What skills do data center professionals need to cope with the changing customer and end user requirements that change how we build, deploy, and use data centers?

CI: It is a whole new landscape for a data center professional. Customers and end users continue to be pushed by management and clients to do things smarter, faster, and cheaper. It is now a given that there is zero tolerance for downtime; with pay-per-view, one-click purchases, downloads, and financial transactions, customers expect things to happen in a nanosecond. The reality is that you can lose customers instantly if there is a glitch, and we all know the focus is on positioning yourself for emerging opportunities and retaining the clients you have.

As data center design professionals, we have to be nimble enough to understand that uptime reliability is the new baseline. It is a given. But there is also the expectation for robust, resilient, and reliable performance in data centers.

Now it is up to the design community to take it to the next level. ASHRAE's expansion of the allowable thermal envelope for IT equipment opened the door to new opportunities for efficient design without compromising reliability. That means using all the tools at our disposal. Putting all the tools in our arsenal together makes it possible for us in the industry to deliver an accurate model, which contractors and vendors need before the first shovel hits the ground. Ultimately, on the design side, we are obligated to provide owners, and their facility teams, with a robust model that helps them operate and maintain their facility over its lifetime with less staff, at lower cost, and with the tools they need to succeed.
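For context on that widened envelope, here is a quick sketch of the kind of inlet-temperature check it enables, assuming the commonly cited ASHRAE recommended range of 18-27 °C and Class A1 allowable range of 15-32 °C; the sample readings are hypothetical.

```python
# Sketch: classifying rack inlet temperatures against ASHRAE envelopes.
# Assumes the widely cited recommended range (18-27 C) and Class A1 allowable
# range (15-32 C); the inlet readings below are made-up sample data.

RECOMMENDED = (18.0, 27.0)
ALLOWABLE_A1 = (15.0, 32.0)

def classify(inlet_c: float) -> str:
    if RECOMMENDED[0] <= inlet_c <= RECOMMENDED[1]:
        return "recommended"
    if ALLOWABLE_A1[0] <= inlet_c <= ALLOWABLE_A1[1]:
        return "allowable"
    return "out of envelope"

sample_inlets_c = [21.5, 26.8, 29.4, 31.0, 33.2]   # hypothetical rack inlet readings
for t in sample_inlets_c:
    print(f"{t:5.1f} C -> {classify(t)}")
```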