In today’s data center environment, managers are tasked with packing maximum computing power into the smallest possible footprint or, in many cases, into the existing (read: undersized) footprint.

This trend sets up a conversation between CIO and operations manager reminiscent of the iconic porridge scene from Oliver Twist: “Please, Sir, I want some more.” … “What!?”

More often than not, growing into a bigger footprint simply isn’t an option. Data center managers must make do with existing space, and that space is valuable. At the extreme, Google was estimated to have spent nearly $3,000 per square foot at its North Carolina facility in 2007.1 More conservative estimates put the average data center square foot at around $1,000. According to Anixter’s white paper, “Data Center Design and Infrastructure Estimates,” the price per square foot ranged from about $450 in a Tier I data center to $1,100 in a Tier IV data center.2 Simply put, usable floor space is valuable, and it is imperative to use it efficiently.

 

MOORE MEANS MORE

Moore’s law predicts that the number of transistors on an integrated circuit will double roughly every two years: circuits keep getting denser and performance keeps climbing. The effect is plainly visible in the active equipment deployed in the data center. The GBIC transceiver, first produced in 1995, became widely used in switches and routers. Fast forward a few years, and a new hot-pluggable module roughly one-third to one-half the size of a GBIC was introduced: the small form-factor pluggable (SFP) module.3 The SFP transceiver was designed for 1 Gbps transmission, SFP+ for 10 Gbps, and QSFP transceivers now support 40 and 100 Gbps, ushering in greater speeds in a smaller footprint.

Data center managers are certainly taking advantage of these developments. According to Uptime Institute’s Inaugural Data Center Industry Survey, about 60% of respondents expect to upgrade, renovate, or build a new data center within the next three years.4 Higher-density equipment and higher speeds bring the need for more connections, often within the existing data center footprint. So how can a manager best utilize his or her facility while keeping operating expenditures (OPEX) in check? Look to Layer 1, the physical layer.

 

OPTIMIZING SPACE WITH LAYER 1

Equipment manufacturers have kept pace with Moore’s law in plenty of ways, packing more computing power into smaller footprints. New server designs and optimized power and cooling solutions have contributed to higher densities in the data center, while new media such as high-definition (HD) video on personal devices continues to drive demand. Managers must now look to the infrastructure itself for new ways to gain space: increasing physical rack density, using open space outside the rack to free up space within it, and “going vertical.” They must do so while keeping OPEX low and while weighing the adverse effects of added cable density on the airflow needed for passive cooling. With the average cost per square foot hovering around $1,000, space optimization is imperative, and choosing the right passive products at Layer 1 is an excellent place to start.

 

FOUR WAYS TO OPTIMIZE SPACE AT LAYER 1

  • Make better use of the rack unit. Rightsize connector port densities by planning for future additions to the network. As fiber densities climb from 72 to 96 to 144 LC connections per rack unit, select a density you can grow into within your footprint. Note also that LC (2-fiber) connectors are giving way to MPO (multi-fiber, typically 12- or 24-fiber) connectors as 10 Gigabit Ethernet (10 GbE) migrates to 40 GbE and 100 GbE, so deploying parallel optics is an option to support future needs (a rough worked example of the fiber math follows this list).

For copper deployments, solutions are available that double rack unit (RU) densities by fitting 48-port patch panels into a single RU. High-density mounting options that support mixed media allow copper and fiber to share the same rack unit. Mixed-media patch panels provide flexibility and scalability by letting media be changed or connectivity be added as needed, eliminating the need to know exact port counts or connector types up front. Maximizing connector density in the RU frees valuable square footage and opens space in the rack for active devices.

  • Use cable management to your advantage. There are numerous approaches to optimizing rack space. First, consider the style of patch panel being deployed. Angled panels route cords directly to the vertical cable manager, eliminating the need for horizontal cable management and freeing up valuable RU space. Selecting an angled patch panel with mounting ears that recess the panel into the rack can also reduce how far patch cords protrude from the front of the rack or cabinet. Where horizontal cable management is preferred, a zero-RU horizontal manager can help route cables to the vertical cable manager without consuming rack units.

  • Deploy smaller-diameter cables. Smaller-diameter cables support higher densities while improving airflow. Copper modular patch cords built with 28 AWG conductors can support up to Category 6A channel performance in high-density applications. These reduced-diameter cords (RDCs) ease congestion and improve airflow through cabling pathways. For example, Category 6A RDCs occupy less than 66% of the space required by similarly performing MC6A cords, which allows more than 1.5 times as many cords to be deployed in the space required for 26 AWG cords (the second sketch after this list walks through the arithmetic). RDCs also have a much smaller minimum bend radius of just over 0.77 in., which helps when cable routing is restrictive or in very high-density applications.

On the fiber side, shrinking cable diameters, from 3 mm to 2.4 mm Uniboot designs and now 2 mm (with talk of reducing to 1.6 mm), have certainly helped increase fiber cable density. Even so, the data center is being squeezed to produce more with less.

  • Patch outside of the rack or cabinet. Grow the physical footprint by adding cable density and rack space within the same floor square footage. Overhead racking solutions can add up to 16% more rack space (up to 8 RUs) without taking any additional floor space. Select overhead racking that is flexible enough to mount in multiple locations, including on a cable tray such as Cablofil, on tubular runway, or hung from the ceiling on threaded rods. Overhead racking also provides a patching solution for specialized storage area network (SAN) cabinets that have no standard patching field.

Consider a cabinet solution that offers patching locations outside of the traditional RU space. Options are available with side patching locations, allowing for up to an additional 8 RU of mounting space outside of the existing patching field.
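
To make the rack-unit arithmetic in the first item concrete, the short Python sketch below compares duplex-LC 10 GbE links against parallel-optic 40 GbE links in a single rack unit. The panel capacities are illustrative assumptions for the example, not vendor specifications, and 40GBASE-SR4 is used as the representative parallel-optic application.

    # Rough sketch of per-rack-unit density arithmetic.
    # The panel capacities below are illustrative assumptions, not vendor specs.

    LC_FIBERS_PER_RU = 144      # high-density LC panel: 144 fibers = 72 duplex ports
    MPO12_PORTS_PER_RU = 48     # assumed MPO-12 adapter capacity in one rack unit

    # Duplex LC: each 10 GbE link uses 2 fibers (one transmit, one receive).
    links_10g = LC_FIBERS_PER_RU // 2

    # Parallel optics: 40GBASE-SR4 runs over an MPO-12 connector,
    # using 8 of its 12 fibers (4 transmit + 4 receive).
    links_40g = MPO12_PORTS_PER_RU

    print(f"10 GbE duplex-LC links per RU:  {links_10g}")            # 72
    print(f"40 GbE MPO-12 links per RU:     {links_40g}")            # 48
    print(f"Aggregate bandwidth, LC panel:  {links_10g * 10} Gbps")  # 720 Gbps
    print(f"Aggregate bandwidth, MPO panel: {links_40g * 40} Gbps")  # 1920 Gbps

Even with fewer physical ports in the rack unit, the parallel-optic option carries more aggregate bandwidth, which is the trade-off the migration from LC to MPO connectivity is meant to exploit.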
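
The reduced-diameter cord figures in the third item can be sanity-checked the same way; the only input is the space ratio quoted above, and the result lines up with the claim of more than 1.5 times as many cords.

    # Back-of-the-envelope check of the reduced-diameter cord (RDC) claim.
    # If each RDC needs about 66% of the pathway cross-section of a comparable
    # larger-gauge cord, the same pathway holds roughly 1 / 0.66 of the
    # original cord count.

    SPACE_RATIO = 0.66                 # RDC space relative to a comparable cord
    density_gain = 1 / SPACE_RATIO

    print(f"Relative cord count in the same pathway: ~{density_gain:.2f}x")  # ~1.52x

The same logic applies to the shrinking fiber cord diameters noted above: pathway capacity scales roughly with the inverse of each cord’s cross-sectional area.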

 

CONCLUSION

In today’s data centers, space is at a premium. As speeds continue to increase and densities skyrocket, managers will continue to be tasked with optimizing the expensive square footage in their facilities. Over time there have been major advancements in the way networks are deployed, powered, cooled, and managed, but density challenges remain a constant. The next time the request for “more” is posed, consider looking to Layer 1 and passive infrastructure solutions to optimize the use of the existing footprint.

 

FOOTNOTES

  1. Data Center Knowledge. “Google Data Centers: $3,000 A Square Foot?”

  2. Data Center Journal. “The Price of Data Center Availability”.     

  3. Fiber Optic Components. “The Evolution and Trends of Fiber Optic Transceivers”.

  4. Uptime Institute. Inaugural Data Center Industry Survey.