Lawrence Livermore National Laboratory (LLNL) is world-renowned in its role as a premier applied science laboratory. Since 1952, this Northern California facility has operated as part of the U.S. National Nuclear Security Administration and has been home to a nearly inexhaustible list of discoveries and innovations.

But behind LLNL’s cutting-edge research activities is the “business” end of things: administrative offices, staff, and an IT infrastructure much like that of any major, information-centric enterprise. Recently, LLNL designed and built a new data center to support these IT needs. The facility was designed not only to serve LLNL’s own administrative IT needs but also to act as a co-location site for enterprises wishing to move their data center operations to this secure, high-tech facility.
 

This figure and the central patching configuration represent a scaled-down version of the basic physical-layer configuration employed at the LLNL co-location facility, with central network cabinets each serving multiple server cabinets. The images show front and back views of how the configuration would appear with tenant equipment installed.

 

“It was an interesting challenge,” explained Jim Herbert, LLNL’s data center cabling manager. “We had to design an entire data center based on the anticipated needs of unspecified future clients. High-availability business processing and data capabilities were the central criteria, but we knew we had to build in a great deal of scalability and managed adaptability.”

Mindful of these primary requirements, LLNL commissioned the California Data Center Design Group (CDCDG) to design the facility, implementing cutting-edge “modular” design practices in anticipation of the facility’s internal growth potential. “The modular design incorporates an infrastructure backbone that can be expanded rather than rebuilt once existing load or growth potential is realized,” stated CDCDG’s Hughes. “Flexibility and growth were at the forefront of the mechanical, electrical, and telecommunication design concepts.”
 

A top view of the setup shown in the previous illustrations, indicating how the trunk cables run from patch panels in the networking cabinet to panels in the server cabinets. This trunk cable and panel setup is the pre-installed core of the "move-in ready" configuration.

 

CDCDG’s modular design met other LLNL criteria as well. Once installed, the infrastructure was simple enough that small-scale installation phases and ongoing moves, adds, and changes (MACs) could be performed by in-house LLNL IT staff. Although an outside contractor performed the largest installation phases, the facility’s stringent security clearance procedures made outside support for subsequent MAC work impractical.

Additionally, because the data center was designed as a co-location facility whose services would essentially be sold to the executives of prospective tenant organizations, its visual appeal had to match its performance capabilities. “Aesthetics were a big part of the design challenge,” added Herbert. “A prospective tenant can show up at any time, and the facility has to appear every bit as organized as it actually is. No matter how many adjustments we make to the cabling plant, it has to look as neat as the day we opened the doors.”
 

Pre-engineered Problem Solutions

While the long-term goal for the project was to develop an infrastructure that could be largely managed by internal staff, LLNL and CDCDG worked closely with Arkatype, a Laguna Hills, CA-based infrastructure consulting and installation firm, during the initial implementation phases. “LLNL’s critical need for reliable performance was actually very straightforward. The challenge was a reliable cabling plant that could be easily moved and changed by internal staff without jeopardizing system performance,” explained Arkatype’s Michael Cantrell. “While the staff was highly technical, a system that required them to perform time-consuming and craft-intensive field terminations introduced a likely failure point.”
 

 

After reviewing the CDCDG design as well as LLNL’s specific goals and needs, Cantrell suggested pre-engineered cabling solutions for all permanent data center links. After a thorough review of the available options, LLNL agreed that horizontal channels would be supported by Siemon Premium 6 pre-terminated copper trunking cable assemblies, with backbone duties handled by Siemon’s 10Gb/s-capable XGLO plug-and-play fiber optic cabling system.

Siemon copper trunk cables consist of six individual cabling channels, each terminated at both ends with MAX outlets. These channels are contained in an overall industrial mesh sheath, which protects and organizes the cabling during installation and later MAC work. Utilizing individual outlets, Siemon trunks present a smaller pulling profile than bulky cassette-based versions, allowing installation in tight pathways and smaller cabinet openings. The individual modules then simply snap into a wide array of Siemon MAX patch panel solutions.

The XGLO 10Gb/s plug-and-play fiber solution utilizes a combination of pre-terminated, tested fiber modules and simple MPO fiber connectivity. Up to 12 fiber connections can be deployed quickly by plugging a single MPO connector into an XGLO plug-and-play module and snapping the module into any of Siemon’s fiber optic enclosures. Individual plug-and-play modules can support up to 24 connections with two MPO connectors.
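As a rough illustration of the capacity math described above, the sketch below estimates how many copper trunk assemblies and plug-and-play fiber modules a single tenant cabinet might require. The per-trunk and per-module figures (six channels per copper trunk, 12 fiber connections per MPO, 24 per fully populated module) come from the descriptions above; the port counts and function names are hypothetical assumptions for illustration, not LLNL’s actual planning tool.

```python
import math

# Capacities as described in the article (assumed constant per assembly):
CHANNELS_PER_COPPER_TRUNK = 6     # six MAX-terminated channels per trunk assembly
CONNECTIONS_PER_MPO = 12          # fiber connections deployed per MPO connector
CONNECTIONS_PER_XGLO_MODULE = 24  # plug-and-play module populated with two MPOs


def copper_trunks_needed(copper_ports: int) -> int:
    """Trunk assemblies required to serve a cabinet's copper port count."""
    return math.ceil(copper_ports / CHANNELS_PER_COPPER_TRUNK)


def xglo_modules_needed(fiber_ports: int) -> int:
    """Fully populated plug-and-play modules required for a fiber port count."""
    return math.ceil(fiber_ports / CONNECTIONS_PER_XGLO_MODULE)


# Hypothetical tenant cabinet: 48 copper ports and 24 fiber connections
print(copper_trunks_needed(48))   # -> 8 trunk assemblies
print(xglo_modules_needed(24))    # -> 1 module (two MPO connectors)
```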
 

A Best Practice Approach

These pre-engineered cabling solutions met all of LLNL’s core needs, including the critical need for high availability and performance. The Premium 6 copper assemblies used in the data center’s horizontal channels are factory terminated and tested, with full test reports included with each assembly. Likewise, the XGLO plug-and-play modules and connectors are fully tested and performance-validated before leaving the factory.
 

 

The high-quality factory terminations address both LLNL’s need for performance and its need for simplicity. By eliminating on-site terminations, they removed the performance variability inherent in field terminations, which also require highly trained technicians to ensure performance. With pre-engineered cabling, internal LLNL IT staff would be able to deploy high-performance permanent links simply and quickly.

Beyond eliminating field terminations, the copper trunking cables and fiber plug-and-play modules offered other benefits in deployment simplicity and speed. Both product sets follow a “made-to-fit” approach: LLNL was able to order the exact lengths and configurations required, dress them into their pathways, and plug them in. This significantly reduced on-site cable installation time and disruption, shaving about 75 percent off traditional field-terminated installation time.

Because the data center is a co-location facility, the modular configuration is expected to provide future management benefits as well. Servicing tenants’ varying connectivity needs will require a great deal of flexibility and scalability in the cabling plant. The modular links provided by pre-terminated solutions can easily be moved to wherever connectivity is required, without re-termination. And in the likely event that additional channels are required, new trunking cables can be added with minimal disruption, using internal LLNL resources, as sketched below.
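To suggest how such additions could stay documented as tenants come and go, here is a minimal sketch of a patching-record structure. The cabinet and trunk identifiers, class names, and record layout are hypothetical assumptions for illustration; this is not a description of LLNL’s actual management system.

```python
from dataclasses import dataclass, field


@dataclass
class TrunkAssembly:
    """One pre-terminated trunk: network-cabinet panel to server-cabinet panel."""
    trunk_id: str
    network_cabinet: str
    server_cabinet: str
    channels: int = 6  # six channels per copper trunk assembly (per the article)


@dataclass
class CablePlant:
    trunks: list[TrunkAssembly] = field(default_factory=list)

    def add_trunk(self, trunk: TrunkAssembly) -> None:
        # Adding capacity is a record update plus a plug-in -- no re-termination.
        self.trunks.append(trunk)

    def channels_serving(self, server_cabinet: str) -> int:
        # Total pre-terminated channels currently landed in a given server cabinet.
        return sum(t.channels for t in self.trunks if t.server_cabinet == server_cabinet)


# Hypothetical example: two trunks added for a new tenant cabinet
plant = CablePlant()
plant.add_trunk(TrunkAssembly("T-001", "NET-A1", "SRV-B3"))
plant.add_trunk(TrunkAssembly("T-002", "NET-A1", "SRV-B3"))
print(plant.channels_serving("SRV-B3"))  # -> 12
```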

Moreover, LLNL feels that pre-terminated links will assist in the long-term management of the cabling plant. Because all data center links will be deployed consistently with a common product set, pathways are far less likely to become disorganized through ill-managed growth in individually field-terminated links. Poorly planned MACs, most often the result of unchecked individual field-terminated channels, are an extremely common source of data center cable management issues. Troubleshooting, channel tracing, and the orderly management of MACs are all simplified with a pre-engineered solution.

Along with simplified management, the Siemon assemblies helped LLNL create a consistent aesthetic appeal, a benefit of significant importance in a co-location facility. LLNL’s IT staff will, in essence, sell the benefits of the data center to prospective tenants, many of whom may not be IT experts. According to Herbert, “Nice and neat goes a long way. It makes it easy to communicate the quality of the facility to decision makers who may not be well-versed in data center infrastructure.”
