Whether you manage a Tier IV data center or have never bothered with formal certification, a good way to think about how you provide services revolves around infrastructure, software, people, procedures, and data. These are the components the American Institute of CPAs (AICPA) uses for certification. If you are SOC 2 certified, you are likely very familiar with this framework. But how do these components apply differently in data centers of different scales? Is it even feasible for a small data center to think in these terms? I argue that this framework is a useful way to think about managing your data center and its security better, no matter what size it is.

 

INFRASTRUCTURE

Infrastructure refers to the server hardware, network equipment, and storage that your data center runs on. The easiest way to think about it: if it’s physical, it’s probably infrastructure. This can also include the facilities themselves, but for the purpose of this discussion, I’ll stay focused on the IT infrastructure.

When managing the physical infrastructure, it’s critical to have an accurate inventory. There are several tools that can help with this, and many people are also still managing inventory via spreadsheets. As your data center grows, you’ll do yourself a big favor by investing in automation tools. The key data to track in an inventory are the exact model number and installed parts, serial numbers for the main chassis and any optionally installed components, internal asset numbers, key vendor end-of-life milestones, maintenance contract information, and precise location information. These are the bare minimum items needed to manage all of your equipment.
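
To make this concrete, here is a minimal sketch of what an inventory record covering those fields might look like, written as a Python data class. The field names are illustrative assumptions, not the schema of any particular inventory tool.

    # Minimal sketch of a hardware inventory record; field names are illustrative.
    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    @dataclass
    class HardwareAsset:
        asset_tag: str                    # internal asset number
        model_number: str                 # exact vendor model, including installed parts
        chassis_serial: str               # serial number of the main chassis
        component_serials: List[str] = field(default_factory=list)  # optionally installed components
        vendor_end_of_life: Optional[date] = None   # key vendor end-of-life milestone
        maintenance_contract: Optional[str] = None  # contract number or vendor reference
        location: str = ""                # site / room / rack / rack unit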

Additionally, I recommend tracking the install date, the name of the technician who installed it, the last service date, the next scheduled service date, and vendor announcements for the equipment. When you know what you have and where it is, it’s much more practical to manage it successfully. An up-to-date inventory not only makes it easier to keep services up, it can also help identify gaps in your hot-spare capacity as well as upcoming maintenance issues that will need your attention. One of the main differences I see between data centers that are run very well and those that aren’t is how the operators approach inventory.
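
As one illustration of the kind of question an up-to-date inventory can answer, the short sketch below flags records whose vendor end-of-life date or next scheduled service falls within a chosen window. For simplicity it treats each record as a plain dictionary rather than the data class sketched above; the dictionary keys and the 90-day window are assumptions for the example, not a recommendation from any tool.

    # Sketch: flag inventory records that will need attention within a window.
    from datetime import date, timedelta

    def assets_needing_attention(assets, window_days=90):
        cutoff = date.today() + timedelta(days=window_days)
        flagged = []
        for asset in assets:
            eol = asset.get("vendor_end_of_life")
            next_service = asset.get("next_scheduled_service")
            if (eol and eol <= cutoff) or (next_service and next_service <= cutoff):
                flagged.append(asset)
        return flagged

    # Example: one record whose end-of-life date falls inside the window.
    print(assets_needing_attention([
        {"asset_tag": "DC-0042",
         "vendor_end_of_life": date.today() + timedelta(days=30),
         "next_scheduled_service": None},
    ]))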

 

SOFTWARE

Software is the programs and operating systems that run on top of the hardware. Many people think primarily about the programs running on the operating system, but the base OS is probably the most critical piece of software to actively manage. It is typically where the most operational issues occur, whether due to configuration problems, new vulnerabilities, or simply day-to-day updating and patching. Just as an accurate inventory is important for the physical hardware, a software inventory is important, too.
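
As a starting point for a software inventory, the sketch below asks the package manager on a Debian-based host what is installed. It assumes dpkg is present; other platforms have their own equivalents (rpm -qa, for example).

    # Sketch: collect the installed-package inventory on a Debian-based host.
    import subprocess

    def installed_packages():
        result = subprocess.run(
            ["dpkg-query", "-W", "-f=${Package}\t${Version}\n"],
            capture_output=True, text=True, check=True)
        return dict(line.split("\t", 1) for line in result.stdout.splitlines() if line)

    if __name__ == "__main__":
        packages = installed_packages()
        print(f"{len(packages)} packages installed")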

While having a good software inventory is necessary, it’s not sufficient. No matter what size data center you manage, you must have tools to automate patching and software updates. A good tool that manages not only the underlying OS but also the applications running on top of it is the only way to ensure your services are being delivered and risks from known vulnerabilities are managed.
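
The sketch below shows the shape of that kind of automation in deliberately simplified form: it walks a list of hosts over SSH and applies pending updates on each. The hostnames are placeholders, it assumes Debian-based systems with passwordless sudo for the update commands, and a real configuration-management or patch-management tool should do this job in production.

    # Deliberately simplified patching sketch; use a real patch-management tool in production.
    import subprocess

    HOSTS = ["web01.example.internal", "web02.example.internal"]  # placeholders

    def patch_host(host):
        # Refresh package lists and apply pending upgrades on a Debian-based host.
        command = ["ssh", host, "sudo apt-get update && sudo apt-get -y upgrade"]
        result = subprocess.run(command, capture_output=True, text=True)
        return result.returncode

    if __name__ == "__main__":
        for host in HOSTS:
            rc = patch_host(host)
            status = "patched" if rc == 0 else f"failed (exit {rc})"
            print(f"{host}: {status}")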

 

PEOPLE

People are the most important component in your framework for managing a data center. The people involved in your data center are not just the technicians, operators, and system administrators; they are everyone involved in keeping your business running, including developers, managers, and executives. A clearly defined organizational structure, with clear lines of communication and responsibility, will make all the difference when an outage occurs, and it will help minimize how often those outages happen.

When reviewing your organization, identify the key roles that are missing. Most people focus only on their own team, but the more perspective you have on the entire organization, the more obvious it becomes when a personnel problem in another team is exacerbating an otherwise small issue and creating a bigger problem for your business.

 

PROCEDURES

Procedures are what often separate a smoothly running data center from a madhouse. Unfortunately, the size of your data center says nothing about how good your process is. I’ve seen a lot of companies try to make up for poor processes and procedures by overstaffing and trying to buy their way out of the problem. Procedures are hard because it’s very easy to be overly specific and create a process that isn’t practical to use and is constantly out of date. Procedures should be lightweight and flexible.

 

DATA

Data is the information residing in your applications that end users and systems require. Ultimately, the accessibility and security of your data is what determines whether you stay in business; all of the other components are supporting pieces. Do you know how data flows through your data center? For example, is all of the data for a single customer routed across shared infrastructure that could put it at risk of man-in-the-middle attacks? If you don’t know, then you can’t possibly ensure that the data is available when and where it’s needed, and you can’t ensure its security. Whether the data is at rest or in transit, it should be encrypted and segmented as much as possible. Segmentation is a great strategy for minimizing the risk of data leaks and for ensuring operational availability by reducing the potential for cascading failures.
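
As one small illustration of encryption at rest, the sketch below uses the third-party cryptography package to encrypt a record before it is stored. In practice the hard part is how the key itself is generated, stored, and rotated, which this example deliberately leaves out.

    # Sketch: encrypting a record at rest with the "cryptography" package
    # (pip install cryptography). Key management is out of scope here.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()             # in practice, fetch from a key-management system
    cipher = Fernet(key)

    record = b"customer_id=1234,balance=99.50"
    encrypted = cipher.encrypt(record)      # store this, not the plaintext
    decrypted = cipher.decrypt(encrypted)   # decrypt only where the data is needed
    assert decrypted == record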

Mapping your priorities to standards such as SOC 2 makes it easier not only to ensure availability but also to demonstrate compliance with regulations if you are required to do so. Whether your data center is 200,000 square feet or a single rack of servers, a clear understanding of the infrastructure, software, people, procedures, and data that make it up will enable you to deliver secure services effectively and keep your customers happy.