From the Fall 2013 Issue

Protecting Critical Data and Infrastructure: Testing and Certifying Technology

Tim Boyd | The MIL Corporation

Today’s headlines are filled with worry and concern over the need to protect the nation’s critical infrastructure. Presidential Policy Directive 21 calls for a “national unity of effort to strengthen and maintain secure, functioning, and resilient critical infrastructure.” In governmental terms, “critical infrastructure” often refers to computer-controlled power grids, highly automated nuclear power plants, and our intricate national airspace control systems. Amid all the hype and media attention, a more fundamental truth is often ignored: if you own a business, deliver a product, or provide a service, your infrastructure and your data are critical, at least to you! So, where should you begin to assess and “certify” the technologies or applications used to protect your critical data and infrastructure? A step in the right direction is to define your cyber security requirements right from the beginning. What must be accomplished, to what level, and by what component? This sounds like common sense and, not surprisingly, corresponds to the application of systems engineering and test & evaluation principles. In practice, however, these same principles are often not applied or integrated until late in the assessment or operational phases. Knowing the requirements up front forms the basis for a sound testing strategy and isolates the critical technical parameters needed to meet industry certification and accreditation (C&A) standards.
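
As an illustration, the minimal sketch below (in Python) shows one way such upfront requirements might be captured as structured records that can later drive the testing strategy and C&A evidence. The requirement IDs, capabilities, thresholds, and component names are hypothetical assumptions, not drawn from any particular program.

```python
# A minimal sketch, assuming hypothetical requirement IDs, thresholds, and
# component names, of capturing upfront security requirements as structured
# records: what must be accomplished, to what level, and by which component.
from dataclasses import dataclass

@dataclass
class SecurityRequirement:
    req_id: str       # identifier reused later for test traceability
    capability: str   # what must be accomplished
    threshold: str    # to what level (the measurable bar a test must verify)
    component: str    # which component is responsible

# Illustrative requirements written before any architecture is chosen.
requirements = [
    SecurityRequirement("SR-001", "Block known malicious inbound traffic",
                        "detect >= 99% of a published attack corpus", "boundary IPS"),
    SecurityRequirement("SR-002", "Restrict traffic to approved ports and protocols",
                        "only approved inbound ports permitted", "firewall"),
    SecurityRequirement("SR-003", "Detect unauthorized outbound data transfer",
                        "alert within 5 minutes of an exfiltration attempt", "internal IPS"),
]

for r in requirements:
    print(f"{r.req_id}: {r.capability} | level: {r.threshold} | owner: {r.component}")
```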

Twenty years ago, children attended schools and were issued textbooks. Today, they have access to tablet devices with e-books, Internet access, e-mail capability, messaging clients, and dedicated online storage. A decade ago, businesses focused on installing massive computing power on every employee’s desk. Today, virtualized workstations run on massive clusters of servers that use Ethernet fabrics to tie them to petabytes of storage. Even telephones are moving to network transport as companies look to better utilize their existing infrastructure investments. This shift has made the protection of network infrastructure and data an essential element of day-to-day business. Now, a network intrusion, failure, or data compromise is no longer irksome and inconvenient, but potentially disastrous. So, whether you are connecting school systems or power grids, it is unwise to wait until penetration testing to prove your system’s intrinsic security posture. Certifying that the technology performs to the risk tolerance level needed within the operational environment requires the same analytical fundamentals that systems engineers use to develop a system or system-of-systems solution. Careful decomposition of security functions, from the initial requirements to system design, allows the testing process to produce meaningful and relevant validation at the end.

As the number of threat vectors has exploded, so has the number of security products to counter them. Traditional security devices are now being bundled as “all-in-one” security appliances that perform full routing, proxy, firewall, IPS, and other functions. While buying a single box instead of a dozen can be an attractive cost savings, this path can also create a single point of security failure. On the other hand, purchasing a rack full of redundant hardware can be a waste of valuable resources and create a configuration and maintenance nightmare. So what is the proper way to secure your invaluable digital assets? The answer, as any philosopher will tell you, is “It depends…”

One approach often implemented within security solutions is the “Defense in Depth” strategy. This multi-layered approach provides greater defensive coverage and forces an attacker to expend more time and resources on an attack. The time gained by delaying a threat allows for a better response and greater protection of valuable assets. In computer network defense, various security products or systems are used, often in linear fashion, to create a similar effect. As large data streams flow toward the network, they are inspected and filtered at various stages to detect and remove malicious packets while allowing legitimate traffic to progress. This approach also increases the type, level, and complexity of system testing required. Certainly, a greater portion of discovery will occur during integrated system testing, but the layering also demands greater consideration for C&A. Tying this together should be paramount in developing the testing strategy and plan, which should also include the assessment and definition of C&A strategies, plans, and resources throughout the testing program.
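
The sketch below, again in Python and purely illustrative, models this linear, layered inspection: each stage either drops a packet or passes it to the next, so a threat must defeat every layer to reach protected assets. The stage names and filtering rules are assumptions, not the configuration of any real product.

```python
# A minimal sketch of layered, linear inspection: each stage either drops a
# packet or passes it along, so a threat must defeat every layer to reach the
# trusted network. Stage names and rules are illustrative assumptions only.
from typing import Callable, Optional

Packet = dict                                  # e.g. {"src": ..., "port": ..., "payload": ...}
Stage = Callable[[Packet], Optional[Packet]]   # a stage returns None to drop the packet

def boundary_router(p: Packet) -> Optional[Packet]:
    return None if p["src"].startswith("10.") else p   # drop spoofed internal addresses

def firewall(p: Packet) -> Optional[Packet]:
    return p if p["port"] in (25, 443) else None        # port/protocol restrictions

def inline_ips(p: Packet) -> Optional[Packet]:
    return None if "exploit" in p["payload"] else p     # crude signature check

def run_layers(packet: Packet, layers) -> Optional[Packet]:
    for layer in layers:
        packet = layer(packet)
        if packet is None:
            return None          # blocked at this layer
    return packet                # reached the trusted infrastructure

defenses = [boundary_router, firewall, inline_ips]
print(run_layers({"src": "203.0.113.7", "port": 443, "payload": "GET /"}, defenses))
print(run_layers({"src": "203.0.113.7", "port": 443, "payload": "exploit"}, defenses))
```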

The sophisticated network attacks of today have forced the simple firewall boundary defense to evolve. Moving data into the modern network should be analogous to running a gauntlet of old. Intrusion Prevention Systems (IPS) deployed inline will remove known crafted attacks and malformed packets from the wire. Boundary routers can pre-filter unwanted IP traffic and denial-of-service attacks. Firewalls enforce port and protocol restrictions. An IPS deployed inside the firewall can provide a sanity check before passing packets to the trusted infrastructure. To prevent the intentional or accidental exfiltration of data, these systems should be configured to thoroughly inspect both inbound and outbound traffic. Running these systems as individual platforms, from best-of-breed vendors, maximizes your ability to leverage the benefits of Defense in Depth and provides multiple physical locations on the network at which to validate proper handling of packets. It also requires the testing strategy to tie together the many facets of assessment so that both integrated testing and operational testing are robust and dynamic enough to give the end user an appropriate level of confidence in the security of the systems in place.
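
Building on that idea, the following hypothetical sketch shows how integrated testing might exercise each of those inspection points: every test case states where in the chain a crafted packet should be stopped, and the harness checks the expected verdict. The stages mirror the illustrative ones in the previous sketch; all rules and test cases are assumptions.

```python
# A minimal sketch of exercising each inspection point in integrated testing:
# each test case states where in the chain a crafted packet should be stopped,
# and the harness checks the expected verdict. All rules are assumptions.
stages = [
    ("boundary router", lambda p: None if p["src"].startswith("10.") else p),
    ("firewall",        lambda p: p if p["port"] in (25, 443) else None),
    ("internal IPS",    lambda p: None if "exploit" in p["payload"] else p),
]

def where_blocked(packet):
    """Return the name of the stage that drops the packet, or 'delivered'."""
    for name, stage in stages:
        packet = stage(packet)
        if packet is None:
            return name
    return "delivered"

# Hypothetical test cases: (description, packet, expected outcome).
test_cases = [
    ("spoofed internal source", {"src": "10.1.2.3",     "port": 443, "payload": ""},        "boundary router"),
    ("disallowed port",         {"src": "198.51.100.4", "port": 23,  "payload": ""},        "firewall"),
    ("known attack signature",  {"src": "198.51.100.4", "port": 443, "payload": "exploit"}, "internal IPS"),
    ("legitimate web request",  {"src": "198.51.100.4", "port": 443, "payload": "GET /"},   "delivered"),
]

for description, packet, expected in test_cases:
    actual = where_blocked(packet)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {description} -> blocked at '{actual}' (expected '{expected}')")
```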

As noted earlier, the process of building, testing, and then certifying an effective security solution begins, as most projects should, with a well-defined understanding of the operational environment, the system architecture, and the design requirements. It is imperative to know your threats, understand your mission and infrastructure, and document your security requirements before making any decisions regarding security architecture. Often, the success of any design hinges on how well the requirements relate to the operational conditions. As the requirements develop, the decomposition of functional activities becomes apparent. From here, the architecture begins to take shape, along with the testing strategies, the key test parameters, and what must be certified, both as a static and an operational solution.
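
One way to keep that thread intact is a simple traceability structure. The sketch below, using hypothetical requirement IDs and test parameters that echo the earlier examples, maps decomposed requirements to key test parameters and to the static or operational events intended to verify them, so that certification gaps surface before testing begins.

```python
# A minimal sketch, with hypothetical requirement IDs and test parameters, of
# tracing decomposed requirements to key test parameters and the events
# intended to verify them, separating static (design and C&A artifact review)
# from operational verification so gaps surface before testing begins.
trace_matrix = {
    "SR-001": {"test_parameter": "attack-corpus detection rate",
               "verified_by": ["integrated system test", "operational assessment"]},
    "SR-002": {"test_parameter": "permitted port/protocol set",
               "verified_by": ["static configuration review"]},
    "SR-003": {"test_parameter": "exfiltration alert latency",
               "verified_by": []},   # no verification event planned yet: a gap
}

for req_id, entry in trace_matrix.items():
    events = ", ".join(entry["verified_by"]) or "none  <-- GAP: no verification planned"
    print(f"{req_id}: {entry['test_parameter']} -> {events}")
```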
