Vendor Risk Management is a global issue. According to the World Economic Forum, cyberattacks and data theft consistently rank among the top 10 global risks. Additionally, a comprehensive survey by Deloitte found 83% of risk management executives have experienced a third-party incident in the past three years.
But are the old VRM software solutions fully up to the challenge of reducing vendor risks and associated costs? Increasingly, the answer may be no. The pressure to deliver verifiable results is causing a shift from what we call VRM 1.0 to more capable solutions which we call VRM 2.0.
Forces of Change
The coronavirus pandemic has created a dramatic increase in the number of people working from home. Perhaps unsurprisingly, experts are noticing an uptick in email and endpoint incidents. So, as vendor companies ask employees to work from home, they introduce additional cyber risk exposures for their customers.
The global expansion of technology is an underlying driver of risk exposure. A survey by Salesforce indicates that 75% of executives around the world consider digital transformation to be a priority. As vendors adopt new digital technologies, they introduce new cyberthreats to manage.
Vendor risks can be very costly. Participants in a Ponemon Institute survey estimate that the average cost of a breach is $13 million in lost share value, sales, brand equity, and so on.
Senior executives realize the importance of reducing vendor risk exposures. According to the Deloitte study previously referenced, executives are especially keen to minimize the potential cost of security failures, as well as their occurrence frequency. So, the challenge for VRM is to move from process efficiency improvements to highly credible evidence of actual risk reduction. This requires an evolution from VRM 1.0 to VRM 2.0.
VRM 1.0: Automation
We refer to the first era of Vendor Risk Management as VRM 1.0. The core value of VRM 1.0 was helping companies automate their basic vendor risk management processes. Each of the five steps of VRM includes manual tasks that can be automated.
Many of the manual and repetitive tasks in VRM are related to vendor risk assessments. They need to be created, distributed, tracked, and recorded for every vendor. These manual tasks consume vast amounts of time, and many of the functions can be automated.
When manual tasks are automated, the information security team can spend more time on detecting and reducing vendor risks. Indeed, the impact of VRM process automation is substantial.
VRM 1.0 is having a positive impact on enterprise risk reduction. The SANS Institute reports that there was a 26% drop in exposed records from 2018 to 2019, even excluding the 2018 Marriott and Capital One events as outliers. Furthermore, they show the average cost of a breach has declined by over 37%. These are clear signs of progress.
Limitations of VRM 1.0
VRM 1.0 measures indicators of risk, such as the existence of various control systems. But it does not tie control system effectiveness to exposure probabilities or costs. In other words, it does not provide a scientific measurement of actual business risk.
While VRM 1.0 reduces costs by improving efficiency, it only provides weak evidence of actual risk reduction. So, we cannot measure with confidence whether VRM 1.0 reduces company risk exposure.
Lack of Confidence
Various surveys underscore the lack of confidence risk managers feel in their efforts to manage vendor risk. For example, the Ponemon Institute found that only 35% of practitioners rated their vendor risk management program as highly effective.
Much of the problem involves basic blocking and tackling. For example, in the same Ponemon study, only 34% of respondents maintained a comprehensive inventory of their vendors, and 69% had not achieved centralized control over vendor management.
A Snapshot in Time
Practitioners are increasingly aware that VRM surveys only provide a snapshot of a vendor’s risk profile for a given moment in time. If the vendor acquires a new company or implements a new enterprise software application, the survey is suddenly out of date.
A growing enhancement to VRM 1.0 involves continuous monitoring. While traditional VRM programs re-check vendors periodically, some services continuously monitor internet-facing assets of select vendors for potential risks. These programs conduct cross-checks against aggregated threats and identify red flags.
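At its core, continuous monitoring works by comparing successive snapshots of a vendor's internet-facing footprint and flagging changes for review. The sketch below illustrates that idea in minimal form; the asset names and the `diff_snapshots` helper are invented for this example and do not reflect any particular monitoring product.

```python
# Hypothetical sketch: flag changes in a vendor's internet-facing
# footprint between two monitoring snapshots. Asset names are invented.
def diff_snapshots(previous, current):
    """Return (newly exposed assets, newly removed assets)."""
    new_assets = sorted(set(current) - set(previous))
    removed_assets = sorted(set(previous) - set(current))
    return new_assets, removed_assets

yesterday = {"vpn.vendor-a.example", "mail.vendor-a.example"}
today = {"vpn.vendor-a.example", "mail.vendor-a.example", "rdp.vendor-a.example"}

new, removed = diff_snapshots(yesterday, today)
# A newly exposed RDP endpoint is exactly the kind of red flag a
# continuous monitoring service would surface between surveys.
print(new)      # ['rdp.vendor-a.example']
print(removed)  # []
```

A real service would enrich each flagged change with threat intelligence before alerting, but the snapshot-diff step is the heart of moving beyond point-in-time surveys.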
VRM 2.0: Verifiable Risk Reduction
Even with enhancements such as continuous monitoring, a growing number of companies are starting to exhaust the benefits of VRM 1.0. Most of the low-hanging fruit from automation has been picked, and the remaining gains require more effort.
Furthermore, as the benefits of VRM 1.0 take hold, risk managers want to reallocate their time toward activities that deliver proven risk reductions. VRM 2.0 represents an evolution from automation to verifiable risk reduction.
It is challenging to prove VRM efforts deliver tangible results. In VRM 1.0, we assume the act of requiring vendors to use better control systems results in risk reduction. In VRM 2.0, we make forecasts, set risk event probabilities, assign cost estimates, track forecasts and cost accuracy, and refine. These rigorous steps help us to enter the world of scientifically verifiable risk reduction.
Verifiable risk reduction requires three significant leaps forward. First, a more sophisticated language is required. Second, scientific forecasts of events and costs are needed. And third, we need to gather more comprehensive incident data and use AI to identify costly vendor risk patterns.
1. Rigorous Terminology
Senior executives need a standard and precise language for identifying, tracking, and managing risks across business units and geographic regions. It is frustrating when managers use different risk terms, or worse, use the same words in different ways. Unfortunately, IT and risk managers often use inconsistent vocabularies when discussing, tracking, analyzing, and reporting on security threats.
It is also essential to differentiate between inherent and residual risk. The distinction enables more sophisticated risk measurement analytics. A major proponent of these terms and associated analytics is the FAIR Institute.
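The value of the inherent/residual distinction is easiest to see with numbers. Below is a deliberately simplified illustration, loosely inspired by FAIR-style quantification (expected annual loss as event frequency times loss magnitude); the figures and the assumed 75% frequency reduction from controls are invented for the example, not drawn from the FAIR standard itself.

```python
# Simplified illustration of inherent vs. residual risk.
# All figures are hypothetical.
def annualized_loss(event_frequency, loss_magnitude):
    """Expected annual loss = events per year x average cost per event."""
    return event_frequency * loss_magnitude

# Inherent risk: exposure before considering the vendor's controls.
inherent = annualized_loss(event_frequency=0.5, loss_magnitude=2_000_000)

# Residual risk: assume controls cut event frequency by 75%.
residual = annualized_loss(event_frequency=0.5 * (1 - 0.75),
                           loss_magnitude=2_000_000)

print(inherent)  # 1000000.0
print(residual)  # 250000.0
```

Expressing both quantities in dollars per year makes control investments directly comparable to the risk they remove.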
2. Scientific Forecasts
The C-suite needs probability-weighted risk forecasts, and they need regular forecast accuracy assessments. In other words, risk management needs to become more scientific. Compare these two statements:
- Proposed vendor A has weak IT control systems. For example, their last penetration test was conducted 18 months ago.
- The probability that a data breach with vendor A will cost more than $1 million over two years is higher than 90%.
The first statement is “tech speak,” and the second is “board talk.” Sure, penetration tests should be conducted more frequently than every 18 months. But what is the probability that something terrible will happen? Risk managers need a more persuasive argument, and VRM 2.0 will provide it. Mission-critical decisions such as adding new vendors or dropping old ones must be quantified in terms of costs and probabilities. This is what the C-suite increasingly requires when discussing risk management.
VRM 2.0 will respond by providing probability-weighted risk forecasts. The accuracy of these forecasts can then be tracked and measured. In the example above, if the company decides to move ahead with vendor “A” without infrastructure changes, we can measure whether a data breach will actually occur within two years and whether that breach will cost more than $1 million.
Just as sales departments are required to make sales forecasts and are held accountable for them, so too should risk management departments. They should make risk forecasts and cost estimates and be held accountable for their accuracy.
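Holding forecasters accountable requires a scoring method. One widely used option, shown here as a minimal sketch with invented data, is the Brier score: the mean squared difference between forecast probabilities and realized outcomes, where lower is better.

```python
# Score probability forecasts with the Brier score (lower is better).
# Forecast probabilities and outcomes below are invented.
def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if event occurred."""
    n = len(forecasts)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / n

# Two years later: did each predicted vendor breach (> $1M) occur?
forecasts = [0.9, 0.2, 0.6, 0.1]
outcomes = [1, 0, 0, 0]

print(round(brier_score(forecasts, outcomes), 4))  # 0.105
```

Tracking this score over successive forecast cycles is how a risk team demonstrates, rather than asserts, that its probability estimates are well calibrated.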
3. Incident Data and AI
Vendor surveys produce tremendous volumes of vendor control system information. In VRM 1.0, vendor profiles and risk survey data are used to sort vendors into risk categories. These sorting or assessment rules are not highly analytical. The general principle is “more control is better.” In other words, vendors with more elaborate or extensive control systems are rated as lower risk.
AI can be used in VRM 1.0 to automatically interpret responses and categorize vendors more quickly and efficiently.
In VRM 2.0, survey data is supplemented with actual vendor incident, breach, and cost data. The additional risk data is used to train the AI system. Then algorithms are used to detect patterns within these vast data volumes, interpret their meaning, create refined algorithms, and ultimately produce better risk and cost forecasts. In VRM 2.0, cognitive insights are generated over time as the program learns and refines itself.
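In its simplest form, learning from incident data means relating control attributes to observed breach outcomes. The toy sketch below groups invented incident records by a single hypothetical attribute (`pentest_recent`) and compares historical breach rates; a real VRM 2.0 system would train far richer models on far more data, but the principle is the same.

```python
# Toy sketch of pattern mining on incident data: compare historical
# breach rates grouped by a control attribute. Records are invented.
from collections import defaultdict

records = [
    {"pentest_recent": True, "breached": False},
    {"pentest_recent": True, "breached": False},
    {"pentest_recent": True, "breached": True},
    {"pentest_recent": False, "breached": True},
    {"pentest_recent": False, "breached": True},
    {"pentest_recent": False, "breached": False},
]

counts = defaultdict(lambda: [0, 0])  # attribute value -> [breaches, total]
for r in records:
    key = r["pentest_recent"]
    counts[key][0] += int(r["breached"])
    counts[key][1] += 1

for key, (breaches, total) in sorted(counts.items()):
    print(key, round(breaches / total, 2))
# False 0.67
# True 0.33
```

Even this crude comparison turns the VRM 1.0 assumption that “more control is better” into a measurable claim; replacing the frequency counts with trained models is what lets the program learn and refine itself over time.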
VRM 2.0 is a tall order, and a significant and necessary leap from VRM 1.0. Indeed, the magnitude of change makes it far more than a substantial software upgrade. VRM 2.0 calls for a comprehensive change management approach, starting with the C-suite and senior executives. To start the ball rolling toward VRM 2.0, the C-suite must establish a strategic understanding of three key topics:
- Business Trends: What broad business trends drive the growing importance of VRM?
- Downside Risks: How can vendors hurt our business and what are some real-world examples of the financial consequences for companies like ours?
- Scientific Methods: What new terms, measures, data inputs, and analyses will help us take major steps forward in vendor risk management?
Once senior executives have an updated understanding of the strategic context for VRM 2.0, they can then begin the process of driving change down through the organization. They can ask VRM teams to assess their level of maturity, identify improvement opportunities, and provide cost/benefit analyses. They can integrate VRM improvement projects into the overall investment funding picture. Additionally, they can facilitate company-wide VRM collaboration so that wisdom is shared and all departments can progress simultaneously.
Mike Kelly and Gaurav Gaur