Imagine this scene in the Boardroom of Company A. There is a heated discussion about cyber security risks. The CISO and IT security professionals have explained malware and ransomware, DDoS attacks, and zero-day exploits and have ranked each risk as high, medium, or low. The CISO wants all medium risks to be mitigated, but Management feels differently, mainly because multiple risks are deemed medium. They are uncomfortable with the term “probably likely.” They want to understand how that differs from “somewhat likely.” What’s a Board to do? Which risks should be focused on first? Where will the company’s cybersecurity investments have the most significant impact? Heated indeed.
Now imagine the same scene in the Boardroom of Company B. Here, the CISO and IT security professionals have explained the cyber risks, but they’ve also put clear numbers around these risks. The Board is aware that a malware attack on the company network could cost the business $2.5 million in losses, and there’s a 65% chance of that loss occurring in the next three years. When ambiguous risk terms are presented with hard numbers, the Board’s decision is unanimous. No heated discussion necessary.
Injecting accuracy and clarity into cyber risk assessments using numerical terms is known as risk quantification.
According to ISACA, cyber risk quantification (CRQ) has been adopted by many organizations to understand their cyber risk exposure and aid in their decision-making. Risk quantification means putting a dollar figure on the impact of a risk event to help organizations evaluate their cybersecurity risk landscapes, assess and control threats, and estimate potential financial losses. This enables boards to make strategic decisions about critical next steps, such as how much to invest in cybersecurity and how much cyber insurance coverage to buy.
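At its simplest, this means multiplying a loss estimate by its probability. A minimal sketch using the Boardroom B numbers from above (the figures are the article's illustrative example, not real data):

```python
# Probability-weighted expected loss, using the Boardroom B figures.
# Expected loss = potential loss magnitude x probability of occurrence.

def expected_loss(loss_magnitude: float, probability: float) -> float:
    """Return the expected loss in dollars."""
    return loss_magnitude * probability

malware_loss = 2_500_000   # estimated loss from a malware attack ($)
p_three_years = 0.65       # chance of that loss occurring in three years

print(f"Expected loss: ${expected_loss(malware_loss, p_three_years):,.0f}")
# Expected loss: $1,625,000
```

A single expected-loss figure like this is the starting point; the methods described later in this article refine it with distributions and simulations.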
The Cybersecurity Imperative study found that companies using quantitative risk assessment models are ahead of the game in digital transformation and boast higher overall cybersecurity performance. But that's not all. According to the 2021 U.S. Digital Trust Insights report, 81% of companies adopting cyber risk quantification said it helped them increase productivity and home in on strategic decisions.
The first risk quantification model designated as an international standard was published by The Open Group® under the name Open FAIR™. The Factor Analysis of Information Risk (FAIR) framework helps organizations first inventory, categorize, and quantify their assets at risk using dollar values. Conducting a FAIR risk analysis highlights the organization’s vulnerabilities, and helps prioritize cyber defense activities, select cost-effective solutions, and boost the ROI of cybersecurity tools.
FAIR Risk Analysis generally has four stages:
1. Identify scenario components and determine assets at risk
2. Evaluate Loss Event Frequency
3. Evaluate Probable Loss Magnitude
4. Derive and articulate the risks
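The four stages above reduce, at their core, to a simple product: risk is loss event frequency times probable loss magnitude. A minimal sketch with illustrative (hypothetical) inputs:

```python
# Minimal sketch of the FAIR stages with hypothetical inputs.
# Risk (annualized loss exposure) = Loss Event Frequency x Probable Loss Magnitude.

def fair_risk(loss_event_frequency: float, probable_loss_magnitude: float) -> float:
    """Stage 4: derive risk from the stage 2 and 3 estimates."""
    return loss_event_frequency * probable_loss_magnitude

# Stage 1: identify the asset at risk (illustrative scenario)
asset = "customer database"
lef = 0.5          # Stage 2: expected loss events per year
plm = 800_000.0    # Stage 3: probable loss magnitude per event ($)

print(f"Annualized risk for {asset}: ${fair_risk(lef, plm):,.0f}")
# Annualized risk for customer database: $400,000
```

A real Open FAIR analysis estimates each factor as a range or distribution rather than a point value, but the structure of the derivation is the same.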
Both qualitative and quantitative risk analysis aim to determine the likelihood of a specific risk event occurring and the expected impact, with the goal of determining severity. However, they each take different approaches.
Qualitative risk analysis is subjective as it utilizes expert knowledge and individual experience to determine risk probability. This subjective approach is more along the lines of the discussion happening in Boardroom A.
Quantitative risk analysis uses objective and verifiable data to analyze the effects of risk. This is the approach taken in Boardroom B.
Risk quantification can benefit multiple stakeholders. CISOs and Risk Managers gain a deeper understanding of risk impact to help them make data-driven decisions. Boards gain visibility into risk to the business in terms of economic value and can prioritize cybersecurity investments.
Major standards bodies and regulators also support CRQ, advocating for the use of FAIR and showing how it aligns with their frameworks. For example, the Payment Card Industry Security Standards Council (PCI SSC) determined that FAIR is compatible with its views on risk assessments, and other bodies, such as ISO and NIST, show how their frameworks map to the Open FAIR standard. Risk quantification is no longer considered leading edge; it's now a best practice in modern cyber risk management.
There are several different cyber risk quantification methods. They include:
Heuristic methods, such as the Delphi method, rely on consulting with experts to predict risk levels. Drawing on their experience-based or expert-based techniques, these experts converge on a conclusion. For example, an organization that experienced a data breach needs to strategize the best way to communicate with stakeholders. Using heuristic risk quantification methods, the organization would consult several Subject Matter Experts (typically outside the organization) on how best to handle communication with internal and external parties. These experts would need to be given a set of inputs to help them reach their conclusion.
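The aggregation step of a Delphi exercise can be sketched very simply: collect independent estimates, summarize them, and feed the summary back for another round. The estimates below are made-up values for illustration:

```python
import statistics

# Hypothetical Delphi-style aggregation: combine independent expert estimates
# (here, the probability that a breach notification reaches all stakeholders
# within 24 hours) and check whether opinions have converged.

expert_estimates = [0.60, 0.70, 0.65, 0.80, 0.55]  # round-one inputs

consensus = statistics.median(expert_estimates)
spread = max(expert_estimates) - min(expert_estimates)

print(f"Round-one consensus: {consensus:.2f}, spread: {spread:.2f}")
# In a real Delphi exercise, experts would see these results and revise
# their estimates in further rounds until the spread is acceptably small.
```

The median is commonly preferred over the mean here because it is robust to a single outlying expert.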
Expected value methods, such as the Monte Carlo simulation method, are used to determine the probability and impact of different risk exposures. A Monte Carlo simulation runs a cyber risk event, like a ransomware attack, thousands of times, enabling the organization to predict the financial losses that could be incurred under different scenarios. For example, an organization may model the ransomware attack statistically and determine the value of residual risk given preventative controls, such as having a Cloud Access Security Broker in place. Alternatively, it may determine risk value using detective controls, such as an event log tool like Splunk that detects breaches, or corrective actions, such as implementing a tool that reactively removes an attacker's access.
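A bare-bones Monte Carlo sketch of the ransomware scenario might look like the following. The attack probability and loss distribution parameters are assumptions chosen for illustration, not benchmarks:

```python
import random

# Hypothetical Monte Carlo sketch: simulate a year of ransomware exposure
# many times. Assumed inputs: a 30% annual chance of an attack, with the
# loss per attack log-normally distributed (illustrative parameters only).

def simulate_annual_loss(p_attack: float = 0.30,
                         mu: float = 13.1, sigma: float = 0.8) -> float:
    """Return one simulated year's loss in dollars."""
    if random.random() < p_attack:
        return random.lognormvariate(mu, sigma)  # median loss ~ $490k
    return 0.0

random.seed(42)  # make the run reproducible
trials = 100_000
losses = [simulate_annual_loss() for _ in range(trials)]

mean_loss = sum(losses) / trials
p95 = sorted(losses)[int(trials * 0.95)]
print(f"Expected annual loss:  ${mean_loss:,.0f}")
print(f"95th-percentile loss:  ${p95:,.0f}")
```

Tightening a control (say, adding a Cloud Access Security Broker) would be modeled by lowering `p_attack` or shifting the loss distribution, and the residual risk is read off by re-running the simulation.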
Mathematical modeling methods use theoretical models to determine possible outcomes. These models typically use both linear and non-linear equations, including fuzzy sets that incorporate vague values and inexact information – commonly used in AI neural networks. For example, an organization may use fuzzy sets when assessing whether to undertake an engineering project. Incorporating vague information such as project funding and staffing can form the basis of the mathematical modeling to determine outcomes.
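One common fuzzy-set building block is the triangular membership function, which turns a vague statement like "funding is roughly adequate" into a degree of membership between 0 and 1. The thresholds below are hypothetical:

```python
# Hypothetical fuzzy-set sketch: a triangular membership function expressing
# vague input such as "project funding is roughly adequate".

def triangular_membership(x: float, low: float, peak: float, high: float) -> float:
    """Degree (0..1) to which x belongs to the fuzzy set."""
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

# "Adequate funding" is fuzzily centered on $1.0M, fading out below $0.5M
# and above $1.5M (illustrative thresholds).
for funding in (400_000, 750_000, 1_000_000, 1_400_000):
    degree = triangular_membership(funding, 500_000, 1_000_000, 1_500_000)
    print(f"${funding:,}: membership {degree:.2f}")
```

Unlike a hard cutoff, the membership degree degrades gradually, which is what lets these models absorb inexact information about funding, staffing, and similar inputs.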
Interdependency models, such as influence diagrams, show how decisions, variables, and desired outcomes relate to one another, enabling organizations to see how each factor impacts the others. For example, a business evaluating several locations for a new warehouse would use an interdependency model to weigh factors such as cost, space, output, workforce, and revenue impact before making the decision.
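A full influence diagram is a graph of chance and decision nodes, but the evaluation step can be sketched as a weighted scoring of the warehouse factors named above. All weights and scores here are invented for illustration:

```python
# Hypothetical interdependency sketch: score warehouse locations by weighting
# the factors from the example (cost, space, output, workforce, revenue impact).
# Weights and per-site scores are illustrative assumptions, not real data.

weights = {"cost": 0.30, "space": 0.15, "output": 0.20,
           "workforce": 0.15, "revenue_impact": 0.20}

locations = {
    "Site A": {"cost": 0.6, "space": 0.8, "output": 0.7,
               "workforce": 0.5, "revenue_impact": 0.6},
    "Site B": {"cost": 0.8, "space": 0.5, "output": 0.6,
               "workforce": 0.7, "revenue_impact": 0.7},
}

def weighted_score(scores: dict) -> float:
    """Combine factor scores into one utility value."""
    return sum(weights[f] * scores[f] for f in weights)

best = max(locations, key=lambda name: weighted_score(locations[name]))
for name, scores in locations.items():
    print(f"{name}: {weighted_score(scores):.3f}")
print(f"Preferred option: {best}")
```

Changing one weight (say, making cost dominant) immediately shifts the outcome, which is exactly the "how each factor impacts the others" visibility these models are meant to provide.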
Empirical methods, such as regression analysis, analyze the interrelationships between historical project variables to improve future performance. This benchmarking requires a lot of data beforehand, but it determines the extent to which a relationship exists between two variables. For example, to determine the risk of a third party being compromised, a benchmarking cyber risk quantification method would analyze the vendor's activity over the ten years it has been in business. If the company was breached five times in that period, those incidents, and the factors behind them, would feed into the quantified impact.
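The third-party example can be sketched as a simple least-squares fit relating years in business to observed breach counts. The data points below are made up to match the example:

```python
# Hypothetical regression sketch: fit an ordinary least-squares line relating
# a vendor's years in business to cumulative breaches observed (made-up data).

def linear_fit(xs: list, ys: list) -> tuple:
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

years = [2, 4, 6, 8, 10]     # observation year
breaches = [0, 1, 2, 3, 5]   # cumulative breaches seen by that year (5 in 10 years)

slope, intercept = linear_fit(years, breaches)
print(f"Estimated breach rate: {slope:.2f} per year")
```

The fitted slope becomes an empirical frequency estimate that can feed into the expected-loss and Monte Carlo methods above; in practice one would fit against many vendors and more variables, not a single history.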
While many risk assessment frameworks provide clear guidelines and procedures on how to measure cyber risk, the methods described above offer several practical starting points.
Using risk quantification to inject clear numbers into your Board discussions can go a long way in enhancing communication with stakeholders and leadership. Putting cyber risk assessments into numerical terms helps the Board understand how risk decisions directly impact the organization and its future.
A data-powered Compliance OS platform from anecdotes, the Compliance leaders, empowers you with the data you need to turn risks into actual figures your Board will be sure to understand. Discover how to get the data you need automatically and efficiently.