Among the UK supervisory authorities’ proposals and expectations for building operational resilience, the rules on “impact tolerance” have generated substantial debate and confusion. The discussion has centered mostly on how firms would quantify the point in time at which a disruptive event irrevocably threatens the viability of their important business services and processes, and on the factors or metrics that would inform that estimation.
In a July 2018 discussion paper, the supervisory authorities issued a vague definition of impact tolerance, describing it simply as a firm’s tolerance for disruption to a particular business. Although minimal guidance on how firms would implement this key operational resilience concept was provided, it was clear the regulators viewed impact tolerance differently from traditional recovery time objectives, and were open to exploring more realistic, industry-led methods of quantifying the maximum acceptable level of disruption to businesses, particularly in severe-but-plausible scenarios.
The series of coordinated consultation papers released in December 2019 reaffirmed this viewpoint. In those proposals, the supervisory authorities stated they had refined their approach to impact tolerance based on feedback from, and engagement with, industry stakeholders. Specifically, they proposed that firms consider using a time-based metric to define that point of irrevocability beyond which an important business service would pose a risk to either its own safety and soundness or the stability of the wider financial system. However, with the view that a metric based on time alone may be insufficient, they also suggested that firms consider such variables as volumes and value, and factor in the following when quantifying the maximum acceptable level of disruption:
- Harm to consumers or market participants
- Harm to market integrity
- Policyholder protection
As the supervisory authorities continue to gather feedback on these proposals – regulated institutions have until April 3, 2020 to submit responses to the Bank of England (BOE), the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) – there may be an opportunity for the industry to coalesce around a common and effective method of quantifying impact tolerance. One credible method that has emerged is the FAIR (Factor Analysis of Information Risk) methodology, first introduced in the book “Measuring and Managing Information Risk” by Jack Freund and Jack Jones, and since adopted by The Open Group as an international standard information risk management model.
The FAIR Way
Managing risks is fundamentally about making strategic decisions, such as prioritizing critical risk issues and investing in risk mitigation strategies that would yield the greatest impact. Business leaders can make these key decisions effectively when they have access to quantifiable risk analytics.
Given that regulators have proposed that firms express impact tolerance in clear and sufficiently granular terms so that it can be applied and tested, many common risk quantification methods, which tend to express risks in ranges or with high-medium-low scoring, would not suffice. FAIR has proven to be an effective way to derive a financial representation of risk or loss exposure. Under the FAIR model, the primary factors that make up risk – loss event frequency and loss magnitude – can be described mathematically, allowing firms to calculate risk from measurements and estimates of those factors. Different forms of loss, including lost productivity, response costs, replacement costs, reputational damage and lost competitive advantage, can be quantified with FAIR.
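To make the decomposition concrete, the top-level FAIR calculation can be sketched as a Monte Carlo simulation in which annual loss exposure is sampled as loss event frequency times loss magnitude. This is a minimal illustration, not a full FAIR implementation: the triangular distributions stand in for the calibrated PERT estimates a real analysis would use, and all the input figures below are hypothetical.

```python
import random
import statistics

def simulate_annual_loss(min_freq, likely_freq, max_freq,
                         min_mag, likely_mag, max_mag,
                         iterations=10_000, seed=42):
    """Monte Carlo sketch of FAIR's top-level decomposition:
    risk = loss event frequency x loss magnitude.
    Inputs are min / most-likely / max estimates."""
    rng = random.Random(seed)
    losses = []
    for _ in range(iterations):
        # Sample how many loss events occur in a year...
        events = rng.triangular(min_freq, max_freq, likely_freq)
        # ...and the per-event magnitude (e.g. response, replacement
        # and productivity costs combined into one figure).
        magnitude = rng.triangular(min_mag, max_mag, likely_mag)
        losses.append(events * magnitude)
    losses.sort()
    return {
        "mean": statistics.mean(losses),
        "p90": losses[int(0.9 * iterations)],  # 90th-percentile loss
    }

# Hypothetical estimates: 1-4 disruptive events per year,
# $0.5m-$5m of loss per event.
result = simulate_annual_loss(1, 2, 4, 0.5e6, 1.5e6, 5e6)
```

The output is a loss distribution rather than a single score, which is what lets firms answer the granular, testable questions the regulators are asking for.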
With impact tolerance, the loss magnitude branch of the FAIR model can be used to understand the financial exposure of certain events, and the inputs of the model can be adjusted to account for different durations of a resilience incident and the losses incurred during that timeframe. Assume a firm determines that its most important business service cannot tolerate delivering less than 10% of normal operating capacity for more than 10 days – an impact tolerance that combines time with a volume-based metric. FAIR allows the firm to model potential loss exposure and the probability of certain losses occurring at various stages within that 10-day period: for example, on day three, four, six or eight. The model might suggest, for instance, that there is an 80% chance of a consumer bank losing more than $100 million if its retail banking services operate at less than 10% of normal capacity for 10 days. Armed with this crucial model output, the firm can determine what actions to take to remain within impact tolerance, including developing various time-critical triggering mechanisms in advance to respond to disruptions as they occur and progress.
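The duration-sensitive part of this analysis can be sketched the same way: hold the daily-loss estimates fixed and ask, at each checkpoint inside the tolerance window, what the probability is of cumulative losses exceeding a threshold. The daily-loss figures and the $100 million threshold below are purely illustrative assumptions, not figures from any firm's analysis.

```python
import random

def prob_loss_exceeds(threshold, days, daily_min, daily_mode, daily_max,
                      iterations=20_000, seed=7):
    """Estimate P(cumulative loss > threshold) after `days` of severe
    disruption, sampling a triangular daily-loss distribution.
    A sketch of adjusting FAIR's loss-magnitude inputs for incident
    duration; all parameters are hypothetical."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(iterations):
        total = sum(rng.triangular(daily_min, daily_max, daily_mode)
                    for _ in range(days))
        if total > threshold:
            exceed += 1
    return exceed / iterations

# Hypothetical: $8m-$15m lost per day while the service runs below
# 10% capacity; probability of breaching a $100m loss threshold at
# checkpoints inside a 10-day impact tolerance.
curve = {d: prob_loss_exceeds(100e6, d, 8e6, 11e6, 15e6)
         for d in (3, 6, 8, 10)}
```

Plotting this exceedance curve against the tolerance window is one way to decide where the time-critical triggers described above should fire.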
So far, the UK supervisory authorities have remained model-agnostic in their published proposals and expectations on impact tolerance, and it is unclear whether they would formally embrace any specific methodology or model for this exercise. Reaching a consensus on a risk model may also be a challenge, given that financial firms have their own preferences for specific risk assessment models and frameworks.
While many of the models currently used are useful for defining and assessing risk management programs, and do prescribe the need to quantify risk, most leave it to practitioners to work out how. Some are silent on how to compute risk, while others are open to third-party methods. FAIR is complementary to many of these risk assessment models – the methodology can be leveraged on top of those frameworks.
In the absence of guidance on how to compute impact tolerance, firms can still benefit from implementing FAIR, given its flexibility in assessing all forms of risk and loss. Beyond the impact tolerance exercise, firms can use FAIR to make reasonable assumptions about operational downtime and customer harm for their important business services. In this time of great uncertainty, all financial firms can benefit from strong, quantifiable risk assumptions even where regulation does not require them.