The application of statistical tools to business continuity management

By Patrick Roberts.

As a consultant, I spend a great deal of time warning clients not to put too much faith in probability and statistics. My argument is three-fold:

* The data needed to model the practical situations that we deal with in business continuity management are generally not available;

* Even if historical data are available, one cannot assume that they will accurately predict what will happen in the future; and

* Even where the data are available and have the necessary predictive power, people usually get the calculations wrong.

By way of illustration, I would point to the number of ‘Once-in-a-thousand-year’ events that have happened recently in the banking industry!

The arrival of the Basel II Capital Accord has stimulated a great deal of research into risk management techniques, particularly in the area of operational risk (which is included explicitly in the new accord for the first time). In particular, the accord aims to give banks incentives to quantify accurately the risks to which they are exposed, so many new quantitative techniques have come to the fore. Maybe the time has come to soften our line and accept that some of these techniques have a useful place in the business continuity management professional’s toolkit? In this article I look specifically at two powerful modern techniques that have gained widespread acceptance in other areas of risk management, and examine whether they can usefully be applied to business continuity management.

Extreme Value Theory
Extreme Value Theory (EVT) was originally developed in the 1950s to calculate the probability of extreme meteorological events. It aims to calculate the probability of very rare events – the ‘tails’ of distributions. The theory has since been applied to the incidence of other natural phenomena, rare medical conditions, insurance losses from various sources, reliability engineering, and movements in the financial markets. One simply fits a member of the family of ‘extreme value distributions’ to the historical data and can then calculate the probability of extremely rare events: even those so rare that they have not yet occurred. Having a method to accurately calculate the probability of unprecedented events would indeed be a powerful tool for business continuity management.
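To make this concrete, the sketch below applies the block-maxima approach: it fits a Generalised Extreme Value (GEV) distribution to fifty years of synthetic annual-maximum losses and then reads off the probability of a level 50 percent beyond anything yet observed. The data, parameters and threshold are all assumptions for illustration only; this is not the method used in any of the studies cited here.

```python
# Minimal EVT sketch (block-maxima approach). All data and parameters
# are synthetic assumptions, purely for illustration.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)

# Pretend these are 50 years of observed annual maximum losses
annual_maxima = rng.gumbel(loc=10.0, scale=3.0, size=50)

# Fit a Generalised Extreme Value (GEV) distribution to the maxima
shape, loc, scale = genextreme.fit(annual_maxima)

# Probability that next year's maximum exceeds a level never yet observed
unprecedented = 1.5 * annual_maxima.max()
p_exceed = genextreme.sf(unprecedented, shape, loc=loc, scale=scale)
print(f"P(annual maximum loss > {unprecedented:.1f}) = {p_exceed:.5f}")
```

Note that the confidence one can place in the fitted tail degrades rapidly as the sample shrinks, which is exactly the data-scarcity problem discussed next.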

There has been much debate in the context of Basel II about the applicability of EVT to operational risk losses, and opinion is divided. The intrinsic problem is that, since one is modelling extremely rare events, the data to fit a distribution to will always be scarce. The more data you include (e.g. by going further back in time, by including less extreme events, or by bundling different classes of event together), the more confidence you can have in the mathematical validity of the fit. However, including more data may contradict the central tenet of EVT - that all the events come from a single probability distribution - and thus render the results meaningless. This is likely to be a particular problem in applying the technique to business continuity management.

The following example illustrates the problem. Some time ago, whilst preparing a presentation on the implications of terrorism for business, I sought to apply EVT to the losses (human and financial) from terrorist attacks. I focused specifically on terrorist bombings in the Western world since 1992 resulting in more than 10 casualties. Given the tiny size of the data set, though, I was unable to achieve a convincing fit. By contrast, Bogen and Jones (2006) have recently published an analysis of all terrorist attacks from 1968 to 2004. Using EVT, they predict that by the year 2080 we can expect to have witnessed at least one terrorist event causing over 10,000 casualties somewhere in the world. By bundling together terrorist attacks using various different methods (guns, bombs, etc.) and using data spanning 36 years, the authors achieved a statistically valid fit, but one has to question whether these events are truly drawn from a single distribution.

The other main drawback of EVT is that it is rather opaque and its results are not very user-friendly, leaving a real question over whether one can ever convince a non-specialist of the validity of any results. Ultimately, as a consultant, one also has to ask how much, if anything, anybody will be willing to pay for this sort of analysis.

Monte-Carlo Simulation
Monte-Carlo simulation has been in use for many years but has only become practical for most people as a result of the vast increase in computing power available over the last 20 years or so. Meanwhile, the practical implementation of Monte-Carlo simulation has been greatly simplified by the emergence of Excel add-ins such as @Risk™ and Crystal Ball™ which automate the detailed mathematical work. It is used to calculate probabilities where there is no analytical formula and has been widely used in fields as diverse as:
* Engineering design and quality management;
* Valuation of projects and companies; and
* Pricing of financial derivatives.

Applying the technique to calculate, for example, losses from business disruption is, in principle, very straightforward, as the sketch following these four steps illustrates:
* Identify distributions for both the frequency and severity of losses;
* Generate a random value from the frequency distribution to represent the number of losses in a given period;
* Generate random values from the severity distribution for each loss and aggregate to give a total loss for the period;
* Repeat many (several thousand) times and plot an overall loss distribution.
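For anyone without @Risk or Crystal Ball to hand, the sketch below implements these four steps directly in Python. The Poisson frequency and lognormal severity, and every parameter value, are illustrative assumptions chosen to echo the worked example that follows; they are not drawn from real client data.

```python
# Minimal Monte-Carlo sketch of the four steps above. Frequency and
# severity distributions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

n_years = 10_000      # step 4: repeat many times (simulated years)
mean_outages = 5.0    # step 1: assumed mean number of outages per year

# Step 1 (continued): assumed lognormal severity with a mean of ~5.5%
# of annual profits and a fat tail
sigma = 1.5
mu = np.log(5.5) - 0.5 * sigma ** 2

annual_losses = np.empty(n_years)
for i in range(n_years):
    n = rng.poisson(mean_outages)                   # step 2: outages this year
    severities = rng.lognormal(mu, sigma, size=n)   # step 3: severity of each
    annual_losses[i] = severities.sum()             # step 3: annual total

# Step 4: summarise the simulated annual loss distribution
print(f"Mean annual loss: {annual_losses.mean():.1f}% of profits")
print(f"P(annual loss >= 70% of profits): {(annual_losses >= 70).mean():.1%}")
```

The tail figure printed at the end is highly sensitive to the assumed severity distribution, which is why fitting the severity curve to an organisation’s own loss data matters so much.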

By way of illustration, consider a company that suffers recurrent IT outages. First of all, it examines how many outages it suffers per year, and it turns out that the mean number of outages in a year is five. It then looks at the cost of these outages, including:
* Overtime to catch up with lost work;
* Lost customer orders;
* Mis-processed customer orders; and
* Delays / mistakes in billing customers.

This analysis reveals a loss distribution (per outage) as shown below, where losses are expressed as a percentage of annual profits. A quick look at this chart suggests that typical losses for an IT outage are equivalent to 5-6 percent of profits. This leads us to think that, based on five outages per year, spending anything up to 25-30 percent of profits to eliminate these outages would represent value for money. The graph also highlights that there are occasional incidents with much more severe consequences.
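The arithmetic behind this back-of-the-envelope figure is simply that the expected annual loss of such a compound process is the expected frequency multiplied by the expected severity (taking 5.5 percent, the midpoint of the range above, as the per-outage mean):

E[annual loss] = E[outages per year] × E[loss per outage] ≈ 5 × 5.5% ≈ 27.5% of annual profits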

Putting these frequency and loss distributions together and running a Monte-Carlo simulation 1000 times yields the following distribution for annual losses.

The first impression is that this confirms our initial ‘back-of-the-envelope’ estimate: expected losses are around 30 percent of annual profits, so spending anything up to this amount to eliminate the problem is value for money. However, the simulation gives far more information; in particular, it reveals a significant risk of very high losses, e.g. a 7 percent chance of losing 70 percent of profits or more. Losses of this magnitude could jeopardise the whole future of the company, so even greater spending on mitigation is actually justified.

Summary
I have tried to outline the pros and cons of two specific statistical techniques in the context of business continuity management. I am not aware of either of these techniques being widely used in the business continuity management profession so it would be very interesting if anyone can offer any examples of how they have applied them.

At present, the efficacy of Extreme Value Theory still appears severely limited by the lack of data on these rare events. Monte-Carlo simulation, however, could be a valuable addition to the normal, qualitative tools of business continuity management, particularly in organisations that are already accustomed to using modelling and simulation tools, such as engineering and financial services firms. In contrast to EVT it is much more intuitive and convincing, particularly as it can easily generate user-friendly graphical output. Bear in mind, though, that the model will only ever be as good as the input data on which it is based.

References
Bogen, K. T. and Jones, E. D. (2006). Risks of Mortality and Morbidity from Worldwide Terrorism: 1968-2004. Risk Analysis, 26(1), 45-59.

Patrick Roberts is a consultant with Needhams 1834 Ltd
http://www.needhams1834.com/

Date: 20th April 2007




