
Near misses and direct hits

By Jim Preen.

The argument goes like this. Business continuity is not about the little disturbances and the day-to-day interruptions. That’s incident management; and not to be confused with a major incident that knocks out all your IT or leaves your HQ a smouldering wreck. Incident management is a different beast from business continuity management and requires different processes and resources.

Sure, you can argue about where the line is drawn between incident management and business continuity, but they are different disciplines and not to be confused.

But is this a correct or, indeed, a useful distinction?

Let’s look at IT for a moment. Systems are often brought down by a series of small disasters that can create more frequent downtime than a really big catastrophe. If you have a number of small-scale outages you may irritate staff and clients as much as if you’d undergone a full-scale incident.

Smaller disasters are often handled ad hoc and if the problem is easily overcome people assume that planning isn’t necessary. Conversely if the issue isn’t quickly resolved the blame game starts and staff complain that plans and training are inadequate.

But there’s more to it than that. Anyone who has spent any time in crisis management or business continuity knows that it’s often small disregarded incidents that can stack up to produce a catastrophe of really horrible proportions. Smaller events can be signs of a far greater malaise, but if they are either superficially fixed, or ignored, they may well come back to bite.

A recent study, ‘How to Avoid Catastrophe’ by American academics Catherine Tinsley, Robin Dillon and Peter Madsen, published in the Harvard Business Review, looks at what the authors call the ‘unremarked small failures’ that permeate businesses but cause no immediate harm. The authors say that people are often hardwired to misinterpret or ignore the warnings that are embedded in these failures. And, perversely, such incidents can even be seen as signs that systems are resilient and that the big picture is fine.

Ominously, Tinsley, Dillon and Madsen say: “These seemingly innocuous events are often harbingers; if conditions shift slightly, or if luck does not intervene, a crisis erupts.”

In the study, Tinsley, Dillon and Madsen use the BP Gulf oil rig disaster as a telling example of what can happen if the small stuff is ignored:

In April 2010, a gas blowout on the Deepwater Horizon rig caused an explosion that killed 11 people; the platform sank two days later, triggering the largest marine oil spill in history. Day after day oil spewed into the Gulf of Mexico, and the world could watch it live thanks to cameras on the sea floor. We now know that numerous poor decisions and dangerous conditions contributed to the catastrophe.

Drillers used too few ‘centralisers’ to position the pipe; the lubricating ‘drilling mud’ was removed too early; managers misinterpreted vital test results showing that hydrocarbons were seeping into the well; and BP was relying on an old version of the supposedly fail-safe blowout preventer, a piece of kit that proved anything but.

Why were these early warning signs not acted upon by managers and executives? One answer might be that other Gulf of Mexico oil wells had experienced minor blowouts, but each near miss, rather than raising alarms, was taken as an indication that the situation was OK.

There were other pressures too. BP’s managers knew that the company was incurring huge overrun costs of up to $1 million a day on Deepwater Horizon, which could have contributed to their failure or unwillingness to recognise the warning signs.

Tinsley, Dillon and Madsen believe there are two reasons why most near misses are either ignored or misread:

The first is ‘the normalization of deviance’: the tendency, over time, to accept anomalies, particularly risky ones, as normal.

Diane Vaughan coined the phrase in her book ‘The Challenger Launch Decision’ to describe a culture that allowed a glaring mechanical anomaly on the space shuttle to gradually become a normal flight risk, which ultimately sent the crew to their deaths.

The second error is what Tinsley, Dillon and Madsen call the ‘outcome bias’. Put simply, when people observe successful outcomes they tend not to dig too deeply but, instead, focus on the success rather than on the complex and broken processes that lie beneath.

Other factors can also be in play. Sometimes when people see near misses or small disturbances they are reluctant to report the incident for fear of reprisals. A whistleblower’s lot can be a lonely and frightening one, which often means that problems go unreported because the culture of an organization militates against raising one’s head above the parapet.

But whistleblowers can prosper in unlikely situations. An enlisted seaman on a US aircraft carrier, very low on the food chain, not to mention the pay scale, discovered he had lost a tool on deck during a combat exercise. He knew this had the potential to cause a real catastrophe, as the wrench could get sucked into a jet engine, but he was also aware that owning up would halt the exercise and might lead to punishment. He reported the mistake, the exercise was stopped and all airborne flights were directed to land bases at considerable cost. What happened to the young man who caused this expensive mistake? Extraordinarily, rather than being punished he was commended by his commanding officer during a formal ceremony for having the bravery to own up.

Instilling in staff the importance of vigilance and the reporting of near misses can save a company from potentially devastating future consequences.

Taking everyday small incidents seriously is a vital aspect of business continuity and crisis management.

‘If it ain’t broke, don’t fix it,’ so the saying goes, but perhaps, just beneath the surface, there is a fatal problem that needs fixing; and just because something worked once doesn’t mean that it will again.

No one should take near misses as signs of sound decision-making when in fact they may be the early warning signs of a major disaster in the making.

Jim Preen is head of media services at Crisis Solutions and can be contacted at jim.preen@crisis-solutions.com.

• Date: 1st December 2011 • Region: UK/World • Type: Article • Topic: BC general
