
Should business continuity managers commit to a new ‘Always Be Testing’ routine?

By Larry Lang

Statistics show that most small to mid-sized businesses will experience at least one instance of system downtime a year. Once a year doesn't seem like much, but consider this: Aberdeen Group estimates that an hour of downtime costs a mid-sized business an average of $74,000. Then factor in the results of a Harris Interactive survey, which found that IT managers estimate recovery takes 30 hours on average.

Now that the cost has been put into perspective, are you sure your business can bounce back from even one instance of system downtime each year? Has your disaster recovery system been through regular real-world tests to find out? Unfortunately, only a small minority can answer this last question in the affirmative: a 2011 survey found that only 28 percent of small to mid-sized businesses had tested their backups at all.

New thinking is required when it comes to data system testing regimes

Many organizations simply aren’t aware of everything that can go wrong when recovering emergency backups, and if you never actually try to restore a file, application or server, you don't really know if you can. But because of the time and cost issues associated with tape, disk and cloud backup tests, IT professionals often resort to ‘workarounds’: they may perform a scaled-down version of a test in a partial environment or a partial format. For example, they may test only their organization's Exchange server, but not the SQL server. Or they might take a server down and bring up a virtual copy of it. The problem with these approaches is that testing is not done from ‘A to Z’ on a regular basis, with all data, on all servers, involving all hardware and components, and accounting for all changes in the entire environment. And those changes occur daily.

The answer is to commit your organization to weekly tests of the complete data disaster recovery system. Such a testing regime can help identify the molehills before they turn into mountains.

Performing weekly, real-world tests of your disaster recovery system doesn't have to be complex, costly, or a drain on time and resources. Your choice of disaster recovery solution has a lot to do with this. Hybrid cloud solutions, for example, offer on-demand and automatic testing that can be performed in minutes. Reassured by a weekly test, organizations can feel confident that total system recovery is only minutes away should a disaster hit.
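To make the idea of automated, scheduled testing concrete, here is a minimal Python sketch of a weekly restore-and-verify check. It is illustrative only, not a description of any particular product: it assumes backups are plain .tar.gz archives in a hypothetical /backups/weekly directory, with a sha256 manifest written alongside them at backup time. The principle it demonstrates is the one argued above: actually restore the data and prove it, rather than trusting that a copy exists.

#!/usr/bin/env python3
"""Minimal sketch of an automated weekly restore test (illustrative only).

Assumptions not taken from the article: backups are plain .tar.gz archives
under BACKUP_DIR, and a checksum manifest (sha256sums.txt, in the same
format written by the sha256sum utility) was created at backup time.
"""
import hashlib
import tarfile
import tempfile
from pathlib import Path

BACKUP_DIR = Path("/backups/weekly")        # hypothetical backup location
MANIFEST = BACKUP_DIR / "sha256sums.txt"    # hypothetical manifest file


def sha256(path: Path) -> str:
    """Checksum a file in 1 MB chunks so large restores don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def restore_and_verify(archive: Path) -> bool:
    """Restore the archive into a scratch directory and compare every checksum."""
    expected = {}
    for line in MANIFEST.read_text().splitlines():
        if line.strip():
            checksum, rel_path = line.split(maxsplit=1)
            expected[rel_path] = checksum
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive) as tar:
            tar.extractall(scratch)
        for rel_path, checksum in expected.items():
            restored = Path(scratch) / rel_path
            if not restored.exists() or sha256(restored) != checksum:
                print(f"FAIL: {rel_path} missing or corrupt after restore")
                return False
    print(f"OK: {archive.name} restored and verified")
    return True


if __name__ == "__main__":
    latest = max(BACKUP_DIR.glob("*.tar.gz"), default=None)
    if latest is None or not restore_and_verify(latest):
        raise SystemExit(1)   # non-zero exit lets a scheduler raise an alert

Run weekly by a scheduler such as cron, a non-zero exit from a check like this flags a problem long before a real recovery is ever needed.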

Another point to keep in mind is that it's not just the ability to test for data recovery that makes a hybrid cloud-based disaster recovery solution valuable. Rather, it's the solution's ability to recover applications and systems as well. Unfortunately, traditional tape and disk backup, or cloud backup alone, can't offer this.

Data redundancy issues

It’s been said that if your data doesn’t exist in more than one place, then it doesn’t exist at all, and there's certainly a kernel of truth to this adage. For the most part, data is stored on disk drives. These mechanical devices are certain to fail at some point, so if your data is on only one of them, you're essentially in a race against time to copy it to another place (and another and another…) before that fateful moment.

But multiple copies are not enough. Too many backup copies are stored in an inert archive format, inaccessible for live testing with the application that uses them. For instance, a database is copied to a new location. But if that location is not set up to run the application that accesses that database, how do you know the data is really there, really ready and really safe? That's why it is important not just to copy data, but to completely replicate the entire server and applications, so that they're always easy to test and always ready to go.
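The difference between ‘copied’ and ‘recoverable’ can be illustrated with a short, hypothetical sketch. It uses SQLite as a stand-in database engine so the example stays self-contained; the restored file path and the ‘orders’ table are assumptions, not details from the article. Instead of checking that a restored file exists, it opens the database and runs real queries: the kind of application-level check argued for above.

# Minimal, illustrative sketch: prove a restored database is usable, not just
# present. SQLite is a stand-in engine; the path and 'orders' table are hypothetical.
import sqlite3
from pathlib import Path

RESTORED_DB = Path("/restore-test/orders.db")   # hypothetical restored copy


def smoke_test(db_path: Path) -> bool:
    """Open the restored database and run real queries against it."""
    if not db_path.exists():
        print("FAIL: restored file is missing")
        return False
    conn = sqlite3.connect(str(db_path))
    try:
        # An integrity check plus an actual read exercises the data the way the
        # application would, which a byte-for-byte copy check cannot.
        (status,) = conn.execute("PRAGMA integrity_check").fetchone()
        row_count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    except sqlite3.Error as exc:
        print(f"FAIL: database restored but unusable ({exc})")
        return False
    finally:
        conn.close()
    print(f"OK: integrity_check={status}, orders rows={row_count}")
    return status == "ok"


if __name__ == "__main__":
    raise SystemExit(0 if smoke_test(RESTORED_DB) else 1)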

Conclusion

All in all, using a disaster recovery solution that enables immediate recovery of data, applications and systems, along with frequent and regular system testing, is the only way to fully safeguard a company's revenue, customers and reputation.

One thing is for sure: you can never count on your disaster recovery system performing 100 percent of the time if you test it 0 percent of the time. Further, given all the changes or missteps on a network, you must be able to test weekly: even a tiny change to your IT system increases the odds that something might go wrong later on.

Still not convinced that weekly testing is important? Then ask yourself this question: Would you feel comfortable erasing your critical data right now, and restoring it from your backups? If not, it's time to commit to a new routine: ‘Always Be Testing.’

The author

Larry Lang is chief executive officer, Quorum. Quorum provides assured, one-click backup, recovery and continuity, helping businesses safeguard their revenue, customers and reputation. The award-winning Quorum series of appliance and Disaster-Recovery-as-a-Service (DRaaS) hybrid cloud solutions makes continuity a reality for small to mid-sized companies, letting them recover from any type of disaster within minutes and ensuring backup data is safe.

• Date: 7th August 2013 • World • Type: Article • Topic: BC testing and exercising
