
By Johan Pellicaan

Disaster recovery is imperative for the survival of any organization, and while many businesses have some form of strategy in place, they need to ensure they can maintain continuity should disaster strike. The concept is not new, so why are we still seeing many organizations fall victim to downtime?

Is it through lack of preparation, simply investing in the wrong solutions, or is disaster recovery not a priority? Whatever the reason, organizations are still suffering from avoidable outages, an issue that needs to be addressed. Downtime can have a detrimental impact on businesses of any size, and every minute is critical to a company's future, not just in terms of cost but reputation. For small and medium-sized businesses in particular, the damage can be so significant that they are not able to recover. Although it is hard to calculate the true cost of downtime, the Ponemon Institute estimated the average at $9,000 per minute. More often than not, these costs, alongside the reputational damage, can be avoided.

Disaster recovery

Disaster recovery (DR) asks the question: "how can an organization survive and respond to a wide variety of threats, ranging from small hiccups to catastrophic destruction?" The threats to ongoing operations range from human error and malicious attacks to natural disasters.

Organizations are all aware of what disaster recovery is and of the need to be able to recover data, but many businesses are only protected against a fraction of the threats, leaving them fundamentally vulnerable. As technology advances, new and critical threats appear every year and grow in sophistication. Organizations need to get ahead of this and constantly consider whether their entire IT infrastructure can maintain continuity. Today, businesses need to prepare in ways that involve both human and technological response.

The evolution of data backup

The concept of backup has evolved over the decades to overlap with more modern snapshot and replication technologies. The days of taking traditional full and incremental backups should be over: they were time-consuming and inefficient. Instead, businesses should implement recovery strategies that combine per-VM snapshot scheduling with replication, failover and recovery. With built-in VM snapshot replication, organizations save time, allowing IT admins to focus on the day-to-day running of business operations. Like traditional incremental backups, snapshots only capture data that has changed since the last snapshot, making them highly efficient for storage and enabling flexible scheduling.

It is important to know what level of protection each workload in your organization needs. With flexible snapshots, organizations can tier how often each data set is replicated, replicating data only as and when required and reducing the burden on storage capacity. This also allows organizations to prioritize data, recognizing that some data sets are more critical to operations than others.
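The tiering idea above can be sketched in a few lines of code. This is a hypothetical illustration, not any vendor's actual API: the `VMSchedule` class and `due_snapshots` function are invented names, and the intervals are example values showing how a critical database can be snapshotted far more often than an archive.

```python
# Hypothetical sketch of per-VM tiered snapshot scheduling.
# VMSchedule and due_snapshots are illustrative names, not a real product API.
from dataclasses import dataclass

@dataclass
class VMSchedule:
    vm_name: str
    interval_minutes: int   # how often this workload should be snapshotted
    last_snapshot_min: int  # time (in minutes) of the last snapshot taken

def due_snapshots(schedules, now_min):
    """Return the VMs whose snapshot interval has elapsed."""
    return [s.vm_name for s in schedules
            if now_min - s.last_snapshot_min >= s.interval_minutes]

# Tiered protection: the critical database every 5 minutes,
# the file share hourly, the archive once a day.
schedules = [
    VMSchedule("erp-db", 5, 100),
    VMSchedule("file-share", 60, 70),
    VMSchedule("archive", 1440, 0),
]
print(due_snapshots(schedules, 130))  # → ['erp-db', 'file-share']
```

A real scheduler would run this check continuously and fire the snapshot jobs; the point is that each workload carries its own interval, so storage and replication effort follow the data's criticality.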

Recovering critical data

Snapshots alone do not make a backup, even though they are extremely useful for the local recovery of data from a number of operational disasters. For a true backup strategy, snapshots must be replicated onto another device, preferably at another site. This adds an additional layer of protection and will allow the business to be up and running again even in the event of on-site damage.

After a natural disaster, hardware failure or even a fire, organizations with data stored off site can recover all critical workloads. With security breaches dominating the headlines, it is understandable that organizations focus on protecting their data from outside threats - but one of the leading causes of downtime is human error. Replicating data offsite minimizes the impact of any onsite damage, whatever the cause, as data can be pulled back from a secondary site.
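Because snapshots capture only what changed, replicating them to a secondary site need not copy everything. The sketch below, a simplified model rather than any specific product's mechanism, shows the idea: compare the current snapshot against the last one the remote site holds and ship only the delta.

```python
# Hypothetical sketch of incremental snapshot replication: only blocks that
# changed since the last replicated snapshot are sent to the secondary site.
# Snapshots are modelled as simple block_id -> data mappings for illustration.

def changed_blocks(prev_snapshot, curr_snapshot):
    """Blocks that are new or modified relative to the previous snapshot."""
    return {bid: data for bid, data in curr_snapshot.items()
            if prev_snapshot.get(bid) != data}

def replicate(remote, prev_snapshot, curr_snapshot):
    delta = changed_blocks(prev_snapshot, curr_snapshot)
    remote.update(delta)   # apply only the delta at the secondary site
    return len(delta)      # number of blocks actually transferred

prev = {1: "a", 2: "b", 3: "c"}
curr = {1: "a", 2: "B", 3: "c", 4: "d"}  # block 2 modified, block 4 added
remote = dict(prev)                      # secondary site already holds prev
sent = replicate(remote, prev, curr)
print(sent)  # → 2 (only two blocks cross the wire, not the full data set)
```

After the call, the remote copy matches the current snapshot even though only the changed blocks were transferred - which is what keeps offsite replication affordable on ordinary WAN links.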

How quickly can you recover? Failover is key

So, now you can recover critical applications and data - what next? Arguably just as important as the ability to recover is the time it takes. As soon as an organization suffers downtime it is up against the clock; every second counts. Businesses suffer not only financial losses but reputational damage, which can impact revenues for years to come. This is where failover comes in - an element which traditional disaster recovery methods lack.

Organizations need to be prepared for disasters where quick recovery on site is not an option, such as a sustained power outage or a natural disaster that damages physical appliances within the data center. Businesses need a simple failover strategy built into their overarching disaster recovery plan. With a failover process in place that enables clone replication and the ability to redirect data between sites, businesses can maintain operations within a matter of minutes, providing a solid recovery time objective (RTO).
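Redirecting workloads between sites can be pictured as recomputing a routing table: while the primary is healthy every workload runs there, and when it is not, each workload is pointed at the replica site holding its freshest snapshot. The function and site names below are invented for illustration; real failover orchestration also handles networking, licensing and split-brain concerns.

```python
# Hypothetical failover sketch: when the primary site is unreachable,
# redirect each workload to the site holding its most recent replica.

def failover(workloads, replicas, primary_up):
    """Return a routing table mapping each workload to an active site.

    replicas: workload -> {site_name: timestamp of newest snapshot there}
    """
    routing = {}
    for wl in workloads:
        if primary_up:
            routing[wl] = "primary"
        else:
            # Choose the replica site with the freshest snapshot, which
            # minimizes data loss (the recovery point) for this workload.
            site, _ = max(replicas[wl].items(), key=lambda kv: kv[1])
            routing[wl] = site
    return routing

replicas = {
    "erp-db": {"dr-site-a": 1200, "dr-site-b": 1180},  # snapshot timestamps
    "web":    {"dr-site-a": 1100},
}
print(failover(["erp-db", "web"], replicas, primary_up=False))
# → {'erp-db': 'dr-site-a', 'web': 'dr-site-a'}
```

The RTO is then dominated by how quickly this redirection happens and the replicas boot, rather than by restoring data from scratch.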

Individual file level recovery

While failover recovers an entire VM, there are cases where it is neither desirable nor practical to fail over a VM - for example, when an individual file or file server is needed rather than the entire VM. This is often the case when a file has been corrupted or accidentally deleted. Recovering the entire VM wastes time, so more often than not, employees simply start the document all over again. But with individual file level recovery, data can be recovered either on an individual file basis or as an entire VM - as the term suggests.
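Conceptually, file level recovery means searching the snapshot history for the file rather than mounting a whole restored VM. The sketch below is a simplified model with invented names: snapshots are represented as timestamped file maps, and the newest snapshot still containing the file wins.

```python
# Hypothetical sketch of individual file level recovery: walk snapshots from
# newest to oldest and restore the first copy of the requested file found,
# without restoring the whole VM image.

def recover_file(snapshots, path):
    """snapshots: list of (timestamp, {path: contents}) pairs, any order."""
    for ts, files in sorted(snapshots, key=lambda s: s[0], reverse=True):
        if path in files:
            return ts, files[path]
    return None  # the file was never captured in any retained snapshot

snapshots = [
    (100, {"/docs/report.txt": "draft v1"}),
    (200, {"/docs/report.txt": "draft v2"}),
    (300, {}),  # the file was deleted before this snapshot was taken
]
print(recover_file(snapshots, "/docs/report.txt"))  # → (200, 'draft v2')
```

Even though the latest snapshot no longer contains the deleted file, the search falls back to the most recent snapshot that does, so the employee gets the last saved version back instead of retyping it.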

With built-in failover and individual file level recovery, combined with replication and snapshots, organizations can create a complete disaster recovery strategy - a plan that not only allows critical data to be recovered, but allows it to be recovered within a matter of minutes.

We all understand disaster recovery, but it is time to consider whether your infrastructure can avoid downtime. People used to associate disaster recovery with enormous, dramatic natural disasters or rare events such as an office fire. But the definition has shifted: disasters are now commonplace, widespread and a reality for every connected organization.

The author

Johan Pellicaan is MD and VP EMEA at Scale Computing.
