
Backup is broken: and here’s the fix

By Alistair Forbes

Backup seems simple: take the important files that you need, and make sure that they are duplicated in such a way that they can be recovered.

It’s clearly a necessity. Our own personal PCs nag us if we fail to back up our data, and everyone knows a dire tale of woe about a failure to back up, such as ma.gnolia, the company that collapsed after losing all of its customers’ data.

And yet simply having a backup isn’t enough: backup success rates today are between 75 and 85 percent. In some sectors, only three-quarters of backup recoveries succeed; the rest of those organizations, despite having a backup solution in place, were able to recover only some, if any, of their data (The Broken State of Backup, Gartner, 2014).

Why?

Big vs small

With the budget to choose from the latest high-end networks and devices, larger corporations are able to provide peace of mind that data will be available and recoverable. Multi-tiered systems and additional space are integral components of the backup and recovery plan, and increasingly the most successful plans also incorporate the cloud to provide an added layer of protection.

But for small to mid-sized organizations, non-profits and educational institutions, the funding and support for the latest backup and recovery technologies can be harder to justify. Keeping up with growing data and storage demands can mean significant ongoing investments in equipment, software, policies and personnel. To make up for limited resources, backup plans are often constructed from separate pieces in an attempt to bridge outdated systems to current data demands.

The result is uncertainty about whether the backup and recovery plan will be effective, and organizations may have no way of knowing until a recovery event occurs.

Why backup is broken

Despite evolving requirements and technology, many organizations are attempting to make old backup models meet new challenges. But what once worked to periodically back up data can no longer keep up with the advanced and constant flow of data that businesses are tasked with protecting today.

The Gartner presentation on The Broken State of Backup referenced above also pointed out that we can no longer ‘shut off’ data. Rather, organizations are dealing with a constant inflow of information that requires management on an ongoing basis.

There are several areas that contribute to the broken state of backup:

  • Lack of consistent testing and verification
  • Backup failures
  • Age and deterioration of media
  • Technology obsolescence.

Incomplete backups, the need to restore numerous incremental backups, user error, and software or hardware failure all contribute to recovery failure rates of between 15 and 25 percent. Reliance on old systems, and blind faith in their ability to recover data, means backup failures can be catastrophic for a business.
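
The first of those causes, inconsistent testing and verification, is also the most straightforward to automate. Below is a minimal sketch of a test-restore check, assuming a simple file-based backup that mirrors the source directory tree; the paths and the verify_backup helper are illustrative, not part of any particular backup product.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(source: Path, restored: Path) -> list[str]:
    """Compare every file under `source` against a test restore in `restored`.

    Returns a list of problems; an empty list means the test restore matched.
    """
    problems = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        copy = restored / src_file.relative_to(source)
        if not copy.exists():
            problems.append(f"missing from restore: {src_file}")
        elif sha256(copy) != sha256(src_file):
            problems.append(f"contents differ: {src_file}")
    return problems

if __name__ == "__main__":
    # Hypothetical paths: live data and the output of a scheduled test restore.
    issues = verify_backup(Path("/data/finance"), Path("/restore-test/finance"))
    print("test restore verified" if not issues else "\n".join(issues))
```

Run routinely, for example from a scheduler after each backup cycle, a check like this turns blind faith into evidence that a restore will actually work.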

Too often, businesses rely on backup and recovery as a failsafe, and while that is the hope and the goal, a lax attitude and overconfident reliance on it have made users less careful about protecting data on an ongoing basis.

The obstacles to efficient backup

Despite investments in the tools and technology that should ensure data protection, many businesses are creating their own obstacles to efficient backup.

Relying on a ‘single tier’: because backup is often out of sight and out of mind until an emergency occurs, organizations often wait long stretches between backing up data, and even longer between test restores, if they do them at all. This means data and files risk being out of date by the time the backup occurs or the recovery is needed. Relying on a single tier of backup is the equivalent of putting all your eggs in one basket. Without added tiers of protection, businesses are counting on the backup and recovery being both available and successful, leaving themselves with no options if an error occurs. Redundancy is key to protecting data.

Technology integrations: using multiple technologies in tandem does offer the promise of providing added security, but only if those systems integrate and communicate seamlessly. When virtual machines are brought into the mix, this integration becomes even more important.

Virtual backup: even with the dramatic increases in the volume and diversity of business data, many businesses invest in external disk storage such as storage area networks (SANs) that provide only an on-site recovery solution. Replicating this infrastructure to provide off-site protection can be costly. Instead, what’s needed is a shift toward an approach that includes the cloud. Pairing on-site and hosted capabilities, both managed using the latest generation of software, can dramatically improve the speed of backup and the ability to serve customers’ backup needs. The unknown or new challenges of backing up virtual machines can act as another obstacle for businesses as they navigate whether to mirror, share or copy virtual files.

The right availability mix: how readily available should the data be, and at what speed can or should it be recovered? Finding the right balance between instant restore for fast access to data and freeing up usable space can be tricky for any organization, particularly one new to navigating the potential of hybrid backup models.

Compliance: regulations such as the UK Data Protection Act and PCI DSS have always created, and will continue to create, challenges and obstacles for IT broadly, and backup is no exception. While changes to compliance standards are necessary and inevitable, many organizations are ill equipped to keep up with the rapid pace of those changes, and often lack the right infrastructure to adapt adequately.

The restoration revolution

Whether data is lost due to a catastrophic event or simply user error, we need better systems in place to plan for these potential issues and the inevitable need for recovery. In the 15-25 percent of backups that don’t work, the issues that caused the backup to fail were likely not discoverable before the backup was attempted. It’s time to approach backup differently: from an ongoing perspective.

The cloud may seem to be just an overused buzzword at times, but it’s now a major part of IT and it can’t be ignored. Integrating cloud storage with intelligent backup software means data can be prioritised, categorised and backed up/recovered in the most efficient way using either local copies or cloud-based storage as appropriate.

In a recovery situation, data that has been classified as high priority can be restored almost immediately, whether from a local copy or from the cloud, ensuring that critical operations are brought back first and quickly. Lower priority files and data can be addressed as makes sense financially and in terms of time.
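
As an illustration of that kind of prioritised recovery, here is a minimal sketch. It assumes backed-up items were tagged with a priority and a storage location when they were catalogued; BackupItem and restore_item are hypothetical names, not the API of any specific backup product.

```python
from dataclasses import dataclass

@dataclass
class BackupItem:
    name: str
    priority: int   # 1 = business-critical; higher numbers can wait
    location: str   # "local" or "cloud"

def restore_item(item: BackupItem) -> None:
    # Hypothetical: call whichever storage (local disk or cloud) holds this copy.
    print(f"restoring {item.name} (priority {item.priority}) from {item.location}")

def prioritised_restore(catalogue: list[BackupItem]) -> None:
    """Restore business-critical data first, preferring local copies for speed."""
    for item in sorted(catalogue, key=lambda i: (i.priority, i.location != "local")):
        restore_item(item)

if __name__ == "__main__":
    prioritised_restore([
        BackupItem("order-database", priority=1, location="local"),
        BackupItem("mail-archive", priority=3, location="cloud"),
        BackupItem("customer-records", priority=1, location="cloud"),
    ])
```

Sorting on priority first and storage location second is one simple way to encode ‘critical data first, fastest available copy preferred’.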

Revamping backup strategies

Fast, secure and reliable backup is a business-critical function. For organizations large and small, finding the right combination of tools and support to provide backup-as-a-service (BaaS) in a cloud-based environment should be a priority.

Revamping a backup investment strategy can seem daunting, but when evaluating options, businesses should weigh the following:

  • Increased efficiency: how does the renewed strategy help to eliminate ineffective uses of resources, whether time, personnel or storage space?
  • Scalability: how flexible is the backup and recovery approach? Does it allow for varying levels of recovery speed, storage and prioritisation?
  • Successful backup: consider the success rates of the overall solution and strategy for backups as well as for restores.
  • Easy management and upgrades: unless a team is equipped to effectively manage and upgrade a backup solution, the promise of its value is meaningless. Tools should be easily integrated into existing programs and work well with virtual and local components.

An increasingly attractive approach to backup and recovery is a hybrid cloud backup, or disk-to-disk-to-cloud (D2D2C), which uses both on-site and cloud-based storage mechanisms seamlessly in a way that offers a long-term strategy for reliable backup and recovery. This approach offers comprehensive and integrated backup and recovery capabilities without the need to buy and manage the actual infrastructure of a second tier.

With the right mix of software, D2D2C allows users to back up data to a local disk as well as to the cloud. The data can be stored on a network-attached device and easily accessed locally. When recovery is needed, this approach can help organizations determine the best place, either locally or remotely, from which to restore the data.
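
A minimal sketch of that D2D2C flow is shown below, assuming both tiers are reachable as file paths; LOCAL_TIER, CLOUD_TIER and the copy logic are illustrative stand-ins for whatever backup software or storage APIs are actually in use.

```python
import shutil
from pathlib import Path

LOCAL_TIER = Path("/mnt/nas/backups")    # hypothetical network-attached device
CLOUD_TIER = Path("/mnt/cloud/backups")  # hypothetical mount/gateway for cloud storage

def backup(source: Path) -> None:
    """Disk-to-disk-to-cloud: copy to the local tier first, then replicate to the cloud tier."""
    for tier in (LOCAL_TIER, CLOUD_TIER):
        target = tier / source.name
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, target)

def restore(name: str, destination: Path) -> Path:
    """Prefer the fast local copy; fall back to the cloud copy if the local tier is unavailable."""
    destination.mkdir(parents=True, exist_ok=True)
    for tier in (LOCAL_TIER, CLOUD_TIER):
        candidate = tier / name
        if candidate.exists():
            shutil.copy2(candidate, destination / name)
            return tier
    raise FileNotFoundError(f"no backup copy of {name} found in any tier")

if __name__ == "__main__":
    backup(Path("/data/payroll.db"))
    print("restored from", restore("payroll.db", Path("/data/recovered")))
```

The point of the second copy is exactly the redundancy discussed earlier: if the network-attached device has failed, the same restore path simply falls through to the cloud tier.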

It’s no longer enough to feel secure because everything important was saved to a back-up drive sometime in the last month. Innovative strategies are driving a change in the way businesses view the protection of data.

The author
Alistair Forbes is general manager of LogicNow.

• Date: 3rd November 2014 • UK/World • Type: Article • Topic: ICT continuity
