Leveraging virtual machines for business continuity

Jason Buffington, CBCP and Microsoft MVP, provides an overview of virtualization and its business continuity uses.

Discussions around server virtualization typically focus on consolidation, easier management, and the ability to run older applications and operating systems on newer platforms. However, there are some interesting and, in fact, exciting business continuity solutions enabled by server virtualization. Specifically, by combining virtual machines with a data protection/replication technology (the heart of most business continuity approaches), one can:

* Provide more complete high availability. While replication/availability solutions can protect almost any single application flawlessly, there are limitations in ‘many to one’ availability configurations, particularly when multiple conflicting applications are involved.

* Enable more cost-effective disaster recovery. While replication technologies can absolutely deliver data to a scaled-back remote site, virtual machines can reduce the secondary infrastructure even further.

* Create autonomy within a redundant infrastructure. For large corporations that wish to keep one business unit's resources separate from another's, as well as for regional hot site providers wanting to keep one client's environment separate from another's, virtual machines allow multiple customers to leverage shared hardware without sacrificing security.

This article explores these three business continuity strategies, all based on enhancing one's data protection/replication/availability solution with a virtual infrastructure.

Replication software, such as Double-Take from NSI, has for years provided availability solutions for critical applications like Microsoft SQL and Exchange, as well as for file services. In the case of file services, a true ‘many to one’ solution is achievable out of the box: the target server can assume multiple machine names, IP addresses, and file shares, delivering a very cost-effective and scalable availability solution. Even in the case of Microsoft SQL (2000 and 2005), ‘few to one’ is easily deliverable. But some applications don't mix.

For example, one would not configure a SQL Server, an Oracle server, and a Lotus server to fail over to a common target. As a basic rule of thumb, if the applications would not peacefully coexist on a production server, then they will not peacefully coexist on the target. Similarly, one might have a new Exchange 2003 server and an older Exchange 5.5 machine; again because the applications cannot coexist on a common target, one's availability choices might be limited. This is where virtual machines add value to a consolidated high availability solution.
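The coexistence rule of thumb above can be expressed as a simple consolidation-planning check: only let sources share a failover target if every pair of their applications is known to coexist. A minimal sketch in Python (the conflict table and application names are illustrative, not drawn from any product):

```python
# Hypothetical planning check: applications that would not peacefully
# coexist on one production server will not coexist on one target.
# The conflict pairs below are illustrative examples from the article.
CONFLICTS = {
    frozenset({"SQL Server", "Oracle"}),
    frozenset({"SQL Server", "Lotus"}),
    frozenset({"Oracle", "Lotus"}),
    frozenset({"Exchange 2003", "Exchange 5.5"}),
}

def can_share_target(source_apps):
    """Return True if every pair of applications in source_apps
    can coexist on a common failover target."""
    apps = list(source_apps)
    for i in range(len(apps)):
        for j in range(i + 1, len(apps)):
            if frozenset({apps[i], apps[j]}) in CONFLICTS:
                return False
    return True
```

Under this table, file services and SQL Server could share a target, while Exchange 2003 and Exchange 5.5 could not, which is exactly the situation that motivates giving each its own virtual machine on the availability host.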

Using the last example, a physical server that is intended to provide high availability might be configured with two virtual machines: one to protect the Exchange 2003 source and another to protect the Exchange 5.5 machine. This allows differing production servers with perhaps conflicting applications to leverage a common availability platform.

There is an interesting variation on that approach if one has already chosen to deploy virtual machines in the production environment. If one already has a virtual machine with a properly configured operating system, application, and data set, then the entire production server is neatly ‘packaged’ into a few (albeit large) files that represent the virtualized disk drives. By configuring replication technology under the virtual machine, meaning that it runs from the ‘host’ OS and not from within the ‘guest’ OS, one can actually replicate the literal disk drives that make up the production server. At this point, providing availability for the virtual machine is simply a matter of starting the virtual machine from the redundant set of ‘disks’ (which, again, are really just large files). This is a symbiotic solution: while one must add virtualization on top of replication, the dependency runs the other way as well. Server virtualization products like Microsoft Virtual Server 2005 and VMware GSX cannot deliver availability of virtual machines between disconnected hosts without an efficient byte-level replication technology.
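The host-level approach hinges on efficiently keeping a handful of large virtual disk files synchronized with a standby copy. As a rough illustration of the idea only (not any vendor's implementation), the following Python sketch copies just the blocks of a virtual disk file that differ from the standby copy; file names and the block size are illustrative, and same-sized source and target files are assumed:

```python
BLOCK_SIZE = 64 * 1024  # illustrative 64 KB replication block

def replicate_changed_blocks(source_path, target_path):
    """Copy only the blocks of source_path that differ from
    target_path, mimicking block-level replication of a virtual
    disk file. Assumes both files already exist and are the same
    size. Returns the number of blocks rewritten."""
    written = 0
    with open(source_path, "rb") as src, open(target_path, "r+b") as dst:
        while True:
            offset = src.tell()
            s_block = src.read(BLOCK_SIZE)
            if not s_block:
                break
            dst.seek(offset)
            d_block = dst.read(len(s_block))
            if s_block != d_block:
                # Rewrite only the changed block at the same offset.
                dst.seek(offset)
                dst.write(s_block)
                written += 1
    return written
```

Because a running guest touches only a small fraction of its virtual disk between passes, transmitting changed blocks rather than whole disk files is what makes replicating a virtual machine between disconnected hosts practical.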

In either configuration, whether protecting production servers to autonomous virtual targets, or replicating the entire virtual machine between physical platforms, it is the combination of server virtualization and replication/availability technologies that offers perhaps the most scalable and most flexible availability solution achievable today.

By definition, a "recovery" infrastructure is "redundant". It is for this reason that disaster recovery projects either die under "no budget" or are reduced to only the smallest subset of servers that can "justify" the redundancy. The quotation marks in the earlier sentences are there not just for emphasis but to highlight a common misunderstanding: that disaster recovery necessarily carries exorbitant cost.

First, one should realize that it does not cost a lot to protect a little. Smaller infrastructures spend less to protect themselves than larger infrastructures do, but in all cases one will spend less on the redundant resources than on production. Replication technology, particularly host-based software, provides even more cost savings by protecting multiple sources to fewer targets: to protect 100 servers, one might only need 20 targets.

Leveraging virtual machine technology provides even more cost savings potential. For example, if delivering disaster recovery for multiple Windows domains (within a large corporation or from multiple separate clients), one would normally need a dedicated domain controller for each domain. This is an easy function to offload to a virtual machine, thereby saving hardware costs as well as space and maintenance costs. Also, in the same spirit as the earlier availability discussion, server virtualization allows conflicting production resources to share the cost of a common disaster recovery platform. By protecting/replicating from the host OS and not from within a virtual machine, one might protect multiple vendors' databases to a common target, or even protect Linux and other non-Windows virtual machines to a DR site. This last configuration is especially important for emerging and non-traditional operating systems that do not otherwise have reliable replication technologies available to them.

Combining the two earlier solution approaches and recognizing the autonomy of virtual machines, it is easy to foresee a service-oriented business model whereby regional systems integrators might offer hot site services for their clientele. This idea has particular appeal in the SMB sector, where one might assume that a tier 1 disaster recovery site is not achievable. After all, since small and midsized businesses do not have dedicated IT experts, their first phone call after a crisis is likely to be to their reseller partner anyway. Why shouldn't the reseller partner be the host for disaster recovery?

In the past, an outsourced hot site might have only been feasible for the largest of companies, in part due to the hardware expenditure related to supporting a vast clientele. And while certainly there are numerous other compelling reasons for outsourced recovery services, there is also admittedly a class of customer that cannot justify that expenditure.

By implementing virtual machines, a regional integrator or reseller can alleviate much of the hardware cost burden by sharing each physical asset across multiple paying customers. In the past, one typically could not deliver that solution because of the concern that Client-A might somehow gain access to information from Client-B. With virtual machines, each client can send its data to ‘dedicated’ target server(s). This ensures privacy as well as flexibility. In fact, if the hot site provider needs to do maintenance on a target platform, one simply utilizes the availability solution described earlier to move the virtual ‘target’ machine to an alternate physical resource, while still maintaining service level agreements (SLAs) with each client. And as a particular client's usage increases, the hot site provider simply migrates the virtual ‘target’ machine from being shared among other small clients, to a dedicated client platform, and later across multiple physical platforms. But in this case, the hardware expenditure is driven by the clients' billable use and not the historical requirement for redundant, isolated hardware.

It is important to recognize that every business continuity solution for IT begins with a reliable means of protecting the data. All subsequent capabilities, whether high availability, disaster recovery, or even service model arrangements must first have adequate data protection/replication. Adding server virtualization to these configurations increases flexibility and often lowers costs, sometimes creating new solutions altogether.

About the Author:
Jason Buffington has been working in the networking industry since 1989, with a majority of that time focused on data protection. He is a Certified Business Continuity Planner and was named one of Microsoft's Most Valuable Professionals in Storage for 2005. Jason currently serves as the director of business continuity for NSI Software, enabling high availability and disaster recovery via replication software.


Date: 2nd Dec 2005 •Region: US/World •Type: Article •Topic: IT continuity

Copyright 2010 Portal Publishing Ltd