Unexpected storage challenges in cloud computing environments

By Eric Burgener.

There are undeniable benefits to cloud computing: the elastic computing model it supports provides unprecedented flexibility to respond rapidly to changing business requirements, while server and desktop consolidation can lead to much higher resource utilisation across your IT infrastructure, streamline common management tasks, improve data protection and recovery capabilities, and save on energy and floor space costs.

However, if you are starting to look at building your own cloud, there are a few things you should know up front about storage. These issues will affect your planning whether you are an enterprise looking to create a private cloud or a provider looking to build the scalable infrastructure your clients will need.

Storage performance
Virtual computing technologies inevitably form the foundation of any cloud computing infrastructure; however, the storage performance and cost formulas that you’ve been using to size and deploy storage configurations in physical server environments do not map well to virtual environments.

In fact, the performance of any storage configuration will degrade by anywhere from 50 percent to 200 percent or more when you move to a virtual environment, depending upon which storage technologies (FC, SAS, SATA, etc.) you are using. This is because I/O patterns differ markedly between physical and virtual environments, and while legacy storage architectures do a good job of handling physical server I/O, they are much less effective at handling virtual server I/O.

Also, to bring your environment back up to your stated performance requirements, you will end up purchasing a significant amount of additional storage hardware, driving costs up. This problem is bad in virtual server environments, but it is even more serious in virtual desktop environments: the wider the performance gap, the more it costs to close it with additional storage.
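To make the budgeting impact concrete, here is a minimal sizing sketch in Python. It assumes the slowdown can be modelled as a proportional increase in I/O latency (so a 200 percent slowdown leaves each drive delivering a third of its rated IOPS); the IOPS target and per-drive rating are placeholder figures, not vendor data.

import math

def disks_required(required_iops: float, iops_per_disk: float, penalty: float) -> int:
    """Disks needed to hit an IOPS target when virtualised I/O patterns
    cut each disk's effective throughput; penalty = 0.5 models a 50 percent
    slowdown, 2.0 a 200 percent slowdown."""
    effective_iops = iops_per_disk / (1 + penalty)
    return math.ceil(required_iops / effective_iops)

# Illustrative figures only: a 20,000 IOPS target on drives rated at 180 IOPS each.
print(disks_required(20_000, 180, 0.0))  # physical-server baseline: 112 disks
print(disks_required(20_000, 180, 0.5))  # 50 percent slowdown: 167 disks
print(disks_required(20_000, 180, 2.0))  # 200 percent slowdown: 334 disks

Even under these illustrative assumptions, the worst case roughly triples the hardware needed to hit the same performance target.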

Thin provisioning
Thin provisioning is a valuable technology that allows you to use physical storage capacity much more efficiently. Most servers use only a small percentage of the storage allocated to them up front, yet once that storage is allocated it is unavailable to anyone else. Thin provisioning instead allocates storage on demand in real time, so that capacity is only consumed as data is actually written. The downside, however, is that unless you are buying high-end, enterprise-class arrays, real-time thin provisioning can add latencies which have an appreciable impact on the perceived performance you deliver to your end users.

Virtualization platforms from VMware, Microsoft, and Citrix offer native thin provisioning options, but these impose significant performance degradation. Generally you are given a choice between a higher-performance storage device (a ‘thick’, or fully allocated, virtual disk) and one that uses storage very efficiently (a ‘thin’ virtual disk), but you can’t get both at the same time. This again has an appreciable cost impact: if you use thick virtual disks for performance (which is what most organizations choose), you consume storage capacity at a rapid rate. If you use thin virtual disks, on the other hand, you use storage very efficiently but have to invest in additional storage hardware to win back the performance lost to the latencies associated with thin provisioning.
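The capacity side of that trade-off is easy to model. The sketch below uses assumed VM counts, allocation sizes and utilisation rates to show how much physical capacity thick and thin allocation consume for the same estate; it ignores thin-provisioning metadata overhead and, of course, the latency penalty discussed above.

def capacity_consumed_gb(vm_count: int, allocated_gb_per_vm: int,
                         utilisation: float, thin: bool) -> int:
    """Physical capacity consumed by a set of virtual disks.

    Thick disks consume everything allocated up front; thin disks consume
    only what has actually been written (metadata overhead ignored)."""
    if thin:
        return int(vm_count * allocated_gb_per_vm * utilisation)
    return vm_count * allocated_gb_per_vm

# Assumed figures: 500 VMs, 40 GB allocated each, roughly 30 percent actually written.
print(capacity_consumed_gb(500, 40, 0.30, thin=False))  # 20,000 GB thick
print(capacity_consumed_gb(500, 40, 0.30, thin=True))   #  6,000 GB thin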

Snapshot technologies
Disk-based snapshots have some very important uses in virtual environments. With them, you can handle provisioning tasks efficiently and implement data protection regimens that help you deploy new desktops faster while reducing both recovery times and the amount of data lost on recovery. Most server virtualization vendors offer snapshot APIs to make this process easier.

There are three critical things to look for when evaluating snapshot technologies. First, do they interface with hypervisor-level APIs like Windows VSS (on Hyper-V), the vStorage API (on VMware), or the XenAPI (on XenServer)? This type of support helps to ensure that you create consistent, recoverable snapshots. Second, how many snapshots can your technology support? If you are implementing a virtual desktop project with 3,000 desktops, and your snapshot technology only supports 512 snapshots at a time, refreshing those desktops each time a new patch comes out can be problematic. And third, how well do your snapshots perform? The performance of most snapshot technologies tends to degrade as more snapshots are taken. Will you retain the performance you need as you scale to the requirements of your environment?
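The snapshot-count question is simple arithmetic, sketched below for the 3,000-desktop example; the batch model (one snapshot per desktop, refreshed in waves) is an assumption made purely for illustration.

import math

def refresh_waves(desktop_count: int, max_snapshots: int) -> int:
    """Minimum number of refresh batches if each desktop needs one snapshot
    and only `max_snapshots` can exist at a time."""
    return math.ceil(desktop_count / max_snapshots)

print(refresh_waves(3_000, 512))    # 6 separate refresh waves with a 512-snapshot ceiling
print(refresh_waves(3_000, 4_096))  # 1 wave if the limit comfortably exceeds the estate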

Shared storage
To fully leverage virtualization technologies, you’ll want to look closely at shared storage configurations. If the storage that supports your virtual servers resides on a centrally shared pool, you can move virtual machines (VMs) around at will to meet high availability, workload balancing, and online maintenance requirements. However, the prices you pay for storage area network (SAN) arrays will likely be higher, on a per GB basis, than what you were paying for direct attach storage (DAS) or storage internal to your physical servers.

This problem is particularly acute in virtual desktop projects: if you are moving from physical desktops, it’s likely that much of the storage was on very inexpensive IDE drives. When you consolidate your desktop storage onto SAN arrays, you may be paying up to one hundred times more on a per GB basis for that storage. While this centralization does deliver significant operational savings, don’t let the added cost of moving your desktop storage to the SAN catch you by surprise.
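A quick back-of-the-envelope model makes the per-GB multiplier visible. The prices and capacities below are placeholders, not market data; the point is only how quickly a two-orders-of-magnitude price difference compounds across an estate.

def estate_storage_cost(desktops: int, gb_per_desktop: int, cost_per_gb: float) -> float:
    """Raw capacity cost for a desktop estate at a given per-GB price."""
    return desktops * gb_per_desktop * cost_per_gb

das_cost = estate_storage_cost(3_000, 40, 0.10)   # assumed $0.10/GB on local IDE drives
san_cost = estate_storage_cost(3_000, 40, 10.00)  # assumed $10.00/GB on a SAN array
print(f"DAS: ${das_cost:,.0f}   SAN: ${san_cost:,.0f}   multiplier: {san_cost / das_cost:.0f}x")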

The financial bottom line
These issues have important business implications as you start to architect your cloud environment, and there are many examples in the industry of projects that have been limited, delayed, or derailed entirely because a customer ran out of storage budget before all of them could be adequately addressed. Cost per server and cost per desktop are key metrics when evaluating any cloud project, and these surprises have tripped up many an unsuspecting user. If you are an enterprise deploying a private cloud, your preliminary ROI calculations will take a big hit if they do not take these issues into account before you set budgets for your project. If you are a cloud provider, the attendant infrastructure costs can erode your margins and/or your ability to price your offerings competitively.

A best practice when designing cloud environments is to understand, up front, the performance and cost profile of the storage configurations needed to meet your requirements, and to know going in that your experience from the physical server world is not directly applicable.
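One way to put that advice into practice is to build the storage penalties into your cost-per-desktop metric before budgets are set. The sketch below folds an assumed performance over-provisioning factor into a per-desktop storage cost; every figure is a placeholder to be replaced with numbers from your own environment and vendor quotes.

def storage_cost_per_desktop(gb_per_desktop: int, cost_per_gb: float,
                             performance_overprovision: float) -> float:
    """Per-desktop storage cost, where `performance_overprovision` is the extra
    hardware bought purely to recover performance lost to virtualisation
    (e.g. 0.5 means 50 percent more than the raw capacity maths suggests)."""
    return gb_per_desktop * cost_per_gb * (1 + performance_overprovision)

print(storage_cost_per_desktop(40, 10.00, 0.0))  # capacity-only estimate: $400/desktop
print(storage_cost_per_desktop(40, 10.00, 0.5))  # with a 50 percent buffer: $600/desktop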

Author: Eric Burgener, VP of product management, Virsto Software

• Date: 6th May 2011 • Region: World • Type: Article • Topic: Cloud computing
