Building a storage-efficient data center

By Len Rosenthal.

It seems that every company is planning to build a new data center over the next year or two. They claim to be running out of floor space and/or power to run, cool and support the hundreds or thousands of servers housing petabytes of data in their new virtualized IT infrastructures.

It’s ironic that, even with the widespread deployment of virtualization technology and the resulting consolidation of resources, companies are still buying more servers and associated networking infrastructure every year. Fortunately, the growth in servers (14 percent in the latest Gartner report) is small compared to the growth in data, which, according to Gartner, exceeds 50 percent per year. As a result, as more data centers are built, the largest consumer of floor space, power and cooling will be the data storage systems and their associated networking. So what can be done to control the runaway growth in capital and operating expenses driven by this data explosion? Quite a lot: with the right combination of tools and processes, storage and associated expenses can be cut by 50 percent or even more.

Let’s start with the storage networking costs. In enterprise data centers, the vast majority of storage is connected to servers via Fibre Channel (FC) switching because of the proven reliability and performance advantages of this technology. IT managers tend to believe that FC storage area networks (SANs) are inherently expensive, but they are mistaken: SANs can be expensive, but they don’t have to be, provided end users can see how the FC switches are actually being used, both in real time and through historical analysis. This visibility is provided by monitoring hardware and software and is often called traffic analysis. While traffic analysis has been commonly used on LANs and WANs for over a decade, it is, surprisingly, rarely used on SANs. Today the typical utilization rate of SAN switches is below 5 percent, which means that end users are significantly over-provisioning FC SAN switching and therefore wasting financial and hardware resources.
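To illustrate the kind of calculation traffic analysis performs, here is a minimal sketch in Python. It estimates a port’s average and peak utilization from historical throughput samples; the 800 MB/s figure assumes an 8 Gbit/s FC port and the sample values are purely hypothetical, not drawn from any real fabric.

# Hypothetical sketch of SAN traffic analysis: estimate average and peak
# utilization of one FC switch port from historical throughput samples.
# The 800 MB/s capacity assumes an 8 Gbit/s Fibre Channel port; both the
# capacity and the samples below are illustrative assumptions.

PORT_CAPACITY_MB_S = 800.0  # assumed usable throughput of one 8 Gbit/s FC port

def port_utilization(samples_mb_s):
    """Return (average, peak) utilization as fractions of port capacity."""
    avg = sum(samples_mb_s) / len(samples_mb_s)
    peak = max(samples_mb_s)
    return avg / PORT_CAPACITY_MB_S, peak / PORT_CAPACITY_MB_S

# Example: a port that mostly idles but spikes occasionally (MB/s)
samples = [8, 12, 6, 15, 290, 10, 9, 14, 11, 7]
avg_util, peak_util = port_utilization(samples)
print(f"average utilization: {avg_util:.1%}, peak: {peak_util:.1%}")

Run across every port in the fabric, this is the sort of analysis that reveals the single-digit average utilization described above.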

So does that mean IT managers are buying 20X more switching capacity than they need and could cut their FC SAN switching costs by 95 percent? Not exactly. Many ports experience performance spikes, and it’s a SAN architecture best practice to design in some headroom for these surges. Nevertheless, by analyzing historical traffic patterns it is possible to forecast and plan for these peaks. Just as VMware enabled end users to consolidate servers running at 10-20 percent utilization, SAN switching can be consolidated too, and the opportunity is even greater. With access to their SAN I/O traffic data, IT managers planning their next data center should be able to buy no more than 50 percent of the switching they normally would. Of course, what they were planning to purchase was probably what their switch vendor was proposing, so all I can say is ‘buyer beware’. The opportunity to reduce FC SAN switching costs by 50 percent or more is there to be exploited immediately.
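To make the consolidation argument concrete, here is a minimal, hypothetical sizing sketch: it sums a high percentile of each port’s historical load, adds headroom for spikes, and divides by per-port capacity to estimate how many ports are really needed. The 95th percentile, the 1.5x headroom factor and the 800 MB/s port capacity are illustrative assumptions, not recommendations.

# Hypothetical sizing sketch: estimate how many FC ports a consolidated SAN
# actually needs, given historical per-port traffic. The percentile, headroom
# factor and port capacity are illustrative assumptions only.
import math

PORT_CAPACITY_MB_S = 800.0   # assumed usable throughput per 8 Gbit/s FC port
HEADROOM = 1.5               # extra capacity reserved for traffic spikes

def percentile(values, pct):
    """Simple nearest-rank percentile (no external libraries needed)."""
    ordered = sorted(values)
    rank = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[rank]

def ports_needed(per_port_history, pct=95):
    """Estimate required ports from the 95th-percentile load on each port."""
    busy_load = sum(percentile(h, pct) for h in per_port_history)
    return math.ceil(busy_load * HEADROOM / PORT_CAPACITY_MB_S)

# Example: 24 lightly used ports, each sampled over time (MB/s)
history = [[10, 15, 30, 60, 20, 12] for _ in range(24)]
print(f"estimated ports needed after consolidation: {ports_needed(history)}")

In practice, redundancy, multipathing and physical fan-out constraints push the real requirement higher than this raw arithmetic suggests, which is why 50 percent is a more realistic target than the figures a sketch like this produces.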

Now for the bigger savings. Undoubtedly, expenses related to the data storage systems themselves are at least five times higher than the storage networking expenses, so what can be done to lower them? By now, most IT organizations have already deployed some form of storage tiering, thin provisioning and data deduplication. These are all good investments and should be continued, but there is an even bigger savings opportunity to be realized. This newer approach is called performance-based tiering, and it can have a tremendous effect on storage deployments and costs. Nearly all existing tiering solutions base their decisions on frequency of data access or on IOPS measured at the drives themselves; these are certainly valuable approaches, but they miss the bigger opportunity: analyzing application performance and tiering the data accordingly.

All applications have response time goals or SLAs. Although many IT departments have application performance monitoring (APM) tools in place, these cannot determine the latency the SAN adds to response time. Real-time SAN monitoring solutions let end users determine the latency effects of the storage and SAN on a per-application basis, and that can have a profound effect on what, where and how storage is then deployed. With business-critical applications such as those based on Oracle, SAP or DB2, most IT departments default to expensive, high-performance, power-hungry tier 1 storage built on 15K RPM drives in a RAID 10 (mirrored) configuration, on the assumption that these applications are so I/O-intensive that nothing less will do. Application vendors reinforce this by always recommending the highest-performance storage available, so that they can never be blamed for performance problems.
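A minimal sketch of the performance-based tiering idea, assuming per-application SAN latency measurements and SLA targets are already available; the application names, latency figures and the 50 percent headroom threshold below are hypothetical.

# Hypothetical sketch of performance-based tiering: flag applications whose
# measured SAN I/O latency leaves enough headroom against their SLA that they
# are candidates for lower-tier (10K SAS or 7200 RPM SATA) storage.
# SLA targets, measured latencies and the threshold are illustrative only.

HEADROOM_FOR_DEMOTION = 0.5  # candidate if measured latency <= 50% of its SLA

apps = [
    # (application, measured 95th-percentile SAN latency in ms, SLA target in ms)
    ("oracle-erp",     4.0, 10.0),
    ("sap-reporting",  2.5, 20.0),
    ("db2-batch",      6.0, 30.0),
    ("exchange-mail", 18.0, 20.0),
]

for name, measured_ms, sla_ms in apps:
    ratio = measured_ms / sla_ms
    if ratio <= HEADROOM_FOR_DEMOTION:
        verdict = "candidate for lower-tier storage"
    else:
        verdict = "keep on current tier"
    print(f"{name}: {measured_ms:.1f} ms vs {sla_ms:.1f} ms SLA -> {verdict}")

The point is simply that the decision is driven by measured response time against the SLA, not by drive-level IOPS or access frequency.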

The stark reality is that if we measure SAN I/O response time per application, response times can be just as good on 10K RPM SAS drives, or even 7200 RPM SATA drives in a RAID 5 configuration. Of course this is not true for all applications, but our studies of hundreds of enterprise applications have shown that anywhere between 50 and 70 percent of applications could be deployed on lower-tier storage with no meaningful increase in response time. Again, the key is having the right performance monitoring and analysis solution to measure latency both in real time and historically. Compare the acquisition and operating costs (floor space, power, cooling, maintenance) of storing the same amount of data on 15K RPM-based arrays in a RAID 10 configuration versus 7200 RPM SATA-based arrays in a RAID 5 configuration, and the difference is staggering: nearly 70 percent lower. Doing the math, if we can cut storage costs by 70 percent on the storage associated with 70 percent of applications, we save nearly 50 percent of total storage costs. That’s between $5K and $10K saved for every TB of storage deployed.
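The ‘nearly 50 percent’ figure is simply the product of the two percentages. A quick worked version follows, with an assumed $15,000-per-TB tier 1 baseline; the article quotes the resulting saving as $5K to $10K per TB, so the baseline here is purely illustrative.

# Worked version of the savings arithmetic above: cutting per-TB cost by 70%
# on the 70% of applications that can move to lower-tier storage.
# The $15,000/TB baseline is an illustrative assumption.

baseline_cost_per_tb = 15_000.0  # assumed cost of tier 1 (15K RPM, RAID 10) per TB
fraction_movable     = 0.70      # share of application data that can be demoted
cost_reduction       = 0.70      # per-TB saving on the demoted data

total_saving = fraction_movable * cost_reduction      # 0.49, i.e. nearly 50%
saving_per_tb = baseline_cost_per_tb * total_saving   # average saving per deployed TB

print(f"overall storage cost reduction: {total_saving:.0%}")
print(f"average saving per deployed TB: ${saving_per_tb:,.0f}")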

Taken together, the opportunity to cut SAN and storage costs by 50 percent is very much within reach of many end users. As mentioned before, the key is a clear view of storage resource utilization, processes for extracting the pivotal data about what is happening in the SAN infrastructure, and the knowledge of what to do with that data.

Author: Len Rosenthal, VP of marketing, Virtual Instruments

• Date: 17th Feb 2011 • Region: US/World • Type: Article • Topic: IT continuity
