Five baseline strategies for data center business continuity

By Paul Andersen.

Business continuity planning (BCP) should cover an organization’s ability to avoid major business disruption from a disaster, while addressing the principal concerns of mitigating business risk and protecting against data loss. Business transactions delivered from the data center / centre pose major challenges to business continuity.

Data center infrastructure, and the networks that support it, play a prominent role in automating business processes and communication among the organization, its customers, partners, suppliers and regulators, ensuring the organization continues to run during a disaster. Connectivity in the data center and its networks can be degraded by bottlenecks or lost entirely due to network outages, hardware failures, human error and natural disasters.

Application delivery controllers (ADCs) protect these vital corporate assets and keep the network up and running. Below are five capabilities to look for to create a reliable application delivery infrastructure for business continuity planning:

Server load balancing
Server load balancing ensures application availability, facilitates tighter application integration, and intelligently and adaptively load balances user traffic based on a suite of application metrics and health checks. It also load balances IPS/IDS devices and composite IP-based applications, and distributes HTTP(S) traffic based on headers and SSL certificate fields.

The primary function of server load balancing is to provide availability for applications running within traditional data centers, public cloud infrastructure or a private cloud. Should a server or other networking device become over-utilized or cease to function properly, the server load balancer redistributes traffic to healthy systems based on IT-defined parameters to ensure a seamless experience for end users.
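The redistribution logic described above can be sketched in a few lines. This is a minimal, illustrative round-robin balancer with a pluggable health check; the server names and the health-check callable are assumptions for the example, not any vendor's API:

```python
import itertools

class ServerLoadBalancer:
    """Minimal round-robin load balancer with health checks (illustrative sketch)."""

    def __init__(self, servers, health_check):
        self.servers = list(servers)
        self.health_check = health_check      # callable: server -> bool
        self._cycle = itertools.cycle(self.servers)

    def pick(self):
        # Try each server at most once per request; skip any that fail the check.
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if self.health_check(server):
                return server
        raise RuntimeError("no healthy servers available")

# Example: 'app2' is marked down, so traffic is redistributed to app1 and app3.
down = {"app2"}
lb = ServerLoadBalancer(["app1", "app2", "app3"], lambda s: s not in down)
picks = [lb.pick() for _ in range(4)]
# picks == ['app1', 'app3', 'app1', 'app3'] -- the unhealthy server never receives traffic
```

A production ADC layers real probes (TCP, HTTP, application-level) and weighted or adaptive algorithms on top of this basic skip-the-unhealthy pattern, but the end-user experience is the same: requests quietly flow around failed systems.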

In today’s highly virtualized environments, modern application delivery controllers not only ensure application availability and performance, they are also capable of integrating with virtualization management software to spin up and spin down virtual resources as needed to ensure availability of resources to meet the demands of end users.

Link load balancing
Link load balancing addresses WAN reliability by directing traffic to the best performing links. Should one link become inaccessible due to a bottleneck or outage, the ADC takes that link out of service, automatically directing traffic to other functioning links.

Where server load balancing provides availability and business continuity for applications and infrastructure running within the data center, link load balancing ensures uninterrupted connectivity from the data center to the Internet and telecommunications networks.

Link load balancing may be used to send traffic over whichever link or links prove to be most cost-effective for a given time period. What’s more, link load balancing may be used to direct select user groups and applications to specific links to ensure bandwidth and availability for business critical functions.
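The link-selection behaviour described above (take failed links out of service, steer critical traffic classes to dedicated links, otherwise use the best performer) can be sketched as follows. The link names, latency figures and traffic classes are invented for illustration:

```python
def choose_link(links, traffic_class="default"):
    """Pick a WAN link for a flow (illustrative sketch, hypothetical metrics).

    links: list of dicts with 'name', 'up', 'latency_ms' and optional 'classes'.
    Links that list traffic classes are reserved for those classes; all other
    traffic falls back to the remaining healthy, unreserved links.
    """
    up = [l for l in links if l["up"]]                       # failed links out of service
    pinned = [l for l in up if traffic_class in l.get("classes", set())]
    candidates = pinned or [l for l in up if not l.get("classes")]
    if not candidates:
        raise RuntimeError(f"no usable links for class {traffic_class!r}")
    return min(candidates, key=lambda l: l["latency_ms"])["name"]

links = [
    {"name": "isp_a", "up": True,  "latency_ms": 18},
    {"name": "isp_b", "up": False, "latency_ms": 9},                       # outage
    {"name": "mpls",  "up": True,  "latency_ms": 25, "classes": {"voip"}},  # reserved link
]

choose_link(links)          # 'isp_a': isp_b is down, mpls is reserved for voice
choose_link(links, "voip")  # 'mpls': critical voice traffic pinned to its link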

Global server load balancing
Global server load balancing (GSLB) provides reliability between geographically dispersed data centers. ADCs redirect traffic to the best performing sites based on latency, site performance and user location. Global load balancing delivers high availability: if one site goes down, traffic automatically redirects to other working sites.

Where server load balancing addresses availability for data center infrastructure and link load balancing addresses availability for data center connectivity, global server load balancing is concerned with the possibility that an entire data center may be taken offline due to unforeseen circumstances and events beyond IT control.

These events may include natural disasters such as hurricanes, earthquakes and fires or downtime caused by attack or sabotage. Even if data centers are intact, they are often overloaded with increased traffic in the wake of business continuity events. In these circumstances, global server load balancing is able to distribute requests to less trafficked data centers in order to maintain business processes.
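A GSLB resolver essentially answers: of the sites that are still up, which is closest (in measured latency) to this user? A minimal sketch, with entirely hypothetical site names and latency figures:

```python
# Hypothetical per-region latency measurements (ms) from users to each site.
SITES = {
    "us-east": {"up": True,  "latency": {"na": 20,  "eu": 90,  "apac": 180}},
    "eu-west": {"up": True,  "latency": {"na": 95,  "eu": 15,  "apac": 160}},
    "apac-1":  {"up": False, "latency": {"na": 170, "eu": 150, "apac": 25}},  # site offline
}

def resolve(user_region, sites=SITES):
    """Return the healthy site with the lowest latency for the user's region."""
    healthy = {name: s for name, s in sites.items() if s["up"]}
    if not healthy:
        raise RuntimeError("no healthy sites")
    return min(healthy, key=lambda name: healthy[name]["latency"][user_region])

resolve("na")    # 'us-east': nearest healthy site
resolve("apac")  # 'eu-west': apac-1 is down, so users fail over to the next-best site
```

The second call shows the business continuity property in action: with the Asia-Pacific site offline, its users are transparently directed to the best surviving site rather than receiving errors.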

SSL acceleration
SSL transactions consume server CPU cycles because packets must be repeatedly encrypted and decrypted. ADCs offload SSL processing from servers, allowing them to focus on serving applications and content to end users, which improves server availability and response times.

The National Institute of Standards and Technology (NIST) has mandated that businesses transition from 1024-bit SSL encryption to the more secure 2048-bit standard. Tests indicate that 2048-bit SSL encryption is roughly five times (500 percent) as resource intensive as 1024-bit encryption. The result is that existing infrastructure will likely become bogged down supporting the new standard, impacting availability and the user experience for critical business processes.

Modern application delivery controllers support high-performance hardware acceleration for 2048-bit SSL encryption, often at prices equivalent to previous-generation 1024-bit encryption. Whether secure applications are running on dedicated servers in a traditional data center environment or on virtualized infrastructure in a public or private cloud, it is advantageous to offload process-intensive 2048-bit SSL encryption to dedicated hardware to provide the highest level of application security, availability and performance.
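A back-of-the-envelope capacity model makes the impact concrete. The figures below (10,000 handshakes per second, a 5x cost multiplier) are illustrative assumptions taken from the rough ratio cited above, not benchmarks:

```python
def handshake_capacity(baseline_tps, cost_multiplier):
    """Estimate sustainable TLS handshakes/second after moving to a costlier key size.

    baseline_tps: handshakes/second the server sustains at the old key size
    cost_multiplier: relative CPU cost of the new key size (e.g. 5 for ~5x)
    All figures here are illustrative assumptions, not measured benchmarks.
    """
    return baseline_tps / cost_multiplier

# A server handling 10,000 handshakes/s with 1024-bit keys drops to about
# 2,000/s at 2048 bits if each handshake is ~5x as expensive.
print(handshake_capacity(10_000, 5))  # 2000.0
```

This is precisely the headroom that offloading to an ADC with dedicated crypto hardware restores: the application servers never perform the expensive key operations at all.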

TCP acceleration
TCP acceleration offloads connections and sessions in several ways to optimize data flows and reduce the impact on servers, preventing them from being overloaded.

Mobile traffic is increasingly outpacing traditional network traffic. Mobile traffic also uses far more connections and opens and closes connections far more often than traditional network traffic. Over time, legacy data center equipment will be unable to keep pace and application availability will suffer.

TCP acceleration supported on modern application delivery controllers offloads connections from servers, handles a far greater number of concurrent connections and can accept far more new connections per second. Modern application delivery controllers can also multiplex thousands of client-side connections into a much smaller number of larger server-side connections, greatly increasing efficiency and further improving the availability and performance of enterprise applications and cloud services.
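The multiplexing idea can be sketched as a small connection pool: many short-lived client requests are funnelled over a handful of persistent server-side connections that are reused rather than torn down. This is an illustrative simulation, not a network implementation:

```python
class ConnectionMultiplexer:
    """Illustrative sketch of TCP multiplexing: many client requests share a
    small pool of persistent server-side connections, as an ADC does."""

    def __init__(self, pool_size):
        self.pool_size = pool_size
        self.opened = 0        # server-side connections actually created
        self._free = []        # idle connections kept open for reuse

    def _acquire(self):
        if self._free:
            return self._free.pop()
        if self.opened < self.pool_size:
            self.opened += 1
            return f"conn-{self.opened}"
        raise RuntimeError("pool exhausted; a real ADC would queue the request")

    def handle(self, request):
        conn = self._acquire()
        try:
            return f"{request} via {conn}"   # stand-in for forwarding the request
        finally:
            self._free.append(conn)          # connection stays open for the next request

mux = ConnectionMultiplexer(pool_size=2)
responses = [mux.handle(f"req-{i}") for i in range(1000)]
# 1,000 sequential client requests were served over a single reused server
# connection, because each request released its connection before the next arrived.
```

The server side sees one long-lived connection instead of a thousand setup/teardown cycles, which is exactly why multiplexing protects backend capacity as connection-hungry mobile traffic grows.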

From ensuring the availability, performance and security of applications, servers and other devices within the data center, to ensuring reliable connectivity to global networks, to ensuring the availability of business processes under worst-case scenarios, application delivery controllers can and should play a significant role in an organization’s business continuity plan. Whether deployed in traditional data centers or in public or private clouds, modern application delivery controllers provide the features and performance needed to ensure business continuity while at the same time improving efficiencies, supporting next-generation standards and preparing organizations for growing trends in the areas of mobile and cloud computing.

About the Author
Paul Andersen is the Director of Marketing for Array Networks and is responsible for communications and product marketing for Array’s line of application, desktop and cloud service delivery solutions. Andersen has fifteen years of high-technology marketing experience in the areas of networking, security and application delivery. Prior to Array, he held product marketing, partner marketing and marketing communications roles at Cisco Systems, Sun Microsystems and Tasman Networks. A graduate of San Jose State University, he holds a bachelor’s degree in Marketing with a minor in Technical Communications.

• Date: 11th July 2013 • Region: US/World • Type: Article • Topic: Data centers
