Five baseline strategies for data center business continuity
By Paul Andersen.
Business continuity planning (BCP) should cover an organization’s ability to avoid major business disruption from a disaster while addressing the principal concerns of mitigating business risk and preventing data loss. Business transactions delivered from the data center pose major challenges to business continuity.
Data center infrastructure and the networks that support it play a prominent role in automating business processes and communication among the organization, its customers, partners, suppliers and regulators, helping ensure the organization continues to run during a disaster. That connectivity can be adversely affected by bottlenecks or complete failure due to network outages, hardware failures, human error and natural disasters.
Application delivery controllers (ADCs) protect these vital corporate assets and keep the network up and running. Below are five capabilities to look for to create a reliable application delivery infrastructure for business continuity planning:
Server load balancing
The primary function of server load balancing is to provide availability for applications running within traditional data centers, public cloud infrastructure or a private cloud. Should a server or other networking device become over-utilized or cease to function properly, the server load balancer redistributes traffic to healthy systems based on IT-defined parameters to ensure a seamless experience for end users.
In today’s highly virtualized environments, modern application delivery controllers not only ensure application availability and performance, they are also capable of integrating with virtualization management software to spin up and spin down virtual resources as needed to ensure availability of resources to meet the demands of end users.
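The health-check-and-redistribute behavior described above can be sketched in a few lines. This is a minimal illustration only; the server names and the `ServerPool` class are hypothetical, not any vendor's API.

```python
import random

class ServerPool:
    """Toy server load balancer: traffic goes only to healthy servers."""

    def __init__(self, servers):
        # Map each server to a health flag maintained by periodic checks.
        self.health = {s: True for s in servers}

    def mark_down(self, server):
        # A failed health check removes the server from rotation.
        self.health[server] = False

    def pick(self):
        # Redistribute traffic across whichever servers remain healthy.
        healthy = [s for s, ok in self.health.items() if ok]
        if not healthy:
            raise RuntimeError("no healthy servers available")
        return random.choice(healthy)

pool = ServerPool(["app1.example.com", "app2.example.com", "app3.example.com"])
pool.mark_down("app2.example.com")
# End users are never routed to the failed server.
assert all(pool.pick() != "app2.example.com" for _ in range(100))
```

A production ADC layers many scheduling policies (least connections, weighted round robin) on top of this basic healthy-set selection.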
Link load balancing
Where server load balancing provides availability and business continuity for applications and infrastructure running within the data center, link load balancing ensures uninterrupted connectivity from the data center to the Internet and telecommunications networks.
Link load balancing may be used to send traffic over whichever link or links prove to be most cost-effective for a given time period. What’s more, link load balancing may be used to direct select user groups and applications to specific links to ensure bandwidth and availability for business critical functions.
Global server load balancing
Where server load balancing addresses availability for data center infrastructure and link load balancing addresses availability for data center connectivity, global server load balancing is concerned with the possibility that an entire data center may be taken offline due to unforeseen circumstances and events beyond IT control.
These events may include natural disasters such as hurricanes, earthquakes and fires or downtime caused by attack or sabotage. Even if data centers are intact, they are often overloaded with increased traffic in the wake of business continuity events. In these circumstances, global server load balancing is able to distribute requests to less trafficked data centers in order to maintain business processes.
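At its core, global server load balancing is a DNS-style decision: answer each lookup with the address of a data center that is both up and least trafficked. The sketch below illustrates that logic; the site names, load figures, and addresses are made-up examples.

```python
# Hypothetical state of three data centers, as seen by a GSLB device.
SITES = {
    "us-east": {"up": False, "load": 0.2, "vip": "203.0.113.10"},  # offline after a storm
    "us-west": {"up": True,  "load": 0.9, "vip": "203.0.113.20"},  # overloaded by failover
    "eu":      {"up": True,  "load": 0.3, "vip": "203.0.113.30"},
}

def resolve(fqdn):
    # Drop downed sites, then prefer the least-trafficked survivor.
    candidates = {name: s for name, s in SITES.items() if s["up"]}
    if not candidates:
        raise RuntimeError("all data centers offline")
    best = min(candidates, key=lambda name: candidates[name]["load"])
    return candidates[best]["vip"]

assert resolve("app.example.com") == "203.0.113.30"  # eu: up and least loaded
```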
SSL acceleration
The National Institute of Standards and Technology (NIST) has recommended that businesses transition from 1024-bit SSL keys to the more secure 2048-bit standard. Tests indicate that 2048-bit SSL processing is roughly five times more resource intensive than 1024-bit processing. The result is that existing infrastructure will likely become bogged down supporting the new standard, impacting availability and the user experience for critical business processes.
Modern application delivery controllers support high-performance hardware acceleration for 2048-bit SSL encryption, often at prices equivalent to previous-generation 1024-bit encryption. Whether secure applications are running on dedicated servers in a traditional data center environment or on virtualized infrastructure in a public or private cloud, it is advantageous to offload process-intensive 2048-bit SSL encryption to dedicated hardware to provide the highest level of application security, availability and performance.
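The cost jump comes from the mathematics underlying the RSA private-key operation, a modular exponentiation whose cost grows steeply with key size. The rough demonstration below (not a rigorous benchmark; the round count and random operands are arbitrary) times that operation at both sizes.

```python
import secrets
import time

def time_modexp(bits, rounds=20):
    """Time `rounds` modular exponentiations with operands of the given size."""
    n = secrets.randbits(bits) | (1 << (bits - 1)) | 1  # odd modulus of the right bit length
    d = secrets.randbits(bits)                          # exponent the size of a private key
    m = secrets.randbits(bits - 1)                      # message smaller than the modulus
    start = time.perf_counter()
    for _ in range(rounds):
        pow(m, d, n)
    return time.perf_counter() - start

t1024 = time_modexp(1024)
t2048 = time_modexp(2048)
# Doubling the key size makes each operation several times slower,
# which is why offloading to dedicated hardware pays off.
assert t2048 > t1024
```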
TCP acceleration
Mobile traffic is increasingly outpacing traditional network traffic. It also uses far more connections, and opens and closes them far more often, than traditional network traffic. Over time, legacy data center equipment will be unable to keep pace and application availability will suffer.
TCP acceleration supported on modern application delivery controllers offloads connections from servers, handles a far greater number of concurrent connections and sustains far more new connections every second. Modern application delivery controllers can also multiplex thousands of client-side connections into a much smaller number of larger server-side connections, greatly increasing efficiency and further improving the availability and performance of enterprise applications and cloud services.
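The multiplexing idea can be sketched simply: many short-lived client requests are funneled over a small, persistent pool of server-side connections, so the backend never sees the client-side churn. The class name, pool size, and connection labels below are illustrative assumptions.

```python
from collections import deque

class MultiplexingProxy:
    """Toy connection multiplexer: N client requests share a tiny server pool."""

    def __init__(self, pool_size=2):
        # A small fixed pool of long-lived server-side connections.
        self.pool = deque(f"srv-conn-{i}" for i in range(pool_size))
        self.requests_per_conn = {c: 0 for c in self.pool}

    def handle_client_request(self, payload):
        # Borrow a pooled connection, send the request, return it to the pool.
        conn = self.pool.popleft()
        self.requests_per_conn[conn] += 1
        self.pool.append(conn)
        return f"{conn} handled {payload}"

proxy = MultiplexingProxy(pool_size=2)
for i in range(1000):                       # a thousand client requests...
    proxy.handle_client_request(f"req-{i}")
# ...served over only two server-side connections.
assert sum(proxy.requests_per_conn.values()) == 1000
assert len(proxy.requests_per_conn) == 2
```

Real ADCs do this at the TCP/HTTP layer with keep-alive and pipelining, but the accounting is the same: server-side connection count stays flat while client demand grows.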
From ensuring the availability, performance and security of applications, servers and other devices within the data center, to ensuring reliable connectivity to global networks, to ensuring the availability of business processes under worst-case scenarios, application delivery controllers can and should play a significant role in an organization’s business continuity plan. Whether deployed in traditional data centers or in public or private clouds, modern application delivery controllers provide the features and performance needed to ensure business continuity while at the same time improving efficiencies, supporting next-generation standards and preparing organizations for growing trends in the areas of mobile and cloud computing.
Date: 11th July 2013 • Type: Article • Topic: Data centers