
How to effectively test network performance and resilience

Not all test methods are suitable for ensuring the availability of services and applications. Areg Alimian explains why this is the case and looks at the best ways to use testing to ensure that the rollout of new applications and services is successful.

Rapid time-to-market is increasingly important in the rollout of new applications and services, or to put it in simpler terms: everyone wants to be first. So new architectures built on virtual environments and hybrid clouds are planned and implemented, only for organizations to discover that customers are complaining about service quality. Waiting for customers and users to complain is one of three basic ways to learn about the performance and resilience of your network, but it is certainly not the most promising. The second option is waiting for a hacker attack to paralyze your network, and that’s not popular either! The third option is called ‘testing’.

However, not all test methods are suitable for ensuring the availability of services and applications. Trying to validate performance and security without being realistic about application loads and attack techniques quickly leads to a false sense of security. Only tests that reflect real-world expected load conditions, and that go beyond what you might expect, will give reliable information about how the network and security infrastructure behaves.
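To make that concrete, here is a minimal sketch in Python of a load ramp that steps traffic from half the expected peak to double it and reports where response times first breach a service-level target. The system under test here is only a toy queueing model, and every figure in it (expected peak, SLA, capacity) is an illustrative assumption; in practice the same ramp would be driven by a traffic generator against the real infrastructure.

import random

EXPECTED_PEAK_RPS = 10_000   # assumed expected peak load (requests per second)
SLA_MS = 250                 # assumed response-time target in milliseconds
CAPACITY_RPS = 14_000        # assumed real capacity of the system under test

def simulated_response_time(offered_rps: float) -> float:
    """Toy stand-in for a system under test: latency climbs steeply
    as offered load approaches capacity (simple queueing behaviour)."""
    utilisation = min(offered_rps / CAPACITY_RPS, 0.999)
    base_ms = 40.0
    return base_ms / (1.0 - utilisation) + random.uniform(-2.0, 2.0)

# Step the offered load from 50% to 200% of the expected peak.
for step in range(50, 201, 10):
    offered = EXPECTED_PEAK_RPS * step / 100
    latency = simulated_response_time(offered)
    status = "OK" if latency <= SLA_MS else "SLA BREACH"
    print(f"{offered:8.0f} rps -> {latency:7.1f} ms  {status}")
    if latency > SLA_MS:
        print(f"Response times break the SLA at {step}% of the expected peak")
        break

A test that stopped at 100 percent of the expected peak would report a pass here; only by continuing the ramp does the breaking point, and therefore the real headroom, become visible.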

Start at the beginning

It’s forecast that by 2020 there will be about 50 billion devices connected to the Internet, 10 times more than there are today (1). Many of these devices run complex applications that need to communicate with each other around the clock. This not only generates more data, but also places greater demands on the performance and availability of networks. In particular, HD video and social networking, combined with big data and the Internet of Things, have a virtually unlimited hunger for bandwidth.

Attacks are also getting bigger. In a report published in January 2016, the European Union Agency for Network and Information Security (ENISA) stated that the number of DDoS attacks with bandwidths over 100 Gbps doubled in 2015 and is expected to continue to increase.

Meeting these growing demands on infrastructure requires a massive upgrade to the data center, ranging from migrating top-of-rack-to-server connectivity from 10GbE to 25GbE and 50GbE, to enhancing the core network with 100GbE technology. The expected result of this type of upgrade is significantly higher data rates with approximately the same footprint and power consumption, as well as higher server density and a lower cost per unit of bandwidth. But what guarantee do enterprises have that these expectations will be met under real-world conditions?

In addition, the unique characteristics of network devices, storage and security systems, coupled with the virtualization of resources and the integration of cloud computing and SaaS, can significantly slow the introduction and delivery of new services. Ensuring the throughput needed to deliver new services anytime, anywhere requires infrastructure tests that go above and beyond standard performance tests of individual components.

Customers and internal stakeholders do not care how many packets a web application firewall can inspect per second. They only care about the application response time, which depends on a number of factors. These include the individual systems in the network and how they interact, application-specific protocols and traffic patterns, the location and time of day, and the security architecture. So it’s imperative to test the entire delivery path of an application, end to end, under realistic conditions. This means using a mix of applications and traffic workloads that recreates everything down to the lowest-layer protocols. Simple, standardized tests such as Iometer are not enough in complex environments.
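As a rough illustration, the sketch below measures what stakeholders actually experience: it fires a weighted mix of application requests concurrently and reports end-to-end response-time percentiles. The endpoint URLs, traffic shares and request counts are hypothetical placeholders, and a full test platform would also recreate the lower-layer protocol behaviour, which this fragment does not attempt.

import random
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import quantiles

# Hypothetical application mix: (name, URL, share of the total traffic).
WORKLOAD_MIX = [
    ("web login",   "https://app.example.test/login",       0.5),
    ("search API",  "https://app.example.test/api/search",  0.3),
    ("file upload", "https://app.example.test/upload",      0.2),
]
TOTAL_REQUESTS = 1_000
CONCURRENCY = 50

def timed_request(url: str) -> float:
    """Issue one request and return the end-to-end response time in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return (time.perf_counter() - start) * 1000.0

def pick_url() -> str:
    """Choose an endpoint according to the configured traffic shares."""
    urls = [url for _, url, _ in WORKLOAD_MIX]
    weights = [share for _, _, share in WORKLOAD_MIX]
    return random.choices(urls, weights=weights, k=1)[0]

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(lambda _: timed_request(pick_url()),
                              range(TOTAL_REQUESTS)))

cuts = quantiles(latencies, n=100)  # percentile cut points
print(f"p50={cuts[49]:.0f} ms  p95={cuts[94]:.0f} ms  p99={cuts[98]:.0f} ms")

Reporting percentiles rather than averages matters here: a healthy average can hide the tail of slow responses that customers actually notice.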

Testing under real conditions

Enterprise data centers need a test environment that reflects their real load and actual traffic, including all applications and protocols in use, such as Facebook, Skype, Amazon EC2/S3, SQL, SAP, Oracle, HTTP or IPsec. It’s meaningless, and dangerous, to test a data center infrastructure with 200 Gbps of data when the live network experiences peak loads of over 500 Gbps. Testing should also cover illegitimate traffic, including the increasingly frequent DDoS and synchronized attacks on multithreaded systems. Since attack patterns are constantly changing, timely and continuous tests are crucial. One way to ensure the consistency and timeliness of testing is to leverage an external service that can analyze current attack patterns and update the test environment continuously and automatically.
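One practical way to start is to write the test plan down as a traffic profile before any packets are generated. The following sketch scales a mixed application profile to the observed peak plus headroom and adds a background share of attack traffic; all of the figures and application shares are illustrative assumptions, not recommendations.

# Hypothetical traffic profile: each application's share of the legitimate load.
APP_MIX = {
    "HTTP/HTTPS web":        0.35,
    "video streaming":       0.25,
    "Amazon EC2/S3 storage": 0.15,
    "SQL / SAP / Oracle":    0.15,
    "IPsec VPN":             0.10,
}

OBSERVED_PEAK_GBPS = 500   # peak load measured on the live network (assumed figure)
HEADROOM = 0.30            # test 30% beyond the observed peak
DDOS_SHARE = 0.20          # add attack traffic worth 20% of the legitimate load

assert abs(sum(APP_MIX.values()) - 1.0) < 1e-9, "application shares must sum to 1"

legit_total = OBSERVED_PEAK_GBPS * (1 + HEADROOM)
attack_total = legit_total * DDOS_SHARE

print(f"Legitimate load target: {legit_total:.0f} Gbps")
for app, share in APP_MIX.items():
    print(f"  {app:<24} {legit_total * share:6.1f} Gbps")
print(f"Background attack traffic: {attack_total:.0f} Gbps (e.g. volumetric DDoS)")

A profile like this makes the gap between what is being tested and what the live network actually carries explicit, which is exactly the gap that a 200 Gbps test of a 500 Gbps network leaves open.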

Testing complex storage workloads can only be achieved with real traffic. Cache utilization, deduplication, compression, as well as backup and recovery, must be tested with all protocols used (SMB 2.1/3.0, NFS, CIFS, CDMI or iSCSI) and tuned where necessary to ensure compliance with defined service levels.
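A detail that is easy to overlook when generating ‘real’ storage traffic is the payload itself, because deduplication and compression engines behave very differently depending on how repetitive and how compressible the data is. The sketch below builds a test payload with a chosen duplicate-block share and a chosen compressibility and then checks the achieved ratios with zlib; the block size and ratios are illustrative assumptions, and a full test would push such payloads over the protocols actually in use.

import os
import random
import zlib

BLOCK_SIZE = 4096        # assumed block size in bytes
NUM_BLOCKS = 2_000
DUPLICATE_SHARE = 0.4    # roughly 40% of blocks repeat earlier data (dedup-friendly)
ZERO_FILL_SHARE = 0.5    # half of each unique block is zeros (compressible)

def make_unique_block() -> bytes:
    """Partly random, partly zero-filled block: compressible but not trivial."""
    random_bytes = int(BLOCK_SIZE * (1 - ZERO_FILL_SHARE))
    return os.urandom(random_bytes) + b"\x00" * (BLOCK_SIZE - random_bytes)

blocks, pool = [], []
for _ in range(NUM_BLOCKS):
    if pool and random.random() < DUPLICATE_SHARE:
        blocks.append(random.choice(pool))    # re-use an earlier block
    else:
        block = make_unique_block()
        pool.append(block)
        blocks.append(block)

payload = b"".join(blocks)
unique_bytes = len(set(blocks)) * BLOCK_SIZE
compressed_bytes = len(zlib.compress(payload))

print(f"Logical size:      {len(payload) / 1e6:6.1f} MB")
print(f"Dedup ratio:       {len(payload) / unique_bytes:5.2f}:1")
print(f"Compression ratio: {len(payload) / compressed_bytes:5.2f}:1")

Purely random data makes a deduplicating array look slow, while all-zero data makes it look implausibly fast; controlling these ratios is what keeps a storage test honest.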

While the need for stringent testing is obvious for a new data center, it’s equally important when consolidating or integrating hybrid clouds.  This is because each new application, and even updates and patches of existing applications, can significantly alter the performance and response times of the network.

DIY or TaaS?

In addition to developing and testing the network infrastructure, it’s equally important to develop a qualified test team. Enterprises do not typically hire dedicated test engineers, and network and security architects are not always proficient in designing and executing comprehensive tests to ensure their applications and IT systems can handle heavy loads and sophisticated attacks.

As such, external TaaS (testing as a service) offerings can be a useful addition to an in-house solution, especially for larger projects. An external service provider can help determine which systems are the best fit, whether within an existing environment or ahead of the rollout of a demanding new application such as online gaming.

So the choices are simple: wait for customer complaints to learn about the performance and resilience of your network; wait for a hacker attack to paralyze your network; or put your network and applications to the ‘real’ test with solutions and offerings that replicate your specific load requirements. It’s a no-brainer...

The author

Areg Alimian is senior director, Solutions Marketing, Ixia.

Reference

(1) http://www.smartgridnews.com/story/50-billion-connected-iot-devices-2020/2015-04-21

