Please note that this is a page from a previous version of Continuity Central and is no longer being updated.



Flying blind at 100G

By Tim Nichols.

A recent survey conducted by Endace of 150 US telcos, CDNs and educational establishments indicated that nearly 70 percent have either already deployed, or are planning to deploy, 40G or 100G uplinks inside their data centers within the next 12 months.

These industries have historically been at the leading edge of the bandwidth curve, and it should be no surprise to see them pushing the envelope again. But 100G isn’t just for telecoms. For any large organization planning to build out a new data center, the argument for deploying 100G or 40G is becoming increasingly difficult to dismiss, even with the relatively high price per port that infrastructure vendors are charging.

The economics of 100G networking are compelling when contrasted with 10 Gbps technologies. Over optical fibre, a 10 Gbps link consumes a whole glass strand and one connector. 100G delivered using SR-type optics (ten bonded 10 Gbps lanes) still consumes ten strands, but needs only a single connector, making it much cheaper to terminate. LR/ER-type 100G technology is more efficient still, consuming one strand of glass and one connector: ten times fewer strands again.
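The fiber-plant arithmetic behind that argument can be made concrete with a small sketch. The strand and connector counts below follow the figures quoted above (they are illustrative, not vendor pricing data):

```python
# Illustrative fiber-plant arithmetic for 100 Gbps of capacity,
# using the per-link strand/connector figures quoted in the article.

def fiber_cost(strands_per_link: int, connectors_per_link: int,
               links_needed: int) -> tuple:
    """Return (total strands, total connectors) for a given link type."""
    return (strands_per_link * links_needed,
            connectors_per_link * links_needed)

# 100 Gbps of capacity, three ways:
ten_gig_x10 = fiber_cost(strands_per_link=1,  connectors_per_link=1, links_needed=10)
sr_100g     = fiber_cost(strands_per_link=10, connectors_per_link=1, links_needed=1)
lr_100g     = fiber_cost(strands_per_link=1,  connectors_per_link=1, links_needed=1)

print(ten_gig_x10)  # (10, 10) -- ten 10G links: 10 strands, 10 connectors
print(sr_100g)      # (10, 1)  -- SR-type 100G: 10 strands, 1 connector
print(lr_100g)      # (1, 1)   -- LR/ER-type 100G: 1 strand, 1 connector
```

The connector savings dominate in the SR case; LR/ER reclaims the strand count as well.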

Unfortunately, deploying 100G and living with 100G are two different things. From an IT ops perspective, 100G network segments are just like any other uplink: they need to be monitored, analyzed and recorded so that issues can be detected and investigated before end users start to complain. But that’s where the wheels start to come off the 100G train.

Unlike 10 Gbps network segments, which can be matched to a 10 Gbps monitoring port on an IDS or analytics platform, 100G segments have no equivalent: there is no such thing as a 100G monitoring system. The dirty truth here is that the monitoring industry has been caught napping, and as a result anyone operating a 100G network is officially flying blind.

It’s not the first time this situation has arisen. Back in 2008, when 10 Gbps arrived with a vengeance, most organizations had only 1 Gbps monitoring infrastructure, and IT ops teams faced a similar problem. The answer came in the form of the layer 1 matrix switch, a category Gartner tidily named the Network Packet Broker (NPB). NPBs solved the problem almost overnight by ingesting 10 Gbps of traffic and load balancing it out over multiple 1 Gbps ports, which could be connected to existing 1 Gbps-capable infrastructure. Over the last five years this market has evolved into a $250m industry, helping organizations access, filter, load balance and duplicate their 10 Gbps and 1 Gbps network traffic.
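The load-balancing idea described above can be sketched in a few lines. A key property for monitoring tools is flow affinity: every packet of a given flow must land on the same lower-speed output port, which NPBs typically achieve by hashing the flow 5-tuple. This is a minimal illustration of that principle, not a description of any particular vendor's implementation (real NPBs do this in hardware at line rate):

```python
# Minimal sketch of flow-hash load balancing as used by network packet
# brokers: hash the 5-tuple so all packets of a flow share one output.
import zlib

def output_port(src_ip: str, dst_ip: str, proto: int,
                src_port: int, dst_port: int, n_ports: int = 10) -> int:
    """Map a flow to one of n_ports lower-speed monitoring outputs."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % n_ports

# Every packet of the same flow deterministically hits the same port,
# so a per-flow analyzer attached to that port sees complete flows.
p1 = output_port("10.0.0.1", "10.0.0.2", 6, 443, 51515)
p2 = output_port("10.0.0.1", "10.0.0.2", 6, 443, 51515)
assert p1 == p2
```

The trade-off is that a single elephant flow cannot be split across outputs, which is one reason balancing a 100G feed over 10 Gbps tools is harder than the 2008-era 10G-to-1G case.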

So, why hasn’t history repeated itself in the 100G space? It’s a good question, and there are a number of possible reasons, the most probable being the limitations of the merchant silicon that underpins most of these systems and the challenge of handling 100G of traffic without losing any of it.

Author: Tim Nichols, vice president of Marketing, Endace. Endace is one of the few vendors to have stepped up to the 100G challenge, with its EndaceAccess product line launched in late 2012. www.endace.com

Endace Europe Ltd is exhibiting at Infosecurity Europe 2013, held on 23rd–25th April 2013 at Earl’s Court, London. The event provides a free education programme plus exhibitors showcasing new and emerging technologies and offering practical and professional expertise. www.infosec.co.uk

• Date: 7th March 2013 • World • Type: Article • Topic: Data centers
