
Don’t wait to automate…

How security policy orchestration software can help reduce downtime in hybrid environments.

By Reuven Harrison.

In our global, 24/7, online world, the individuals and organizations we deal with increasingly expect – and often rely on – our systems and applications being available at all times. When disaster strikes and downtime hits (whether through error, misfortune or malice), it can damage both an organization’s reputation and its bottom line. The companies we trust to store and handle valuable information securely, or to provide access to applications and services, must do all they can to minimise the risk of breaches and downtime.

While stories about hackers and viruses breaking into (or bringing down) systems tend to prompt the biggest headlines, those of us in IT know that more downtime is due to network configuration errors than to security breaches. Because today’s networks are so complicated, and the pace and volume of changes is so great, it’s not uncommon for rushed-off-their-feet IT staff to make occasional configuration errors – and that could mean downtime for an application, service or even an entire business.

Frequent manual changes also increase the risk of compromising security and compliance. When you’re continually providing people with connectivity to particular applications and services in a complicated network set-up, a change to one part of the system can sometimes cause unforeseen problems elsewhere. And the situation is only getting worse. Networks are growing ever more complex and businesses are demanding ever more changes to be completed in ever less time. Making those changes while simultaneously ensuring networks remain error-free, secure and compliant isn't merely tricky: for many organizations it is becoming simply impossible.

Clearly, greater automation is the only solution. Systems and networks need to be monitored continually and holistically to ensure security and compliance aren’t compromised in any part of the set-up when changes are made.  To this end, the whole push towards virtualized, software-defined data centres and networks offers the tantalising promise that organizations will be able to operate and manage a dispersed environment (comprising multiple, segmented networks) centrally, through software – and ultimately without any manual intervention at all. The system will monitor all changes, understand the impact of those changes on every other part of the system, and will alert you to (or prevent you from doing) anything that compromises security or compliance policies.

The market is heading inexorably in this direction. Virtualization market-leader VMware, for instance, is touting its VMware NSX network virtualization and security platform as a way to secure software-defined data centres without having to set up multiple firewalls and internal security checkpoints. NSX includes a hypervisor-level firewall that examines all the traffic flowing through your dispersed, virtualized networks and can be managed centrally, from anywhere. Other major virtualization suppliers offer similar solutions.

While this is great when you have a fully virtualized environment, the problem is that moving to a software-defined data centre is not a rip-and-replace exercise, but a gradual transition. The reality is most organizations today are operating in a hybrid environment. Some things are running on virtualized networks (whether in an organization’s own data centre or hosted in the cloud somewhere) and others on legacy physical networks. A solution like NSX is only capable of managing the virtualized parts. Even when an organization completes its virtualization journey and has a fully software-defined data centre, this is still running on top of a physical environment. There are firewalls underneath connecting your data centre to the Internet, even if your virtualized applications can’t see them.

What IT security and continuity professionals need in this context, then, is a simple, automated way to monitor and manage their systems and networks as a whole, both virtual and physical, based on consistent policies that span the entire environment rather than only the virtualized bits. As organizations continue to gain ever more operational efficiency and agility through virtualization and automation, they also need to see these benefits in the legacy parts of their data centre. An organization may be able to set up a virtual machine in seconds, but if it needs to connect to something on the physical network (such as a legacy database) that has to be manually configured, this will be a bottleneck to realising the benefits of increased agility or efficiency.

Fortunately, the IT industry is well aware of the issue and we are now seeing the emergence of security policy orchestration tools that offer centralised and automated management of environments comprising both virtual and physical networks. Such systems can guarantee that consistent, organization-wide security and compliance policies are adhered to across the entirety of the IT estate, monitoring all changes and understanding their effects on every other part of the system, physical or virtual.

They can ensure all firewalls (again, irrespective of whether they’re physical or virtual) are always correctly configured in line with an organization’s overarching policies, allowing firms to realise the full operational and agility benefits of a software-defined environment without having to wait until they have completed their lengthy transition to fully software-defined systems and networks.

So how do you go about implementing such a system?  First you need to define a set of policies which are distinct from the technologies to which they will apply. For example, if your organization needs to be PCI-compliant, that requirement applies to your environment as a whole, not just to specific systems. It doesn’t matter if your networks are physical or virtual, or whether they’re hosted in-house or in the cloud, they still need to be compliant. In other words, you need to step back and abstract the policies that need enforcing from the technology doing the enforcement.
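To make the abstraction concrete, here is a minimal sketch of a policy defined independently of any enforcement technology. All class, zone and rule names are illustrative assumptions, not any vendor's real API: the point is that the policy object knows nothing about whether a physical firewall, a hypervisor firewall or a cloud ACL will ultimately enforce it.

```python
# Hypothetical sketch: a compliance policy expressed in technology-neutral
# terms (logical zones and ports), so the same rule can later be enforced
# on physical firewalls, virtual firewalls or cloud security groups.
from dataclasses import dataclass

@dataclass(frozen=True)
class SegmentationPolicy:
    name: str
    source_zone: str          # a logical zone, not a device, VLAN or subnet
    dest_zone: str
    allowed_ports: frozenset  # the only traffic permitted between the zones

# Example: isolate the (hypothetical) cardholder-data zone per PCI-style rules.
PCI_CARDHOLDER = SegmentationPolicy(
    name="pci-cardholder-isolation",
    source_zone="corporate-lan",
    dest_zone="cardholder-data",
    allowed_ports=frozenset({443}),
)

def is_change_allowed(policy: SegmentationPolicy,
                      src_zone: str, dst_zone: str, port: int) -> bool:
    """Check a proposed connectivity change against the abstract policy."""
    if src_zone == policy.source_zone and dst_zone == policy.dest_zone:
        return port in policy.allowed_ports
    return True  # this policy does not govern other zone pairs

# SSH into the cardholder zone would be rejected; HTTPS would pass.
print(is_change_allowed(PCI_CARDHOLDER, "corporate-lan", "cardholder-data", 22))
```

Because the check operates on zones rather than devices, the same `is_change_allowed` call can gate changes to in-house networks, hosted virtual networks or cloud environments alike.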

Just as networking professionals talk about the management plane, the control plane, the data plane and so on, today they also need a security policy orchestration plane that spans virtual, physical and cloud networks (which, of course, could be located anywhere) – and has the ability to hook into all kinds of technologies and hide their complexities behind a single, unified interface for control and automation.
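One common way to build such a plane is the adapter pattern: each enforcement technology gets a thin adapter that translates its rules into a common format, and a single orchestration object works only with that format. The sketch below assumes invented adapter and method names purely for illustration:

```python
# Illustrative sketch of an "orchestration plane": one interface in front of
# several enforcement technologies. Names are assumptions, not a real product.
from abc import ABC, abstractmethod

class FirewallAdapter(ABC):
    """Hides one technology's complexity behind a common rule format."""
    @abstractmethod
    def current_rules(self) -> list[tuple[str, str, int]]:
        """Return (src_zone, dst_zone, port) tuples in a unified format."""

class PhysicalFirewallAdapter(FirewallAdapter):
    def __init__(self, rules):
        self._rules = rules  # in reality, fetched from the device's API/CLI
    def current_rules(self):
        return list(self._rules)

class VirtualFirewallAdapter(FirewallAdapter):
    def __init__(self, rules):
        self._rules = rules  # in reality, fetched from the hypervisor platform
    def current_rules(self):
        return list(self._rules)

class OrchestrationPlane:
    """Single point of visibility and control spanning all adapters."""
    def __init__(self, adapters):
        self.adapters = adapters
    def all_rules(self):
        return [r for a in self.adapters for r in a.current_rules()]

plane = OrchestrationPlane([
    PhysicalFirewallAdapter([("dmz", "internet", 443)]),
    VirtualFirewallAdapter([("app-tier", "db-tier", 5432)]),
])
print(len(plane.all_rules()))  # rules from both worlds, one interface
```

Adding a new technology (a cloud provider's security groups, say) then means writing one more adapter, not changing the policies or the plane itself.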

When it comes to deploying security policy orchestration, it is beneficial to first connect it to your infrastructure in order to analyse passively how it all works together. In this phase, the system can give you valuable insights and alert you to potential issues – such as when something has been misconfigured, or when a change to one part of a system puts a chink in your armour elsewhere.
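The passive phase can be pictured as a simple audit loop: compare what is actually configured against policy and alert, without touching anything. This is a hedged sketch under assumed rule and zone names, not a description of any specific product:

```python
# Hedged sketch of the passive analysis phase: the system only observes and
# alerts; it makes no changes itself. All rule names are illustrative.

# Traffic the (hypothetical) policy forbids between zones, e.g. no SSH
# from the corporate LAN into the cardholder-data zone.
POLICY_DENIED = {("corporate-lan", "cardholder-data", 22)}

def audit(observed_rules):
    """Return the observed rules that violate policy -- alert-only, no changes."""
    return [r for r in observed_rules if r in POLICY_DENIED]

observed = [
    ("corporate-lan", "cardholder-data", 443),  # allowed by policy
    ("corporate-lan", "cardholder-data", 22),   # a misconfiguration
]
for src, dst, port in audit(observed):
    print(f"ALERT: {src} -> {dst}:{port} violates segmentation policy")
```

Only in the second, automated phase would the system act on such findings rather than merely reporting them.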

Although at this stage the system isn’t making any changes automatically, it can nonetheless give you a significant return on investment very quickly. For example, it speeds up the task of monitoring the infrastructure and making changes, easing the burden on IT staff resources.

But the biggest benefits come when you move into the second phase of deployment. That’s where you start to automate: allowing the system to proactively take control and make changes according to your security, risk and compliance policies. This is more complex, since it generally requires changes to the workings of the organization as a whole.  As well as technology, you will probably have to adapt your business processes, with all the associated cultural change this entails. How long it takes can vary from a few weeks to many months, depending on factors such as the size of your organization, the maturity of your existing systems and processes, the level of engagement from your senior management and the speed at which the business is expecting a return on investment.

One thing is for sure, though. Companies can’t afford to wait until everything is virtualized before they fully embrace automation. If they don’t make a start now on implementing the organizational changes necessary to support a future where systems will be ever more defined and controlled by software, then they increase their risk of downtime, security breaches and compliance failures – and of falling behind more agile competitors. And as an increasing number of organizations begin to suffer the consequences, boards are likely to be ever more receptive to a technology like security policy orchestration which allows them to stay ahead of the curve, minimise risks and increase efficiency.

The author

Reuven Harrison is CTO of Tufin.

• Date: 7th October 2014 • UK/World • Type: Article • Topic: Data centres / centers
