
First the sprint, now the marathon: determining vulnerability remediation velocity through risk-based SLAs

Stephen Roostan looks at the concept of remediation velocity and its role in helping organizations gain control over managing technology vulnerabilities.

For many organizations, the journey to more effective and efficient vulnerability management can feel a bit like running a race in which the finishing line constantly moves further away.

At the initial sprint stage, significant time and effort are dedicated to implementing a series of tools to help automate, prioritise, and address the organization’s vulnerabilities. Having successfully operationalised a data-driven approach to risk-based vulnerability management, your teams are actually patching less, yet your enterprise risk scores are at a level that is both manageable and acceptable. Your organization is not only prioritising the riskiest vulnerabilities, but also identifying those that are likely to become dangerous in the future. You are ahead of the curve, and your risk score reflects that. And IT and security are getting along, because there’s no real argument over which mitigation measures to take. The data is there, and it’s not really up for debate.

But after a year or so of plain sailing, the risk score suddenly jumps, and everyone panics.

This is the moment when security leaders need to disengage from auto-pilot and consider adjusting their programme in line with the organization’s maturing vulnerability management capabilities.

The reality is that new vulnerabilities appear every day, and while most are harmless, occasionally something really serious will hit the streets. Since no one can control what malicious actors do, risk scores will periodically jump as new vulnerabilities arise or change in severity. This is nobody’s fault; it is simply part and parcel of the marathon required to sustain a long-term cyber security defence.

At this point in the race, maintaining an acceptable risk score threshold becomes the priority. This means security leaders will need to realign their thinking to embrace a new concept: remediation velocity. And that’s where adherence to risk-based SLAs comes in.

Working smarter: utilising risk-based SLAs to determine speed of response

Risk-based SLAs make it easy for organizations to establish an appropriate speed of response to new high-risk vulnerabilities, based on their individual appetite for risk. By providing meaningful, data-driven recommendations for remediation, risk-based SLAs take the guesswork out of how quickly IT and security teams need to respond to newly discovered vulnerabilities in line with their organization’s risk tolerance objectives.

We see organizational appetite for risk falling into one of three categories. Organizations with the highest tolerance for risk are content to remediate as fast as their peers. Those with a medium tolerance want to remediate faster than their peers. And those with the least tolerance for risk want their remediation strategies to outpace the speed at which threat actors are able to ‘weaponize’ vulnerabilities.

Rather than simply adhering to arbitrary, standardised 30/60/90-day SLAs, organizations that leverage risk-based SLAs are able to fine-tune their response timeframes and target resources where they will reduce risk most effectively.

The data behind risk-based SLAs

Leveraging real-world intelligence and benchmark data on Mean Time to Remediation (MTTR) and Median Time to Exploitation (MTTE), today’s risk-based vulnerability management platforms are able to provide risk-based SLA recommendations that directly correlate with an organization’s identified appetite for risk. The lower the organization’s risk tolerance, the faster it will need to remediate. But that’s not the only factor at work.
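As a rough illustration (a minimal sketch only; the field names and sample data are hypothetical, and the article does not say how any particular platform calculates its benchmarks), MTTR and MTTE can be derived from timestamped vulnerability records along these lines:

from datetime import date
from statistics import mean, median

# Hypothetical records: when each vulnerability was detected, when it was
# remediated, and when (if ever) exploitation was first observed.
records = [
    {"found": date(2021, 1, 4),  "fixed": date(2021, 2, 1),  "exploited": date(2021, 1, 20)},
    {"found": date(2021, 1, 10), "fixed": date(2021, 1, 25), "exploited": None},
    {"found": date(2021, 2, 2),  "fixed": date(2021, 3, 15), "exploited": date(2021, 2, 9)},
]

# Mean Time to Remediation: average number of days from detection to fix.
mttr = mean((r["fixed"] - r["found"]).days for r in records if r["fixed"])

# Median Time to Exploitation: median number of days from detection to first
# observed exploitation, considering only vulnerabilities that were exploited.
mtte = median((r["exploited"] - r["found"]).days for r in records if r["exploited"])

print(f"MTTR: {mttr:.1f} days, MTTE: {mtte:.1f} days")

In practice a platform would calculate these figures across large volumes of benchmark data rather than a handful of records, but the comparison that matters is broadly the same: if remediation is routinely slower than exploitation, exploits are arriving before the fixes.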

In addition to an organization’s risk tolerance, risk-based SLAs also evaluate two other factors: the priority of the asset for which the SLA is being set, and the vulnerability risk score (high, medium, or low). It is the combination of all these risk-based variables that ultimately underpins the recommended SLA timeframe.
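To make the idea concrete, here is a minimal sketch of how those three inputs might combine into a recommended timeframe. The tier names, multipliers, and day counts are illustrative assumptions for this sketch only, not Kenna Security’s actual model; the point is simply that the SLA becomes a function of risk tolerance, asset priority, and vulnerability risk rather than a fixed calendar interval.

# Illustrative only: the tiers, factors, and day counts below are assumptions
# made for this sketch, not published benchmark values.

# Baseline remediation window (in days) by vulnerability risk score.
BASELINE_DAYS = {"high": 30, "medium": 60, "low": 90}

# How aggressively each risk-tolerance tier tightens the baseline window.
TOLERANCE_FACTOR = {
    "match_peers": 1.0,     # content to remediate as fast as peers
    "beat_peers": 0.6,      # want to remediate faster than peers
    "beat_attackers": 0.3,  # want to outpace weaponization by attackers
}

# Higher-priority assets get tighter windows than low-priority ones.
ASSET_FACTOR = {"high": 0.5, "medium": 1.0, "low": 1.5}


def recommended_sla_days(risk_tolerance: str, asset_priority: str, vuln_risk: str) -> int:
    """Return a recommended remediation window, in days, for one combination
    of organizational risk tolerance, asset priority, and vulnerability risk."""
    days = (BASELINE_DAYS[vuln_risk]
            * TOLERANCE_FACTOR[risk_tolerance]
            * ASSET_FACTOR[asset_priority])
    return max(1, round(days))


# A low-tolerance organization, a business-critical asset, and a high-risk
# vulnerability produce a far tighter window than a flat 30/60/90-day policy.
print(recommended_sla_days("beat_attackers", "high", "high"))   # 4 days
print(recommended_sla_days("match_peers", "low", "low"))        # 135 days

Swapping in different tolerance factors is what lets each organization fine-tune its own timeframes, which is the contrast with the one-size-fits-all 30/60/90-day approach described above.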

Metrics are shifting as vulnerability management comes of age

Over time, vulnerability management processes will mature as organizations move away from an ‘everything is a risk’ mindset to focus instead on fixing those vulnerabilities that pose the greatest risk first.

Following the initial sprint to achieve a stable risk score, a few key characteristics will mark the moment a vulnerability management programme comes of age. In most cases, IT operations can work from a self-service model, while security teams focus primarily on reporting, oversight of mitigation efforts, and the handling of exceptions.

But to ensure the programme remains stable and endures for the long term, security leaders will need to continually realign the metrics they use to evaluate its performance and optimise how the organization responds to new high-risk vulnerabilities as they appear. And that means a shift in thinking away from risk scores and toward remediation velocity.

The author

Stephen Roostan is VP EMEA, Kenna Security.


