Cyber security breaches: hiding in plain sight
- Published: Monday, 15 June 2015 08:15
In a world of constantly emerging threats, security is a tough job: but the concepts of best practice have been devised for a reason. The challenge for organizations is to attain that balance between unworkable change control practices and an anarchic environment that provides ample opportunities to hide.
However strong the perimeter security, in the vast majority of organizations there are far too many opportunities for hackers or malware attacks to slide in undetected.
Forensic-level monitoring of system changes provides a means whereby subtle breach activity can be exposed, but just having the means to detect changes is only part of the solution.
In the same way that seemingly clear pond water is revealed to be teeming with life when placed under a microscope, the amount of noise created on a daily basis by critical upgrades, system patches and required updates is, once visible, overwhelming. And when it comes to breach detection, it is virtually impossible to distinguish between the expected file and registry changes these updates prompt and nefarious activity.
Change control provides a procedural means to compartmentalize expected change activity and, in theory at least, isolate unexpected changes including breach activity. But despite pleas from security experts and the advice of best practice guidelines, including ITIL, COBIT and FISMA, to deploy effective change control, the vast majority of organizations simply cannot make it work. For most, the constraint of highly bureaucratic change requests creates delays in urgent system changes that add more risk – it is perceived – than a potential security breach.
Yet the business damage that occurs when hackers can hide in plain sight in this way is widely recognized. Mark Kedgley, CTO, New Net Technologies, explains how to automate intelligent change control using file integrity monitoring to cut down the noise, distinguish the unexpected from the planned and, finally, close the change control loop.
Fighting against change
From Target to Home Depot and, most recently, the Carbanak APT, estimated to have stolen $1B from banks around the world, the fallout of a major breach is horrendous. The negative publicity is even worse when monitoring tools were in place, had identified a suspicious system change or two and were sending out alerts to raise the alarm. The fact that alerts simply got lost in the noise is no excuse. Did no one have the time to investigate or, more realistically, did no one actually believe a breach was likely?
This perception of risk remains a real problem. For all the high profile security disasters, the damage to both reputation and revenue, most people in IT and security still have an ‘it won’t happen to me’ attitude. To be fair, this is not just head in the sand – it is a realistic response to the fact that achieving system lock down and any kind of actionable breach visibility are, for most companies, simply unattainable.
The problem? Noise. File integrity monitoring (FIM) is a great tool and an essential component of the security toolkit. It provides a complete view of every single change that occurs across the IT infrastructure, but unless it is used hand in hand with rigid, zero-tolerance change control, the amount of noise generated on a daily and weekly, let alone monthly, basis is unmanageable.
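To make the mechanism concrete, here is a minimal sketch of the core of any FIM tool: take a cryptographic hash of every file in a baseline snapshot, then compare a later snapshot against it. The function names and SHA-256 choice are illustrative assumptions, not a description of any particular product – real FIM agents also watch registry keys, permissions and security policy, not just file contents.

```python
import hashlib
import os

def snapshot(root):
    """Hash every file under root; the path -> digest mapping is the baseline."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    state[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # file vanished or unreadable between walk and open
    return state

def diff(baseline, current):
    """Return the added, removed and modified paths between two snapshots."""
    added = set(current) - set(baseline)
    removed = set(baseline) - set(current)
    modified = {p for p in set(baseline) & set(current)
                if baseline[p] != current[p]}
    return added, removed, modified
```

Run this across thousands of machines through a patch cycle and the scale of the noise problem the article describes becomes obvious: every one of those legitimate changes lands in the same list as a potential breach indicator.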
Who has time to go through the information deluge that occurs after each set of Microsoft patches is applied, to check that nothing untoward has been buried unnoticed within the system changes? The alternative, a centrally imposed, bureaucratic change control process, is equally unworkable because it places unrealistic constraints on individuals desperate to respond to new security threats or calls from the business for an urgent fix.
Closing the loop
So what is the solution? Organizations simply cannot leave the door open and allow the ill-intentioned to exploit this continual state of change and wander in unchallenged. There has to be a way of cutting down the noise; of distinguishing the mass of justified change from the unexpected – and doing that in a timeframe that makes investigation both possible and useful.
But how? In theory, a company could take a manual snapshot of an individual machine pre- and post-patch and identify the changes. But with hundreds of files added or updated during patching, plus registry and security policy changes on each and every machine, this is simply not possible across any size of IT estate. Furthermore, not all of these patches will be applied at the same time, or even in a predictable manner due to reboot requirements: there is no way of keeping track of this process manually.
However, it is now possible to use automated, intelligent FIM to provide a list of all the changes that occur as a result of that patching exercise on one machine and create a ‘patch blueprint’. Ideally, patches will be pre-staged on a test or non-production machine prior to roll-out anyway, to verify that they have no negative impact on application operation, but any one installation can do the job. With the blueprint in place, each time that same sequence of changes is spotted anywhere else across the IT estate, the activity observed can be confidently attributed to known, named patches, delivering true ‘closed-loop’ intelligent change control. This not only copes with the huge volume of change that occurs but also manages the time issues associated with the long tail of patching exercises. With each change analyzed automatically in real time, all that is left for investigation are the truly exceptional events, including potential breach activity.
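The attribution step can be sketched very simply: treat a blueprint as the set of change events a named patch is known to produce, then subtract matching events from what was observed, leaving only the residue for investigation. The event representation and patch name below are illustrative assumptions; production tools match on far richer evidence (hashes, signatures, ordering) than a path and change type.

```python
# A change event is modeled here as a (path, change_type) pair,
# e.g. ("/opt/app/libfoo.so", "modified").

def classify(observed, blueprints):
    """Split an observed change set into events explained by known patch
    blueprints and residual events that need investigation."""
    explained = {}
    residual = set(observed)
    for patch_name, blueprint in blueprints.items():
        matched = residual & blueprint
        if matched:
            explained[patch_name] = matched
            residual -= matched
    return explained, residual
```

Everything in `residual` is, by construction, a change that no approved patch accounts for – exactly the short, reviewable list the closed-loop approach promises.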
By pre-staging patches as part of the regular patching process and extracting a template of changes, a company can roll out the patches across the entire organization, knowing that all changes will be automatically compared with that blueprint. While a company will generate a huge amount of noise when FIM is first turned on, by tracking the IT infrastructure through a few cycles of weekly patching, systems upgrades and updates, the organization can quickly categorize good behavior – leaving just the unexplained and unexpected for investigation or review. Malware and other breach activity – if any – will be found within these residual changes.
In practice, within a month or so, the information deluge has been reduced to a highly filtered, highly categorized and labeled view of the entire IT infrastructure. The other compelling aspect of this approach is that it removes, or at least lessens, the need for strict and bureaucratic change management. Because the volume of noise created by patches and upgrades is reduced, necessary changes can be made immediately as required and then retrospectively assessed to confirm that the changes made on the system dovetail exactly with those planned. As a result, good change control is still employed – but without the constraints of time and bureaucracy that too often derail best intentions.
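The retrospective assessment described above can be reduced to a simple check: did each observed change event fall inside an approved change window on the change calendar? The window format and labels below are assumptions for illustration; a real implementation would also match the change's content against the change record, not just its timing.

```python
from datetime import datetime

def within_window(event_time, windows):
    """Return the label of the first approved change window covering the
    event, or None if the change happened outside any approved window.
    Windows are (label, start, end) tuples from a change calendar (assumed)."""
    for label, start, end in windows:
        if start <= event_time <= end:
            return label
    return None
```

A change that returns `None` is exactly the kind of out-of-band activity that retrospective, closed-loop change control is designed to surface.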
Once an organization is in this position of homeostatic equilibrium, it is pretty straightforward to head off breaches and malware infections – control has been attained without imposing unworkable constraints on regular day-to-day activity.
Conclusion: it’s a balancing act
Enabling every IT team, of any size and maturity, to confidently embrace change control is a massive step forward in IT security. However, this is not just a question of making the process simpler and easier to manage; it is about a change in attitude. Organizations need to accept that an attack is not only possible but likely – and that requires a mind-set shift.
Mark Kedgley is chief technology officer, New Net Technologies.