Tawnya Lancaster considers the importance of access management and control for the zero trust model of information security and how it can contribute to a more dynamic model for resilient, flexible information security.
The impact of COVID-19 on IT infrastructure and the workforce, and the subsequent acceleration of digital transformation, have pushed zero trust (ZT) into the mainstream. Globally, enterprises of all sizes are now strategising how to use ZT as a model for redesigning security processes, policies, and controls. Top of mind for many of those organizations is how the ZT model can improve their ability to adapt quickly to future changes in the workforce, the continued disaggregation of the network, and the evolution of threats.
COVID-19 has fundamentally changed how security teams think about protecting the business. However, it’s not the only factor. As businesses increasingly embrace mobile, cloud, and edge computing, network traffic patterns are changing. This is due to the expectation for anytime, anywhere access to the network and the emergence of technologies that require security to be delivered closer to individual users, devices, applications and data. Enterprises are also challenged with an increasingly large and complex attack surface. This is driving security leaders to look for ways of simplifying processes and policy, consolidating security tools, and minimising risk. Underpinning all of this is the need to quickly adapt to the unforeseen threats that will arise with new technology.
Implementing effective network security controls is foundational to zero trust. However, just as importantly, organizations should be considering how to tightly manage and control access to the business without creating operational friction. This is particularly true as businesses plan to return and recover from the pandemic while being challenged to optimise their investments in cyber security technologies and services.
Effective access management now starts with the realisation that legacy technologies are waning in efficacy. For example, relying on source IP addresses is problematic because they were never designed to authenticate traffic. Authentication was handled by technologies at higher levels of the stack, typically the operating system (OS) and application layers. For network connectivity, this default results in an excessive amount of implicit trust that is unreasonable in today’s digital environment.
Additionally, organizations need to understand that ZT principles cannot be superimposed on legacy network infrastructure and operations. A critical part of implementing ZT strategies includes rebuilding or refreshing the network with the intention of moving partially or completely to network virtualisation (NV) or the cloud. Using NV and cloud services, security professionals can more easily make use of software-based technology to achieve more granular network segmentation and to centralise security controls.
They can also take advantage of analytics, programmable orchestration, and automation to quickly turn off and on network and security controls for applications, devices, users, and data as needed. This helps provide security that is pervasively infused into the network fabric. As a result, the network itself is a tool that contributes to the overall security of the enterprise.
Like any security framework, zero trust has evolved over the years, influenced by practitioners and ongoing industry feedback. Though access control has always been a part of ZT, in 2019 Forrester ‘formalised’ the requirement to ‘limit and strictly enforce access control’ into a core, critical pillar of the framework, making it of equal importance to network security.
This move to emphasise the importance of managing and controlling access reflects a larger industry shift towards distributed, pervasive security. Security professionals are realising the need to push security to the edge and to focus as much on the entity requesting access, and why the request is being made, as on the data or service being requested. This is a direct result of the digital revolution: most expressly, cloud adoption, the emergence of edge applications, and the decentralisation of the global workforce.
There are several critical concepts related to access management and control to consider:
- Visibility and inventory: Identify and map the flow of sensitive data; create and maintain a comprehensive inventory of hardware devices, applications, and other software and hardware assets, including gaining visibility to all devices that access enterprise IT systems. Furthermore, catalogue information about the security posture of each and use this information to help establish trustworthiness.
- Microsegmentation: Isolate applications and devices closer to the workload, including setting up microperimeters by using software-defined perimeters (SDPs) or other controls, including next-generation firewalls (NGFWs), containers, and native APIs of the cloud fabric.
- Least-privilege access: Exercise the principle of ‘least-privilege access’ by creating dynamic security policies and extending multi-factor authentication for user, machine, and mutual authentication.
- Monitoring and enforcement: Evolve to continuous monitoring of risk and trust for each entity (users, devices, and applications) by developing a risk/trust engine that makes use of machine learning (ML) and ad hoc rules to score all network flows and connections; strictly enforce policy; and create a feedback loop to deliver application metrics and data back into the system to further inform risk/trust decision making. (This area tends to be the most difficult to roll out and execute.)
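The risk/trust engine described in the last bullet can be sketched as a function that blends ad hoc rules with a machine-learning anomaly score and strictly enforces a policy threshold. The feature names, weights, and threshold below are illustrative assumptions, not a prescribed design:

```python
from dataclasses import dataclass

# Illustrative risk/trust scoring sketch. The features, weights, and
# threshold are hypothetical; a production engine would continuously
# re-score every flow and feed outcomes back into the model.

@dataclass
class AccessRequest:
    user_mfa_passed: bool
    device_in_inventory: bool
    device_patched: bool
    geo_anomaly: bool        # e.g. request from an unusual location
    ml_anomaly_score: float  # 0.0 (normal) .. 1.0 (anomalous), from a model

RULE_WEIGHTS = {
    "no_mfa": 0.4,
    "unknown_device": 0.3,
    "unpatched_device": 0.2,
    "geo_anomaly": 0.2,
}
DENY_THRESHOLD = 0.5  # assumed policy cut-off

def risk_score(req: AccessRequest) -> float:
    """Blend rule hits with the ML anomaly score into one risk value."""
    score = 0.0
    if not req.user_mfa_passed:
        score += RULE_WEIGHTS["no_mfa"]
    if not req.device_in_inventory:
        score += RULE_WEIGHTS["unknown_device"]
    if not req.device_patched:
        score += RULE_WEIGHTS["unpatched_device"]
    if req.geo_anomaly:
        score += RULE_WEIGHTS["geo_anomaly"]
    # Weight rules and model output equally in this sketch.
    return min(1.0, 0.5 * score + 0.5 * req.ml_anomaly_score)

def decide(req: AccessRequest) -> str:
    """Strictly enforce policy: deny anything above the threshold."""
    return "deny" if risk_score(req) > DENY_THRESHOLD else "allow"
```

In practice the decision would also be logged and returned to the engine as feedback, closing the loop the bullet describes.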
Building a ZT-based programme that supports strict yet dynamic management and control of access is a challenging transition in which development, networking, and security teams will need to align more tightly. For most, this journey will be iterative. More importantly, there is no one-size-fits-all model for ZT. Every organization has unique business drivers and risk tolerance that need to be considered. Also, remember that a framework for access control will evolve along with changes in an organization’s IT systems, new technology advancements, and changes in the threat landscape.
As enterprises embark on this journey, there are many questions to consider, including but not limited to the following: How does the organization identify a device? (In many cases, this may require a custom solution for certain devices, as no single solution will work on every device an enterprise is managing.) How does it enforce and track actions? What percentage of processes should be automated? How does it ensure accuracy in the decision engine? What is the process for quickly resolving false positives, i.e. a legitimate entity being denied access? And no doubt, as identity and access management technologies continue to mature, the models we use to frame our processes will continue to evolve as well.
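As one illustration of the device-identification question above, a common (if imperfect) approach is to derive a stable fingerprint from device attributes. The attribute set, key handling, and helper below are hypothetical assumptions for the sketch; many deployments instead use per-device certificates issued through an MDM or PKI where the platform allows:

```python
import hashlib
import hmac

# Hypothetical device-identification sketch: derive a deterministic
# identifier from device attributes plus an enterprise secret, so raw
# attributes need not be exposed outside the inventory system.

ENTERPRISE_KEY = b"example-secret-rotate-me"  # assumed; keep in a KMS

def device_fingerprint(attributes: dict) -> str:
    """HMAC over sorted attribute pairs yields a stable device ID."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hmac.new(ENTERPRISE_KEY, canonical.encode(),
                    hashlib.sha256).hexdigest()

laptop = {
    "serial": "C02XG2JHJG5H",          # example value
    "platform": "macOS",
    "tpm_ek_hash": "ab12cd34",          # attestation-key hash, if available
}
device_id = device_fingerprint(laptop)  # same attributes -> same ID
```

Because the fingerprint is deterministic, the same device always maps to the same inventory record, which is what the visibility-and-inventory concept above depends on.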
One thing we do know is that network and security transformation, including the transformation of managing and controlling access to the network, is no longer a ‘nice to have’. These capabilities are now critical to enterprises being able to move users, data, and applications virtually across the business, while also helping to protect the business from the known and yet-to-be-identified security threats of emerging technologies. To gain the maturity in dynamic access control required for this transformation, organizations must use software-defined solutions for their networking and security controls. Virtualised controls improve the security team’s ability to quickly react to threats, adjust infrastructure and access, and reconfigure security policies to support business resilience through today’s and tomorrow’s turbulence.
Tawnya Lancaster is Lead Product Marketing Manager, AT&T Cybersecurity.