Data is the currency of the digital world, and its importance in today’s business landscape cannot be overstated. Protecting this investment requires data resilience strategies to be in place. In this article, Alberto G. Alexander explains what data resilience involves.
Data is one of the most valuable organizational assets, and protecting it is an ongoing challenge for most businesses. Data resilience is a way to handle this challenge more effectively and to reduce the impact should a successful cyber attack or data breach take place.
The resiliency of any information technology system (a network, a server, a storage system, or a data center) is defined by “its ability to rebound quickly and resume its original operation after a sudden halt” (Strengholt, 2020). Data resilience is a bounce-back capability: a plan for the unforeseen circumstances that may arise in the course of running a business. It is a well-structured aspect of a facility’s architecture for maintaining data protection and taking action when breaches occur. In essence, data resilience entails fortifying data against threats and recovering jeopardized data. These features make it clear that data resilience helps ensure uninterrupted business operations.
What is data resilience?
Data resilience is “having an organization's data always available and accessible despite unexpected business disruptions such as cyber attacks” (Eslamian, 2022). It is an organization’s ability to ensure business continuity despite any unexpected disruption. It leverages an automated approach that standardizes data protection and provides centralized visibility and management across all workloads and locations. When data is resilient, it cannot be accessed or modified by unauthorized entities.
“Data resilience refers to the durability of an IT system when faced with potential issues” (Hogan, 2019). Data is resilient when tools and systems can automatically detect and mitigate problems that could result in data loss. Of course, that also includes restoring compromised data. Data resilience is also intimately tied to the IT concept of high availability, where systems are designed to function for as long as possible without failure.
Data resilience is fundamentally important to businesses of all sizes. That’s because of its ability to reduce downtime and allow for business continuity. Because of the heavily data-dependent nature of modern business, IT downtime can cost a company considerable sums of money.
Data resilience can also help a business in preparing for a variety of different IT contingencies. Disaster recovery, for example, is closely linked to data resilience as having continual access to data is an important part of bringing technology assets back online after a disaster occurs.
Data resilience strategies
The resilience techniques employed by an organization will vary depending upon the workload. Normally, the techniques prioritized are those that protect mission-critical data workflows.
A business has to consider several things when choosing the appropriate data resilience techniques, such as: (1) business needs; (2) the general business continuity plan; and (3) recovery time objectives (RTOs) for different data functions, set according to how the data is used.
A comprehensive data resilience approach might combine several strategies, such as snapshots, backups, synchronous and asynchronous replication, mirrored copies of data, and off-site redundancy, among others.
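To make the snapshot idea concrete, the Python sketch below creates a timestamped point-in-time copy of a directory. This is only an illustration under simplifying assumptions: production snapshot systems typically use copy-on-write at the storage layer rather than full copies, and the function and path names here are hypothetical.

```python
import os
import shutil
import time

def snapshot(source_dir, backup_root):
    """Create a timestamped point-in-time copy of source_dir under backup_root.

    Real snapshot systems use copy-on-write; a full directory copy
    is used here only to illustrate the concept.
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = os.path.join(backup_root, f"snapshot-{stamp}")
    shutil.copytree(source_dir, dest)  # recursive copy of the whole tree
    return dest
```

Each call produces an independent copy, so a corrupted or encrypted working set can be rolled back to the most recent clean snapshot.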
Data resilience management best practices
There are several managerial practices that can be adopted when implementing data resilience management across an organization. These include:
Reduce duplicate data
There are many reasons to purposely duplicate data: for backups, for disaster recovery copies, and for version control, for example. There are also cases where similar data is created by repeated processes, with the result that unnecessary data is stored. Setting up a manual or automated process that audits data regularly and removes duplicates can help you better manage your data and reduce the cost of unnecessary storage. This also keeps the data clean and ready for analysis and queries.
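An automated duplicate audit can be as simple as grouping files by a hash of their contents. The sketch below only reports duplicate groups rather than deleting anything (the removal policy is an organizational decision); it assumes files small enough to stream chunk by chunk.

```python
import hashlib
import os

def find_duplicates(root):
    """Walk `root` and group files by SHA-256 of their contents.

    Returns a list of groups, each a list of paths whose contents
    are byte-for-byte identical.
    """
    by_hash = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                # Stream in chunks so large files do not load fully into memory.
                for chunk in iter(lambda: f.read(65536), b""):
                    digest.update(chunk)
            by_hash.setdefault(digest.hexdigest(), []).append(path)
    # Only hashes shared by two or more files are duplicates.
    return [paths for paths in by_hash.values() if len(paths) > 1]
```

Running this on a schedule and reviewing the reported groups gives a simple, auditable way to reclaim storage without risking an automated deletion of data that was duplicated on purpose.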
Focus on data quality
Maintaining a high level of quality is critical to ensuring the organization’s data is usable and relevant. Companies do not need to retain all the data they generate. “In many cases, an organization may create an expensive ‘data swamp’ that collects low quality or irrelevant data, which cannot really be used” (Hogan, 2019).
Another way to ensure data quality is to continuously validate data accuracy. Keeping old data is useful for business analytics, but before retaining it, make sure it is accurate, relevant, and actually suitable for ongoing analysis. The same goes for real-time data generated by production systems.
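Continuous validation usually means checking each record against a schema before it is retained. The sketch below uses entirely hypothetical field names and rules (`customer_id`, `amount`, `date`); the point is the pattern of separating valid records from rejected ones rather than any particular schema.

```python
from datetime import datetime

def validate_record(record):
    """Return a list of problems found in one record (empty list = valid)."""
    problems = []
    if not record.get("customer_id"):
        problems.append("missing customer_id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        problems.append("amount must be a non-negative number")
    try:
        datetime.strptime(record.get("date", ""), "%Y-%m-%d")
    except (TypeError, ValueError):
        problems.append("date must be YYYY-MM-DD")
    return problems

def filter_valid(records):
    """Split records into (valid, rejected) before retaining them for analysis."""
    valid, rejected = [], []
    for r in records:
        (valid if not validate_record(r) else rejected).append(r)
    return valid, rejected
```

Rejected records can be logged and investigated instead of silently polluting the data set that analytics depends on.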
Prioritize data protection and security
The data management strategy should be continuously updated to meet data security and privacy standards, as set by regulatory entities where the organization does business. Here are key data protection measures to keep your data secure:
- Access control: these controls enable the organization to specify privileges for each type of user. The goal is to prevent the abuse of credentials.
- Encryption: turns the data into meaningless code, which can only be deciphered by keys controlled by the organization. The goal is to ensure that important data cannot be used even if accessed by unauthorized individuals.
- Physical security: using techniques such as hardening to help secure data stored on devices and ensuring there are strong security measures in place in the physical facility.
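The access control measure above can be sketched as a small role-based check. The roles and privilege sets here are invented for illustration; a real deployment would pull them from an identity provider or directory service.

```python
# Hypothetical role-to-privilege mapping for illustration only.
PERMISSIONS = {
    "admin":   {"read", "write", "delete"},
    "analyst": {"read"},
    "backup":  {"read", "write"},
}

def is_allowed(role, action):
    """Return True if the role's privileges include the requested action."""
    return action in PERMISSIONS.get(role, set())

def require(role, action):
    """Enforce the check: deny by default instead of silently allowing."""
    if not is_allowed(role, action):
        raise PermissionError(f"role '{role}' may not '{action}'")
```

Denying by default (an unknown role gets an empty privilege set) is what prevents the abuse of stolen or misconfigured credentials.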
Set up monitoring and alerts
Monitoring processes and systems helps the firm gain visibility into its data repositories. Monitoring should be based on metrics that provide specific and actionable insights into important patterns and events affecting the data.
The more data there is, the more difficult it becomes to maintain visibility. To extend its reach and control, the organization can leverage automated data classification systems and monitoring tools that use behavioral analysis, generating alerts only when behavior deviates from the norm in order to minimize false positives.
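A behavioral baseline can be as simple as a z-score test: alert only when a metric strays several standard deviations from its own recent history, rather than whenever it crosses a fixed limit. A minimal sketch, in which the three-standard-deviation threshold is an assumption to be tuned per metric:

```python
import statistics

def alert_on_deviation(history, latest, threshold=3.0):
    """Return True if `latest` deviates strongly from its own baseline.

    Alerting on deviation from the metric's observed behavior, instead
    of a fixed limit, is what keeps false positives down.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # A perfectly flat baseline: any change at all is anomalous.
        return latest != mean
    z = abs(latest - mean) / stdev
    return z > threshold
```

For example, a storage metric that normally hovers around 100 units would not trip the alert at 102, but a sudden jump to 150 would.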
A good data resilience strategy is essential for any type of organization. It enables the management of rapid data growth and helps the organization unify data recovery and quickly get back up and running after any event that compromises data. It brings many benefits, including enhanced performance, reduced costs, reliable and efficient business operations, minimized risk, and strong protection in every part of the organization’s operations.
Dr. Alberto G. Alexander holds a Ph.D. from The University of Kansas and an M.A. from Northern Michigan University. He is an MBCI; a BCMS, ISMS and QMS IRCA Lead Auditor; and an Approved Tutor. He is the managing director of the international consulting and managerial training firm www.gerenciayproductividad.com, located in Lima, Peru. He can be contacted at email@example.com. He is a professor at the Graduate Business School of UESAN, Lima, Perú.
- Strengholt, Piethein, Data Management at Scale, O’Reilly Media, 2020.
- Eslamian, Saeid, Disaster Risk Reduction for Resilience, Springer, 2022.
- Hogan, Lara, Resilient Management, A Book Apart, 2019.