Why a global file system should be a core component of your business continuity strategy
- Published: Friday, 31 May 2019 08:28
Recovering unstructured data after an outage can be a significant challenge, but one a global cloud-based file system can make far easier. Warren Arnold looks at the issue…
Business continuity and disaster recovery solutions have been effective in reducing downtime from days to hours. However, modern solutions typically rely on backup files, which normally sit inactive and must be tested and restored. That takes time.
To further complicate things, not all backup files are usable, and some can contain malware. As a result, IT teams often must go deep into their archives to find the best version to restore, and the deeper they go, the more data, time and productivity the organization stands to lose.
The good news? There is a better way to manage the recoverability of unstructured data after an outage, which isn’t through the backup application and files, but rather through how the file system itself operates. Today’s global cloud-based file systems do more than change how enterprises store, use and collaborate with data and files. They also provide IT with a powerful, fast and effective way to address backup and disaster recovery-related tasks.
Your data is at risk
Eighty percent of enterprise data is unstructured, comprising documents, graphics and files, including those for computer-aided design and computer-aided manufacturing (CAD/CAM), to name a few examples. Data normally exists in three states: at rest, in use and in transit. But data also exists in another state: ‘at risk’. Both structured and unstructured data face threats ranging from natural disasters striking a data center to malware or ransomware attacks.
Your best recovery option is not your traditional backup
Cloud computing has upended enterprise IT as we know it. Increasingly, organizations of all kinds utilize cloud object storage as the most reasonable repository for their unstructured data and files. Initially this was done to overcome the capacity issues that for so long plagued traditional approaches to storage while simultaneously enabling organizations to embrace the inherent resiliency and economy of the cloud.
A cloud-based file system takes this evolution one step further – enabling IT to use cloud object storage for primary storage while still enjoying the control and performance they have come to expect from traditional network attached storage. In this way, a cloud-native global file system enables an enterprise to realize the benefits of the cloud, centralize corporate IT’s control over the global file share, and still give users immediate access to the files they need as if they were on their own desktop.
Enterprises also deploy global file systems to coordinate document contributions and version control for their global teams. Some of these systems can be configured to snapshot file changes every 15 minutes, and more frequently for very active, or hot, data. If an enterprise leverages a cloud or hybrid cloud environment, these same file deltas, fully encrypted with customer-owned encryption keys, can also be sent to the organization’s cloud-based storage, where the gold copies are also secured.
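The snapshot-and-delta mechanism described above can be approximated in a few lines. The sketch below is purely illustrative (the function names and the use of SHA-256 content hashes are assumptions, not a description of any vendor's actual implementation): each snapshot records a content hash per file, and only files whose hashes changed since the previous snapshot need to be shipped to cloud storage.

```python
import hashlib


def snapshot(files: dict) -> dict:
    """Record a content hash for every file at a point in time."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}


def deltas(prev: dict, curr: dict) -> set:
    """Return paths whose content changed (or newly appeared) since the last snapshot."""
    return {path for path, digest in curr.items() if prev.get(path) != digest}


# Two snapshots, 15 minutes apart: only the changed file becomes a delta
# that must be encrypted and sent to object storage.
t0 = snapshot({"plan.dwg": b"v1", "spec.doc": b"v1"})
t1 = snapshot({"plan.dwg": b"v2", "spec.doc": b"v1"})
changed = deltas(t0, t1)  # only "plan.dwg" changed
```

In practice the deltas would be computed at block rather than file granularity and encrypted with the customer-owned keys mentioned above, but the principle is the same: unchanged data is never re-uploaded.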
Public cloud providers like Amazon Web Services, Azure and Google maintain high levels of availability and data durability through redundancy and geographically distributed data centers, at a level most enterprises can’t begin to match in their own environments. In this way, the cloud has emerged as the ultimate medium in which to store critical data - a stark contrast to prevailing perceptions just a few short years ago.
In addition, writing new data to the cloud as Write Once Read Many (WORM) objects prevents any data from being overwritten or corrupted. Maintaining separate metadata versions for each snapshot also allows for fast restores of metadata, while also enabling enterprises to quickly access any urgently needed files stored in the cloud without a full restore or migration which can encompass many hundreds of gigabytes or many terabytes of data. Because migrations from the cloud can take time, to do this quickly requires a true, cloud-native global file system rather than a simple data backup to the cloud.
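The two ideas in this paragraph, write-once objects and per-snapshot metadata, can be illustrated with a toy model (the class and key names below are invented for illustration, under the assumption that each snapshot's metadata maps a path to an immutable object key):

```python
class WormStore:
    """A toy object store enforcing Write Once Read Many (WORM) semantics."""

    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        # An existing object can never be overwritten or corrupted.
        if key in self._objects:
            raise PermissionError(f"WORM violation: {key} already exists")
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


store = WormStore()

# Each snapshot keeps its own metadata version: path -> immutable object key.
snap1 = {"plan.dwg": "plan.dwg@v1"}
store.put("plan.dwg@v1", b"original drawing")
snap2 = {"plan.dwg": "plan.dwg@v2"}
store.put("plan.dwg@v2", b"revised drawing")

# Restoring one urgently needed file is a metadata lookup and a single
# object fetch - not a full multi-terabyte migration out of the cloud.
restored = store.get(snap1["plan.dwg"])
```

Because the metadata versions are small relative to the data, restoring a snapshot's view of the file system is fast even when the underlying objects total terabytes.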
This combination of solutions – cloud object storage and a cloud-native global file system – can make restoring unstructured data, including all of the application files required to conduct business today, easier than ever before. And, depending on the data in question and other factors, this approach enables enterprises not only to meet their recovery time objective (RTO), but in most cases to be back in operation far faster. Of course, being back in operation in seconds or minutes is very different from being out of commission for hours or days!
If this approach to safeguarding all your unstructured data is one you’d like to pursue, there are a few best practices to keep top of mind. For starters, not all file systems function the same. A system designed for and relying on local disks will not be able to achieve the required granularity, and it will quickly run out of space as only a limited number of snapshots can be stored locally.
In contrast, a true cloud-native, global file system saves data, including snapshots, directly to the cloud, where capacity is not a limiting factor. In this configuration, a snapshot becomes more than a backup: it is a true, immutable point-in-time copy. Storing the master copy and all its metadata in the cloud also reduces the volume of data that needs to be restored after a loss, resulting in a much speedier recovery.
Increase protection further with WORM
As mentioned earlier, to take data protection a step further, each snapshot should be written as a set of WORM objects, ensuring that the data’s integrity is preserved and that restores can always be performed from a viable, clean version. Local snapshots can be corrupted by malware or, in some cases, hardware malfunctions, making them unusable for system recovery. Since WORM data cannot be altered, IT recovery teams have a wide selection of immutable options from which to select the best restore point.
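Selecting the best restore point from that catalog of immutable snapshots reduces to a simple search. The sketch below assumes a hypothetical snapshot catalog of (timestamp, metadata-key) pairs and a known compromise time; the names are illustrative only:

```python
from datetime import datetime

# Hypothetical catalog of immutable snapshots, oldest first.
snapshots = [
    (datetime(2019, 5, 30, 9, 0), "snap-0900"),
    (datetime(2019, 5, 30, 9, 15), "snap-0915"),
    (datetime(2019, 5, 30, 9, 30), "snap-0930"),  # taken after the infection
]


def best_restore_point(catalog, infected_after: datetime) -> str:
    """Pick the most recent snapshot taken strictly before the compromise."""
    clean = [(ts, key) for ts, key in catalog if ts < infected_after]
    return max(clean)[1]


# Malware struck at 09:20, so the 09:15 snapshot is the best clean copy.
point = best_restore_point(snapshots, datetime(2019, 5, 30, 9, 20))
```

With 15-minute snapshot intervals, the worst-case data loss in this model is one interval of changes, which is what makes the recovery point objective so tight.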
Increasing the resiliency and speed of recovery using a true cloud-native global file system and a public cloud or private object store in a hybrid model is an extremely cost-effective way to reduce interruptions to business continuity. Depending on the frequency with which a business chooses to configure its snapshots, it may only miss a few moments of productivity before the recovery is made. And barely missing a beat due to an unforeseen event is something all business continuity teams are after.
Warren Arnold is senior technical marketing manager at Nasuni. The company’s platform enables enterprises to embrace a new approach to file storage, synchronization and collaboration that combines the performance and control associated with traditional network attached storage and the unlimited capacity, inherent resiliency and economy of the cloud. With more than three decades of information technology expertise in senior-level sales and systems engineering roles, Arnold provides the technical expertise needed to deliver detailed, accurate evaluations for customers, presentations and training. Prior to joining Nasuni in 2011, he developed and led the sales and systems engineering program at EqualLogic. Previously, he held system engineering management positions at Lucent, Ericsson, Chipcom-3Com and Harris Corporation.