
Flash-to-Flash-to-Cloud is on the way to becoming the standard solution for backup

The traditional ‘Disk-to-Disk-to-Tape’ (D2D2T) backup model is no longer adequate for always-on enterprises requiring rapid data restores, and it is being replaced by Flash-to-Flash-to-Cloud (F2F2C), says Peter Gadd…

In a D2D2T model, the primary disk creates a local backup on a secondary (backup) disk or on a Purpose-Built Backup Appliance (PBBA). This is typically followed by a backup to tape media, which is moved offsite or replicated to another PBBA offsite for disaster recovery purposes. The result is multiple backup silos and an expensive infrastructure dedicated solely to backup. Added to this are the fragile durability of data on tape and the additional hardware that tape backup requires. From a functional point of view, backup data on tape is also ‘locked up’ offsite in silos, so it cannot provide any additional value unless it is recalled.
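A minimal sketch may make the silo problem concrete. The tier names, restore times and data below are illustrative assumptions, not figures from the article: each hop in the D2D2T chain is a full, separate copy, and the slowest tier sets the worst-case restore time.

```python
# Illustrative model of a D2D2T copy chain; not any vendor's actual product.
from dataclasses import dataclass, field


@dataclass
class Tier:
    name: str
    restore_hours: float                      # assumed restore latency, not measured
    objects: dict = field(default_factory=dict)


def backup(source: dict, chain: list[Tier]) -> None:
    """Copy every object through each tier in order (disk -> PBBA -> tape)."""
    for tier in chain:
        tier.objects.update(source)           # each hop is a separate copy: a silo


primary = {"db_dump.bak": b"..."}
chain = [
    Tier("secondary disk / PBBA", restore_hours=2.0),
    Tier("offsite tape", restore_hours=24.0),
]
backup(primary, chain)

# The offsite tape tier dominates the worst-case recovery time.
print(f"Worst-case restore: {max(t.restore_hours for t in chain)} hours")
```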

There are two major flaws in this approach, particularly with regard to PBBAs. Firstly, they are inflexible: PBBAs are typically designed to copy data in one direction, from source to destination, and to store that data through deduplication, while the rest of the IT industry is rapidly adopting agile and elastic solutions. Secondly, PBBAs often deliver poor restore performance, because the backup data is fragmented across both disk and tape.

These two problems lead to a slow and complex data infrastructure. Because recovering data from the backup appliance takes a long time, IT leaders may opt to deploy additional storage infrastructure to support other workloads, such as the Test/Dev environment. Snapshots of production applications and databases are often also backed up to separate storage systems to enable faster recovery when needed. Even then, there is no guarantee that data can be recovered quickly and reliably in a disaster.

PBBAs and backup data silos are the building blocks of an outdated approach; in today's data-driven world, companies want to use data to drive innovation and gain competitive advantage.

Historically, ‘cost per terabyte’ drove data protection strategies – now companies are hungry for ‘time-to-insight’. To accommodate this, entire data infrastructures must work faster, which also benefits backup. High transaction volumes and large amounts of data are no longer just about cost-effective storage, but about getting more out of the data. However, legacy data protection architectures can make it extremely difficult, if not impossible, to support workloads beyond backup and recovery.

In modern backup scenarios, Flash is already replacing the traditional hard drive and the long-serving storage tape. Compared to the traditional approach, the new Flash-to-Flash-to-Cloud (F2F2C) backup model enables faster recovery and simplifies IT operations. The primary Flash storage system, running Oracle or VMware workloads for example, backs up to Flash-based secondary storage for fast restores. Backup data is then migrated to a public cloud such as Amazon Web Services (AWS) to take advantage of cloud economics and durability. Backup data, whether on-premises or in the cloud, is stored extremely efficiently through compression and deduplication. This approach allows for faster restores, more flexible recovery, secure long-term retention of backup data, and easier handling. Fast recovery takes place in the on-premises environment, while backup data in the public cloud is automatically used to support recovery operations when on-premises data is not available.
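As an illustration of this restore flow, here is a minimal sketch in Python, assuming a hypothetical local flash path and an AWS S3 bucket named ‘offsite-backups’ (both illustrative, not from the article). Recovery prefers the on-premises flash tier and falls back to the cloud copy automatically.

```python
# Sketch of the F2F2C backup and restore path; paths and bucket are assumptions.
import shutil
from pathlib import Path

import boto3  # AWS SDK for Python

LOCAL_FLASH_TIER = Path("/backup/flash")   # fast secondary flash tier (assumed path)
CLOUD_BUCKET = "offsite-backups"           # hypothetical S3 bucket

s3 = boto3.client("s3")


def backup(source: Path) -> None:
    """Stage 1: copy to local flash; stage 2: migrate the copy to the cloud."""
    local_copy = LOCAL_FLASH_TIER / source.name
    shutil.copy2(source, local_copy)                             # flash-to-flash
    s3.upload_file(str(local_copy), CLOUD_BUCKET, source.name)   # flash-to-cloud


def restore(name: str, target: Path) -> None:
    """Fast path from local flash, with automatic fallback to the cloud tier."""
    local_copy = LOCAL_FLASH_TIER / name
    if local_copy.exists():
        shutil.copy2(local_copy, target)                  # on-premises fast recovery
    else:
        s3.download_file(CLOUD_BUCKET, name, str(target)) # cloud fallback
```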

However, the interplay between on-premises infrastructure and the public cloud is not naturally harmonious: the cloud is not specifically designed for enterprise applications, and enterprise infrastructure is not as user-friendly as the cloud. In the cloud era, organizations should be able to make infrastructure decisions based on what works best for their environment and business objectives. Extending a data-centric Flash architecture into the cloud offers businesses more mobility and flexibility. The cloud can be used for backup and data protection, as well as for developing more complex webscale applications, with advanced storage features such as snapshots and multi-zone replication. An object storage deduplication engine is also valuable here.
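To make the deduplication idea concrete, here is a minimal sketch of content-addressed deduplication, the core technique behind an object storage deduplication engine. The fixed chunk size and in-memory chunk store are simplifying assumptions; a real engine would persist unique chunks in an object store.

```python
# Content-addressed deduplication: identical chunks are stored once, keyed by digest.
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024             # 4 MiB fixed-size chunks (assumption)
chunk_store: dict[str, bytes] = {}        # digest -> chunk; stands in for a bucket


def dedup_write(data: bytes) -> list[str]:
    """Split data into chunks, store each unique chunk once, return its recipe."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)   # skip chunks already stored
        recipe.append(digest)
    return recipe


def dedup_read(recipe: list[str]) -> bytes:
    """Reassemble the original object from its chunk recipe."""
    return b"".join(chunk_store[d] for d in recipe)
```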

Businesses can replace traditional purpose-built backup appliances with Flash-optimised software environments for fast recovery, and storage tapes with cloud object storage for external data retention. This new F2F2C backup architecture modernises backup and recovery processes by combining Flash-optimised software and the public cloud, opening up flexible reuse options for backup data that was previously considered a burden. Public cloud storage offerings provide a reliable foundation for creating value-added features and supporting enterprise applications across the enterprise and the cloud. New storage solutions deliver a unified hybrid cloud experience, consistent APIs and automation for developers, as well as backup and restore options in the public cloud.
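One way tape retention maps onto cloud object storage is through lifecycle rules. The following sketch uses the AWS S3 lifecycle API via boto3; the bucket name, prefix and retention periods are illustrative assumptions, not recommendations from the article.

```python
# Replace offsite tape retention with tiered cloud object storage (illustrative).
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="offsite-backups",                 # hypothetical backup bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "long-term-backup-retention",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            # After 30 days, move backups to a colder, cheaper storage class.
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            # Expire backups after roughly seven years (example policy only).
            "Expiration": {"Days": 2555},
        }]
    },
)
```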

The next-generation data protection architecture should be based on scale-out storage systems designed from the ground up for unstructured data, providing unmatched performance for a wide variety of workloads. Fast backup and recovery, as well as Test/Dev and analytics operations, can be consolidated on a single storage platform, and all of this can happen seamlessly across a single, large hybrid cloud environment. The storage platform must therefore provide consistent storage services, resilience and APIs for on-premises environments and the various cloud models. This helps to ensure that applications, once created, can run almost anywhere. This is especially true for data backup, as backups can be created on-premises, then migrated to the public cloud and retrieved as needed. Depending on the requirements, the amount of data and its sensitivity, businesses can decide whether the data should migrate to the public cloud or be stored locally. This decision should take into account the various backup, restore, and long-term data retention options available in the public cloud or on-premises.
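A placement decision like the one just described can be expressed as a simple policy. This sketch is purely illustrative: the sensitivity flag and size threshold are assumptions, not criteria given in the article.

```python
# Illustrative placement policy: cloud vs. on-premises retention for a backup set.
from dataclasses import dataclass


@dataclass
class BackupSet:
    name: str
    size_gb: float
    sensitive: bool      # e.g. regulated data that must stay on-premises (assumed)


def placement(b: BackupSet, cloud_max_gb: float = 10_000) -> str:
    """Return where this backup set should be retained."""
    if b.sensitive:
        return "on-premises"        # keep regulated data local
    if b.size_gb > cloud_max_gb:
        return "on-premises"        # too large to migrate economically
    return "public-cloud"           # default: cheap, durable object storage


print(placement(BackupSet("erp-daily", size_gb=800, sensitive=False)))  # public-cloud
```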

It is never too late to modernise the data protection environment and solve the problem of infrastructure complexity and cost. This is easily possible today with a unified, fast and simple Flash-based data platform built on the F2F2C approach.

The author

Peter Gadd is RVP of EMEA Core Markets, Pure Storage.


