Disaster recovery means different things to different people. In its most basic form, it means simply having a full system backup at the ready should lightning strike or a flood douse a company’s servers.
But that definition is quickly falling by the wayside, rendered obsolete by the demand for uptime and the growing amounts of data businesses and apps need to operate. If a recovery involves more than 100 terabytes of data, it could take days or even over a week to get up and running again as data is transferred and configurations are put back in place. With petabytes of data, a timely recovery is flat-out infeasible. Backups alone are no longer a viable solution for companies that lose thousands or even tens of thousands of dollars every minute their site or app is unavailable to customers.
Experts advise that the focus of a modern-day disaster plan should be business continuity, not just recovery. That means storing a synchronized, updated copy of an entire app environment, including network and security settings, configurations, patches, and data, ready to run so that downtime is minimized.
The status quo for disaster recovery today, at least for bigger companies, means keeping a “hot failover” solution in place. When the main data center goes down, a second identical one instantly kicks in and the end user experiences only a slight delay. But for small businesses, investing in physical servers, real estate, staff, and electricity to operate a backup data center that will be used only a small fraction of the time is cost-prohibitive.
Both of these solutions are complex and cumbersome to test. So cumbersome, in fact, that in a survey of 343 IT executives responsible for disaster recovery, more than half said they tested less than once per year. Only 42 percent kept up with industry standards, testing quarterly or more often to see whether their disaster recovery plan would actually work.
Enter the cloud, where virtualization providers offer a cheaper way to maintain a backup environment. Many disaster-recovery-as-a-service (DRaaS) companies can use the cloud to automate the testing process, removing the burden from IT staff’s shoulders.
Most recently, an advancement in a technology called “pre-recovery” is taking shape, which should lower the cost barrier for small- and medium-sized enterprises. Increased competition will drive prices down further in the future. Pre-recovery creates backup images as in the basic process described above, but then rebuilds these backups into fully operational replica systems in the cloud. The recovered systems are then tested, so all the work normally done during a recovery is already finished in advance.
Using this option, recovery is nearly instant, only requiring a reboot of the system. No costly investment into a completely separate data center is needed.
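The pre-recovery flow described above can be sketched in code. This is a minimal illustration, not a real DRaaS API: every name here (`Replica`, `pre_recover`, `fail_over`, the snapshot ID) is hypothetical, and the restore, configuration, and testing phases are represented by simple flags standing in for what a real platform would do.

```python
from dataclasses import dataclass


@dataclass
class Replica:
    """A pre-built copy of the production environment, kept in the cloud."""
    snapshot_id: str
    configured: bool = False  # data, network/security settings, patches restored
    verified: bool = False    # recovery tests already run in advance
    running: bool = False


def pre_recover(snapshot_id: str) -> Replica:
    """Rebuild a backup image into a ready-to-run replica, then test it.

    In a real DRaaS platform each step would call the provider's API;
    here the phases are marked complete as stand-ins.
    """
    replica = Replica(snapshot_id)
    replica.configured = True  # rebuild the image into an operational system
    replica.verified = True    # test the recovered system ahead of time
    return replica


def fail_over(replica: Replica) -> Replica:
    """Disaster strikes: recovery is just booting the pre-tested replica."""
    if not (replica.configured and replica.verified):
        raise RuntimeError("replica not ready; fall back to a full restore")
    replica.running = True     # near-instant, equivalent to a reboot
    return replica


standby = pre_recover("nightly-snapshot-example")
live = fail_over(standby)
print(live.running)  # True
```

The point of the sketch is the ordering: all the expensive work happens in `pre_recover`, long before any disaster, so `fail_over` has almost nothing left to do.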
Disaster recovery in the traditional sense will soon be deprecated in favor of more advanced, instant failover solutions like pre-recovery. Some experts predict disaster recovery as we know it will soon be a thing of the past entirely, as businesses opt for live-live (a.k.a. active-active) solutions over copying data from one data center to another. Each active data center contains local restore points. A live-live solution on two or more data centers allows for maintenance with less risk of downtime.
IT executives have been slow to move to the cloud, however, citing security as their biggest concern. An increasing number of businesses are migrating to hybrid cloud solutions, which combine on-site physical servers with virtualized private cloud servers. This makes disaster recovery a more complicated process, but DRaaS solutions do exist and, should the growth of hybrid cloud continue, more are on the way.
“Hindenburg disaster” by Rupert Colley licensed under CC BY 2.0