Tuesday, 1 September 2015

Cloud backup: Don't rely on your provider alone

Cloud providers take care of your data, even in disaster. But you shouldn't leave the job solely to them

You've moved data to the cloud. Now it's time to talk about disaster recovery -- how to build a resilient system that can recover from catastrophic failure.
Amazon Web Services, for example, says its S3 service "is designed to deliver flexibility, agility, geo-redundancy, and robust data protection." To IT, that means the system is fault-tolerant, managing the resiliency needs for you. ("Geo-redundancy" means that, if a center goes down, another center in another part of the country or world will pick up the load. You should never miss a beat.) 
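If you want to see what that geo-redundancy looks like when you control it yourself, S3's cross-region replication can be configured programmatically. Here's a minimal sketch using boto3; the bucket names and IAM role ARN are placeholders, and versioning must already be enabled on both buckets for the call to succeed.

import boto3

s3 = boto3.client("s3")

# Replicate every new object in the source bucket to a bucket in
# another region. Bucket names and the role ARN are examples only.
s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-replication-role",
        "Rules": [{
            "ID": "replicate-everything",
            "Prefix": "",           # empty prefix = all objects
            "Status": "Enabled",
            "Destination": {
                "Bucket": "arn:aws:s3:::example-destination-bucket",
            },
        }],
    },
)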
Given that AWS and other public cloud providers build in a certain amount of resiliency, does that mean your data is safe? For the most part, it is. Public cloud providers take great pains to see that data is not lost -- ever.
However, there are additional ways that enterprises should think about this issue.

First, consider primitive-data resiliency approaches. This is the replication of data at the platform level to platforms that are not in the same location. For the most part, this is image-level replication, so it's an all-or-nothing approach.
This type of backup is handy if you have catastrophic failures and need to recover quickly to continue operations. If this is done correctly, people using the data should never know there was an interruption. Although this approach is effective, it's also resource-intensive, and it doesn't let you work with the data at a fine-grained level.
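One common way to do this in AWS, for instance, is to copy volume snapshots into a second region so a regional outage can't take the backups down along with the primary. A rough sketch with boto3 follows; the snapshot ID and regions are placeholders.

import boto3

source_region = "us-east-1"
target_region = "eu-west-1"

# The copy is initiated from the destination region's EC2 endpoint.
ec2 = boto3.client("ec2", region_name=target_region)
result = ec2.copy_snapshot(
    SourceRegion=source_region,
    SourceSnapshotId="snap-0123456789abcdef0",  # placeholder ID
    Description="Cross-region copy for disaster recovery",
)
print("Copy started:", result["SnapshotId"])

Note that this is exactly the all-or-nothing trade-off described above: the whole image moves whether one byte changed or a terabyte did.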
Second, consider record-level data resiliency approaches. This means that you deal with data backup and resiliency operations at the record or data level. You track the data, not only the image, and can perform auditing and logging procedures as part of the backup and recovery process.
This approach is handy for auditing and compliance logging. Moreover, only the data that's changed is replicated, so (if done correctly) it should use fewer resources than image-level replication.
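Because this layer is typically something you build yourself (more on that below), here's one way the core loop might look: a Python sketch that replicates only the records changed since the last run and writes an audit entry for each. The backup_store interface and the record layout are hypothetical.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("replication-audit")

def replicate_changed_records(records, last_run, backup_store):
    """Copy only records modified since last_run, logging each copy."""
    for record in records:
        if record["updated_at"] <= last_run:
            continue  # unchanged since the last pass -- skip it
        # backup_store is a stand-in for whatever remote store you use.
        backup_store.put(record["id"], json.dumps(record))
        # The log line doubles as the audit/compliance trail.
        audit.info("replicated record %s at %s", record["id"],
                   datetime.now(timezone.utc).isoformat())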

The trade-off involves automation versus DIY. Primitive-data resiliency is typically a part of the public cloud offering, so it's automated. You don't have to worry about it, but the provider does. However, the record-level approach is DIY, so you need to build in the data replication operations. There are tools that can help, but you build it, you own it.
The best path is to do both. This means you're doubly protected and can deal with auditing and compliance as well as basic resiliency.
You might think the "do both" option is akin to wearing a belt and suspenders, but it's relatively low cost and will keep you out of trouble. You don't want to be caught with your pants down, right? 

Source: http://www.infoworld.com 
