Top 5 Considerations for Disaster Recovery Planning for Business Data


August 3, 2010

NATICK, Mass., August 3, 2010 – Whether it’s a natural disaster, software corruption, hardware failure, or simple human error, at some point business data will be put at risk. Effective disaster recovery (DR) planning must address five key considerations – downtime, data integrity, cost, simplicity and security – according to experts at Nasuni, the leading cloud storage gateway.

Data disaster recovery plans for safeguarding business data can include tape, disk, cloud technologies, or some combination of two or more systems. There is no universal DR plan that suits all organizations, but the following considerations are intended to lay the groundwork for disaster planning and preparedness.


Downtime

Time is money: a study by Contingency Planning Research put the cost of downtime at roughly $18,000 per hour for many businesses. How long it takes to recover data – how long a business can afford to be out of business – is a top priority.

Recovering terabytes of data from tape involves first identifying which set is needed, requesting a delivery from the offsite storage provider, correlating each tape with a logbook, determining which to load first, then restoring data. This process may take days of working round-the-clock, depending on the amount of data. Recovering from disk storage devices kept on-site reduces time to hours instead of days, but disk itself is not a viable option if the place of business is damaged. Disk is also vulnerable to corruption and accidental erasures.
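The downtime figures above translate directly into dollars. A back-of-envelope sketch, using the $18,000-per-hour estimate cited earlier; the recovery-time assumptions for each medium are illustrative, not measured values:

```python
# Rough downtime-cost comparison. The hourly rate comes from the
# Contingency Planning Research estimate cited above; the recovery
# times are assumptions for illustration only.
HOURLY_COST = 18_000  # USD per hour of downtime

recovery_hours = {
    "tape (offsite)": 48,  # assumed: days of round-the-clock restores
    "disk (on-site)": 6,   # assumed: hours instead of days
}

for medium, hours in recovery_hours.items():
    print(f"{medium}: ~{hours} h downtime ≈ ${hours * HOURLY_COST:,}")
```

Even under these rough assumptions, the gap between a days-long tape restore and an hours-long disk restore runs to hundreds of thousands of dollars.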

Data Integrity

Traditional data restores are often less than 100 percent successful – some files are simply gone for good. When these files pertain to customers, transactions, or anything else not easily reproduced, lost data becomes lost revenue.

As a data repository, tape is notoriously unreliable. With a full day between backups, a failure can wipe out up to 24 hours of work – half a day's worth on average. Surveys suggest that up to 20 percent of nightly backups fail to copy all data, and that 40 percent of tape recoveries fail completely. The more a tape is reused, the less reliable it becomes and the more likely it is to corrupt files.
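Taken together, the survey figures above imply a troubling success rate for a single backup-and-restore cycle. A minimal sketch, assuming the two failure modes are independent:

```python
# What the cited survey numbers imply for tape: a night's data
# survives only if the backup copied everything AND the later
# restore succeeds (independence is an assumption here).
p_backup_ok = 0.80   # 20% of nightly backups miss data
p_restore_ok = 0.60  # 40% of tape recoveries fail completely

p_cycle_ok = p_backup_ok * p_restore_ok
print(f"Chance a night's data survives backup and restore: {p_cycle_ok:.0%}")
```

Under these numbers, fewer than half of backup-and-restore cycles deliver all the data back intact.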

Disk mirroring provides data redundancy, but in the event of a disaster, all data produced since the last backup is still lost. Nor does synchronous replication fully protect against data loss: if software is corrupted, data is deleted from the main server, or a virus strikes, the problem is faithfully copied to the mirror, and older versions of files cannot be recovered. As with tape, disk depends on fragile hardware.


Cost

The more a business relies on continuous access to stored data, the more data it stores – and the more it must back up. A Pepperdine University report titled The Cost of Lost Data calculates that a single lost megabyte can cost upwards of $10,000.

Tapes themselves are relatively cheap, but given their fallibility, it is not viable to depend on a medium that can lose data when bent or dropped. The same logic applies to disk, but that's not all: disk mirroring essentially requires purchasing a duplicate set of servers. As data volumes grow, so do backup costs – often into the hundreds of thousands of dollars, or more.


Simplicity

Disaster is troublesome enough – recovery shouldn't be. Traditional strategies drop IT administrators into a maze of hardware and bookkeeping, whether that means loading a tape and waiting or piecing data together from corrupted disks. Mirroring disk to a second site can improve reliability, but finding a second data center and establishing a reliable link between sites is often daunting and full of unexpected complexity.


Security

Just as primary storage must be protected, so must its backups. Most reputable backup facilities provide strong security, but tapes are sometimes lost or damaged in transit – one improperly closed door on the back of a truck, and sensitive business data is on the street. With disk, the safety of backups depends entirely on internal security mechanisms, which still leave room for human error and malicious attack.

Consider a Cloud Storage Gateway

The recent generation of cloud storage gateways dramatically simplifies disaster recovery planning. These gateways are designed to bridge your existing operations with the tremendous reliability offered by the major cloud storage providers. Gateways are typically packaged as virtual machines that are available in all the major hypervisors.


The use of advanced caching algorithms allows data recovery from the cloud to be virtually instantaneous: if a critical server is lost, a quick download of the gateway virtual machine can reestablish the connection to the cloud so that all of the data is available. Leading cloud storage gateways allow customers to meet their recovery time objectives by prioritizing access to the most critical data first.
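The idea of meeting recovery time objectives by restoring critical data first can be sketched with a priority queue. This is an illustrative toy, not Nasuni's actual implementation; the file names and priority values are invented:

```python
import heapq

# Hypothetical RTO-driven restore ordering: pull the most critical
# files from the cloud first. Lower number = higher priority.
files = [
    (3, "archives/2008-logs.tar"),
    (1, "db/orders.sqlite"),    # priority 1 = most critical
    (2, "shares/contracts/"),
]

heapq.heapify(files)
restore_order = [heapq.heappop(files)[1] for _ in range(len(files))]
print(restore_order)  # critical data first, archives last
```

The business-critical database comes back first; cold archives trickle in afterward, so operations can resume before the full restore completes.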

Cloud storage providers create multiple copies of data and store them on many disparate servers. Should one server fail, the data is already safe on several others, and it is automatically duplicated to a new server. All of the major cloud providers operate data centers with thousands of servers and can guarantee storage in multiple geographic locations – making it virtually impossible for every copy of a customer's data to fail at once.
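The value of multiple copies is easy to quantify. A back-of-envelope sketch, assuming replicas fail independently (the per-replica failure probability below is an assumption, not a vendor figure):

```python
# With independent replicas, the chance that every copy of an
# object is lost at once falls geometrically with replica count.
p_fail = 0.01  # assumed failure probability of any one replica

for replicas in (1, 2, 3):
    print(f"{replicas} replica(s): loss probability {p_fail ** replicas:.0e}")
```

Three copies turn a one-in-a-hundred risk into roughly one in a million, which is why geographic replication is so effective.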

In the cloud, the price of disaster recovery is the price of data storage. Cloud vendors charge only for the storage used; while this alone cuts costs tremendously, storage software using data compression and deduplication reduces costs further. The size of a data set can be reduced dramatically before it is sent to the cloud, translating to a drop in price as well.
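The compress-and-deduplicate step described above can be sketched in a few lines. This is a minimal illustration using fixed-size chunks and an in-memory dictionary standing in for cloud storage; real gateways use more sophisticated chunking:

```python
import hashlib
import zlib

# Minimal compress-and-deduplicate sketch: split data into chunks,
# skip chunks already stored, compress what remains before "upload".
CHUNK = 4096
stored = {}  # chunk hash -> compressed bytes, standing in for the cloud

def upload(data: bytes) -> int:
    """Store only unseen chunks; return bytes actually sent."""
    sent = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = hashlib.sha256(chunk).hexdigest()
        if key not in stored:
            stored[key] = zlib.compress(chunk)
            sent += len(stored[key])
    return sent

payload = b"x" * 200_000  # highly redundant sample data
print(f"raw: {len(payload):,} B, sent: {upload(payload):,} B")
```

Re-uploading the same payload sends zero bytes, since every chunk hash is already known – the mechanism behind the cost reduction described above.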

Customers that leverage cloud storage need not worry about tapes, drives, or differing inventory systems. In the event of data loss, recovery is as simple as logging back into the cloud account.

Cloud storage involves inherent vulnerabilities, but provided all data is encrypted before it is transmitted to the storage service provider, and other common-sense steps are taken, the risks are practically nil.

“Cloud storage has emerged as a simple and cost effective means to implement offsite data protection for businesses,” said Andres Rodriguez, Nasuni’s CEO. “The combination of traditional backup with offsite data protection schemes stretches recovery time objectives well beyond what most businesses can or should tolerate. Cloud storage gateways offer fast restores at a fraction of the cost and pain.”


About Nasuni

Nasuni was founded in 2009 by storage veterans to deliver a secure gateway to cloud storage that makes the cloud feasible for business users. The Nasuni Filer is a virtual NAS file server that runs on VMware and leverages the resources of the cloud to simplify file storage and protection. Targeting the mid-market, Nasuni’s solution eliminates the need for incremental storage hardware and the resulting capital expense to manage unstructured file growth. The company is backed by North Bridge Venture Partners and Sigma Partners. To download the Nasuni Filer, or for more information, visit

