Backup’s 3-2-1 Rule is Outdated

July 6, 2021

The backup vendors’ myopic focus on durability at the expense of recoverability has left enterprise customers reeling from ferocious ransomware attacks. In my previous post I discussed how effective data protection can, and indeed must, deliver both data durability and fast recoveries. A versioned file system can achieve both, but the backup vendors and experts mislead IT professionals by claiming that this approach doesn’t deliver sufficient durability because it fails the classic 3-2-1 test.

Our understanding of durability needs to be updated to reflect the massive technological change taking place within IT infrastructure. In this post, I will detail how the old 3-2-1 backup rule is used against versioned file systems, explain why it needs to be updated to the era of cloud storage, and show that a versioned file system that lives in the cloud is just as durable as backup and vastly superior when it comes to recoverability.

The 3-2-1 Rule

The 3-2-1 rule of data protection stipulates that data is only properly protected if you have 3 copies on 2 different media, and that 1 of these must be offline. It’s a good strategy. I am not going to challenge its core idea. Yet I am going to push back against the assumption that only backup can satisfy 3-2-1. That is just not true. Cloud-based file versioning satisfies 3-2-1 completely, but this is something the backup industry does not want anyone to know.

Let’s start with 3. Everything Nasuni writes to the cloud has three copies. These three copies live in at least two separate data centers. This behavior is guaranteed by the Service Level Agreements issued by all of the major public hyperscalers.
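The value of the 3 is easy to see with simple arithmetic. The sketch below uses a purely hypothetical per-replica annual loss probability (cloud providers do not publish per-replica figures), assuming independent failures:

```python
# Illustrative only: the per-replica loss probability below is an
# assumption for the sake of arithmetic, not a published provider figure.
p_replica_loss = 1e-4  # assumed annual probability of losing one replica

# With three independent replicas, data is lost only if all three fail.
p_all_three = p_replica_loss ** 3

print(f"P(lose one replica) = {p_replica_loss:.0e}")
print(f"P(lose all three)   = {p_all_three:.0e}")
```

Placing the replicas in at least two data centers further reduces the correlated failure modes (fire, flood, power loss) that the independence assumption above ignores.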

As for storing this data on 2 different storage media, I will point out that when this rule was developed, the backup vendors were talking about one copy on a hard drive and another on a tape or a CD-ROM. That distinction is no longer relevant; half the people reading this article may never have used a tape drive or a CD-ROM. More recently, the 2 has been re-interpreted to mean two different systems or data formats.

Both interpretations of the rule can be supported in modern versioned file systems. “Two separate systems” implies object-level portability across hyperscalers, and the ability to generate an exact object-level copy of UniFS on a different object storage system, whether to support migration or a second system copy, was built into UniFS from its inception.

Instead of storing copies on 2 different media, companies can select multiple regions within one cloud provider and store their data in two geographically distant locations. Providers operate each region as a largely independent instance of their storage software, so a failure in one region does not propagate to the other. Or, for yet another level of protection, you could store your data with two entirely different cloud providers: one copy of your file system could reside in Azure, for example, and another in AWS or GCP.
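The dual-target idea is simple to sketch. In the toy model below, the two stores are plain dicts standing in for clients of two cloud regions (or two different providers); this is an illustration of the pattern, not UniFS’s actual replication mechanism:

```python
class DualRegionWriter:
    """Toy sketch: mirror every write to two independent object stores.

    'primary' and 'secondary' stand in for clients of two cloud regions
    (or two different providers); here they are plain dicts.
    """

    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def put(self, key, data):
        # Write to both targets so either one alone can serve a recovery.
        self.primary[key] = data
        self.secondary[key] = data

    def get(self, key):
        # Fall back to the secondary if the primary copy is missing.
        return self.primary.get(key, self.secondary.get(key))


region_a = {}  # stand-in for, e.g., an Azure container client
region_b = {}  # stand-in for, e.g., a GCS bucket client
writer = DualRegionWriter(region_a, region_b)
writer.put("chunk-0001", b"file data")

region_a.clear()                  # simulate losing the primary region
print(writer.get("chunk-0001"))   # the secondary copy still serves reads
```

Either target alone is sufficient for recovery, which is exactly the property the original “two different media” rule was after.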

This leaves us with the 1 offline copy, possibly the most outdated term in the rule. When the rule was written, “offline” meant a storage medium that was not connected to any network. File systems have grown too large, and no one has the time or desire to make an offline copy of the complete data set.

Every medium today is some type of random-access, read/write, high-density storage device, typically a spinning drive or a solid-state drive. These devices must be connected to the network in order to be of any use. An offline copy is simply not practical, because there is no such thing as a disconnected system anymore. There is, however, a way to get the equivalent level of protection without requiring that the “offline” system be disconnected from the network.

We need to return to first principles. The reason early backup proponents insisted on the 1 in the rule was that they wanted to protect that last copy of data from deletion in the event that both the main production system and its secondary backup (the 2) system were compromised. The goal was to allow deletes only in a controlled way, independent of the primary and secondary systems they were trying to protect. So in this sense, “offline” is really about separating that last copy from both the primary and secondary systems.

The hyperscalers have all addressed this with a soft-delete feature that gives you an independent way of unwinding errors in the primary system. By forcing a time delay between a delete command and its execution, the hyperscalers have recreated not only the offline copy but also the core idea behind tape rotation: a set time interval separates anything that happens in the production system from taking effect in the “offline” system.
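The mechanics are easy to model. The sketch below is a toy in-memory model of soft-delete semantics, not any provider’s actual API: a delete only marks an object, permanent removal waits out a retention window, and an undelete within that window restores the data.

```python
class SoftDeleteStore:
    """Toy model of cloud soft delete: deletes are deferred, not immediate.

    Illustrative only -- real providers expose soft delete as a service
    setting on a storage account or bucket, not a client-side class.
    """

    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self.objects = {}   # key -> data
        self.deleted = {}   # key -> (data, delete_timestamp)

    def put(self, key, data):
        self.objects[key] = data

    def delete(self, key, now):
        # A delete only moves the object into the soft-deleted set.
        self.deleted[key] = (self.objects.pop(key), now)

    def undelete(self, key, now):
        data, ts = self.deleted[key]
        if now - ts < self.retention:
            self.objects[key] = data
            del self.deleted[key]
            return True
        return False  # retention window elapsed; the object is gone

    def purge_expired(self, now):
        # Permanent removal happens only after the retention window.
        for key, (_, ts) in list(self.deleted.items()):
            if now - ts >= self.retention:
                del self.deleted[key]
```

Even if an attacker issues deletes against the production system, the data survives until the window elapses, which gives operators an independent interval in which to intervene, much like the gap a tape-rotation schedule used to provide.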

Cloud Versioned File Systems and the New 3-2-1

A versioned, cloud-native file system like UniFS® is just as durable as old-fashioned backup because it achieves all the goals of the 3-2-1 strategy. UniFS meets the 3 copies requirement. The fact that it allows our enterprise customers to store data in two different cloud regions, or with two different cloud vendors, addresses the rule of 2, and it does so without forcing companies to rely on outdated, endangered technology. Finally, the soft delete feature developed by the hyperscalers solves the potential delete problem that originally sparked the demand for one offline copy.

Recoveries With a Versioned File System

Over more than a decade of operations, UniFS has proven itself to be as durable as backup. Our enterprise customers have been perfectly happy to have 3 copies of their data in one cloud provider, with no offline copy. Storing data in 2 cloud regions has proven to be just as durable and resilient an approach as the old two-different-media rule. We have never experienced a data loss event because of a failure of cloud storage hardware. The one time that we did have an incident in our versioning system (caused by a glitch in our Retention Policy), the cloud’s soft-delete feature fully mitigated the impact on our customer. Functioning like the offline copies of old, soft delete protected that last copy of the data, and we moved forward without incident. And we showed, once again, that 3-2-1 is a sound and complete data durability strategy.

By design and in practice, UniFS is just as durable as backup. Yet as I explained in my last piece, its primary advantage over backup may be that it does not move your data around and, as such, enables much faster recoveries. All it takes to restore is to point back to healthy data. You don’t need to copy large amounts of files or rebuild and repopulate file servers.
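The recovery argument can be sketched in a few lines. The toy model below is illustrative, not UniFS: each snapshot is an immutable version, and restoring means repointing the “current” reference at an earlier healthy version rather than copying data back.

```python
class VersionedFS:
    """Toy model of a versioned file system (illustrative, not UniFS).

    Each snapshot is an immutable mapping of path -> content; 'current'
    is just a pointer into the version history.
    """

    def __init__(self):
        self.versions = []   # list of immutable snapshots
        self.current = -1    # index of the live version

    def commit(self, snapshot):
        self.versions.append(dict(snapshot))  # store an immutable copy
        self.current = len(self.versions) - 1
        return self.current

    def read(self, path):
        return self.versions[self.current][path]

    def restore(self, version):
        # Recovery is a pointer flip: no data is copied,
        # no file server is rebuilt or repopulated.
        self.current = version


fs = VersionedFS()
v1 = fs.commit({"report.txt": "quarterly numbers"})
fs.commit({"report.txt": "ENCRYPTED-BY-RANSOMWARE"})  # attack lands
fs.restore(v1)                                        # point back to healthy data
print(fs.read("report.txt"))                          # prints "quarterly numbers"
```

The restore cost is independent of data volume, which is why this approach scales where copy-back recovery does not.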

In my next post on this subject, I’ll walk through the steps involved in recovering data via backup vs. doing so with a versioned file system, but first I’d like to return to a point I made in my first story in this series. It’s an essential distinction between traditional backup and Nasuni’s file data protection approach that more and more companies are starting to understand. By protecting your data in the cloud and enabling fast recoveries, UniFS doesn’t just take care of your files. It protects your business.

Isn’t that the point?

For a deeper discussion on why backup is broken, register to join my live webinar on July 21st.
