“For years, the traditional storage controller has been a game of RAM and disk. Put the cloud inside and what you have is a third component.”
In this clip from Tech Field Day 2011, Nasuni CEO Andres Rodriguez explains what the next generation of storage controllers looks like and how integrating the cloud changes everything.
ANDRES RODRIGUEZ: For years, the storage controller has been a game of RAM and disk. If you put the cloud inside, all you have to think of is that you now have a third component, the cloud. And its relationship to disk, and its relationship to RAM, is one and the same. I’m not talking about a tiering system, I’m talking about a caching system. The job of a controller is to deliver consistently high performance and control over your data by basically playing a game among multiple media types that have different properties. And these two, RAM and disk, used to be a lot more similar. In this case there is actually a semantic difference between the two. That’s what’s hard about it: if what you want is a real NAS, a real storage controller, you’re trying to reconcile two worlds that are very, very different. Disk is what we know, and cloud is what I’ve just shown you.

Why bother doing that? Because if you can do it such that you inherit the best properties of the cloud and pass them through, you could have something very useful to IT. The things we want to inherit: first, that it’s unlimited. This thing up here is unlimited. You want to inherit the fact that it’s protected. And you want to inherit, and this one is kind of tricky, the fact that it’s available. Highly available, from anywhere.
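The tiering-versus-caching distinction Rodriguez draws can be sketched in a few lines. This is an illustrative model, not Nasuni’s actual implementation (all names are invented): RAM and disk act as LRU caches in front of a cloud tier that holds the canonical copy, so evicting a block from the local tiers never loses data.

```python
from collections import OrderedDict

class CachingController:
    """Illustrative read-through cache: RAM over disk over cloud.

    The cloud tier is the authoritative, 'unlimited' store. RAM and
    disk hold only hot data. This is a cache, not a tiering system:
    a block evicted from disk is simply dropped, because the cloud
    copy is canonical.
    """
    def __init__(self, ram_slots, disk_slots, cloud):
        self.ram = OrderedDict()      # hottest blocks, LRU order
        self.disk = OrderedDict()     # warm blocks, LRU order
        self.ram_slots = ram_slots
        self.disk_slots = disk_slots
        self.cloud = cloud            # dict-like, assumed unlimited

    def read(self, key):
        if key in self.ram:           # RAM hit: refresh recency
            self.ram.move_to_end(key)
            return self.ram[key]
        if key in self.disk:          # disk hit: promote into RAM
            value = self.disk.pop(key)
        else:                         # miss: fetch from the cloud
            value = self.cloud[key]
        self._put_ram(key, value)
        return value

    def _put_ram(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)
        if len(self.ram) > self.ram_slots:
            # demote the least-recently-used block to disk
            demoted_key, demoted_val = self.ram.popitem(last=False)
            self._put_disk(demoted_key, demoted_val)

    def _put_disk(self, key, value):
        self.disk[key] = value
        self.disk.move_to_end(key)
        if len(self.disk) > self.disk_slots:
            # safe to drop entirely: the cloud still has it
            self.disk.popitem(last=False)
```

The point of the sketch is the last comment: because the cloud is semantically the system of record, local capacity bounds only performance, never durability.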
Now, when people first see this, their first reaction is, “I don’t have unlimited storage.” And no one has unlimited storage, right? But there are very significant, very visible benefits to having unlimited-like storage. If you have unlimited-like storage, you never have to worry about migration. Your storage systems never fill up to capacity, so you never have to think, “oh shoot, now I have to take everything I had in this volume and migrate it over to that other volume.” If you have unlimited storage, you’re never watching the gas gauge go down. It’s like driving a car that runs forever on whatever fuel you have. The biggest benefit is you never have to go to the gas station. You don’t waste time.

If you have something that is protected, really protected, and we’re talking multiple data centers, thousands of servers, three to five copies of every element, you may not need to back it up. You may not need to make additional copies yourself. All of you know how much time IT spends making copies of data. It is a driver of cost and it is a driver of pain. So the benefit of a system like this is not only that it’s off-site, but that its protection level was designed in from the get-go. Fifteen years ago, when we started working on those systems, they were designed not to be backed up. It’s impractical, impossible, to back them up; they are too large. The idea of taking a full copy of the data set in a multi-petabyte storage system and moving it over to a secondary system is close to insanity. It takes forever.
AUDIENCE MEMBER: Fantastic presentation. But the concept “protected”: some of it is done purely to make sure you’ve got the data available again, and some of it is to recover for other reasons. So “protected” isn’t just “I backed it up.” It could be version control, it could be a whole host of things you need to keep, and those features need to be in there as well. There’s more detail to it.
ANDRES RODRIGUEZ: Excellent. Very good, yes. So let’s dive one level down into that. The typical reasons you need protection: One is version control; you need to go back through versions. Another is offsite copies, something that protects you. And if you do need to recover, it had better be pretty darn quick in getting you the data again. And then there’s a whole set of application-dependent backup and restore concerns you have to worry about. That’s very important. If I’m backing up Exchange, I want to make sure I can go back and see the inboxes and understand the data structure. That third — yeah, go ahead.
AUDIENCE MEMBER: Compliance issues as well.
ANDRES RODRIGUEZ: Compliance issues as well, absolutely. And you have to be able to delete the snapshots, or delete the backups. Now, let me just say, of everything we just described, the only thing I’m not addressing (and remember, this is because, as I told you, we are very much like NetApp for the cloud) is application awareness. Our model for backups does not go into understanding the data structures of the applications that are writing to us. Our model is a snapshot-based model. Which means we can do all the other stuff: we can do versioning, and we can go back through version snapshots. But we would not be a proper backup system for something like Exchange, where you actually want Backup Exec and you want to understand the data structures. Just like a NetApp box alone wouldn’t be a good — exactly. But this is a conversation we’ve been having, debating, arguing back and forth with NetApp for years. NetApp has been telling the world for years that snapshots plus replication make everything good and dandy. Nasuni is saying the same thing: you don’t need to worry about backup anymore, as long as it’s not that special case. But the snapshots in Nasuni’s case are not snapshots to another file system or to another storage array. They’re snapshots to a completely different storage architecture: to an object store. That’s the big change. That’s what opens up the other great things here, which are very exciting.
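The closing point, snapshots taken against an object store rather than against another array, can be sketched as content-addressed manifests. This is a hypothetical illustration, not Nasuni’s format (class and method names are invented): each snapshot is a manifest mapping paths to content hashes, unchanged files share the same object across versions, and deleting a snapshot for compliance just drops its manifest.

```python
import hashlib
import time

class ObjectStoreSnapshots:
    """Illustrative snapshot-to-object-store model.

    A snapshot is a manifest {path: content_hash}. Objects are stored
    by hash, so identical content is written once; keeping many
    versions costs little beyond what actually changed.
    """
    def __init__(self):
        self.objects = {}     # hash -> bytes (stand-in for a cloud bucket)
        self.manifests = []   # index is the version number

    def snapshot(self, files):
        """Record a version of {path: bytes}; returns its version number."""
        manifest = {}
        for path, data in files.items():
            digest = hashlib.sha256(data).hexdigest()
            self.objects.setdefault(digest, data)  # dedupe identical content
            manifest[path] = digest
        self.manifests.append((time.time(), manifest))
        return len(self.manifests) - 1

    def restore(self, version):
        """Rebuild the full {path: bytes} view of a given version."""
        _, manifest = self.manifests[version]
        return {path: self.objects[h] for path, h in manifest.items()}

    def delete_snapshot(self, version):
        """Drop a version's manifest (e.g. for compliance). Objects no
        longer referenced by any manifest could then be garbage-collected."""
        self.manifests[version] = (None, {})
```

Because versioning falls out of the manifests, this covers the version-control and offsite-copy needs discussed above; what it deliberately does not do, matching the transcript, is understand application data structures such as Exchange mailboxes.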