
How Cloud Storage Works

Overview

Nasuni CEO Andres Rodriguez explains some of the basic principles behind cloud storage and what makes it such a powerful component when used inside a storage controller.

“In order to understand what we’re doing to the storage controller, it is really important to understand how cloud storage works.”

Video Transcript

Andres Rodriguez:

In order to understand what we’re trying to do to the storage controller, it’s really important to understand how cloud storage works. I’m talking about a very specific type of cloud storage: S3 and the object storage in systems like Azure, systems like Nirvanix. There’s one architecture underlying all those systems. I am not talking, by the way, about things like EBS.

That’s a completely different type of architecture that just happens to be in AWS, but is not really an object store in the cloud. So, the way these systems work, all right, (inaudible) cloud, is that instead of using a big, expensive storage controller, you set up regular servers, right? Ordinary servers, and they’re tied together with Ethernet. They could be in one data center, they could be in multiple data centers. So here you are in your happy building, and you’re trying to use the cloud, right? And what you’re going to do is establish a connection to one of these servers, and you’re going to do a PUT over HTTP, typically HTTPS. And you’re going to put in an object. And these objects, compared to block sizes, are actually massive.
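To make the PUT side concrete, here is a minimal sketch using boto3, the AWS SDK for Python; the bucket and key names are invented for illustration.

    import boto3

    s3 = boto3.client("s3")  # credentials are read from the environment

    # Objects are massive compared to disk blocks: push a ~1 MB blob in one PUT.
    blob = b"x" * (1024 * 1024)

    s3.put_object(
        Bucket="example-bucket",   # hypothetical bucket name
        Key="objects/blob-0001",   # hypothetical object key
        Body=blob,                 # the request goes out as an HTTPS PUT
    )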

These are typically, you know, one-megabyte objects that you’re pushing out into this thing; essentially, think of it like a web server. That’s all it is doing: it’s authenticating you, it’s doing a lot of other fancy stuff, but at the end of the day, it’s essentially accepting a one-megabyte blob of information. As soon as it has it, it very quickly looks around and, in a very lazy way by storage standards, starts making copies of that object to other servers. It could even go across data centers. These copies, right, so this is the first, the second, the third. These copies are all guaranteed to be perfect with a cryptographic hash. That was the aha moment of object storage. You know, many, many years ago, we said, wow, if you have all the time in the world to compute a big hash, like an MD5, you can actually guarantee that the copies are perfect. And now I can make copies in this soup of servers. Now, what is the benefit of this? If I’m only talking about eight servers, it’s not a big benefit. But if all of a sudden I throw thousands of servers into the mix, or tens of thousands, the great thing about it is, first of all, I don’t have to back this thing up, because there are copies of everything everywhere. Then, I can always decommission a server, and whatever was on that server, it is the job of the cluster to figure it out.
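As a toy illustration of the “copies guaranteed perfect by a hash” idea, the sketch below checks each replica against the original object’s MD5 digest. The replica list is invented; a real cluster performs this check internally across servers.

    import hashlib

    def digest(data: bytes) -> str:
        return hashlib.md5(data).hexdigest()

    original = b"a one-megabyte blob of information"
    expected = digest(original)

    # Hypothetical replicas: two good copies and one corrupted one.
    replicas = [original, original, b"a corrupted copy"]
    for i, copy in enumerate(replicas):
        ok = digest(copy) == expected
        print(f"server {i}: {'perfect copy' if ok else 'needs re-replication'}")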

How do I bring the number of copies back to the level it was before? You find existing copies, and you make new copies of the data that was just lost. More importantly, if you’re a big web operation like Amazon, this thing grows by just adding more servers. When you add more servers, you’re adding not only storage capacity, you’re adding bandwidth, and you’re adding all the processing that you need to figure out: where are the copies? How many copies do I have? Do I need to make more copies? Do I need to take copies down? (That almost never happens.) So, that’s the model. Now, on the read side, it’s just as great. On the read side, you hit any one server, and that server has something called a distributed hash table, which is basically a big DNS-like map that says, OK, where can I find this object on the other servers in the system? Right? So it’ll go, OK, there’s a copy of that object over here; bring it out and deliver it. And this is typically a GET over HTTPS as well.
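The read-side lookup can be sketched as a ring-style hash lookup. This is a simplified stand-in for the distributed hash table described above, not the actual implementation, and the server names are invented.

    import hashlib

    SERVERS = ["node-a", "node-b", "node-c", "node-d"]
    REPLICAS = 3

    def locate(key: str) -> list[str]:
        # Hash the object key to pick a starting server, then take the
        # next few servers around the ring: a toy DHT-style lookup.
        start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(SERVERS)
        return [SERVERS[(start + i) % len(SERVERS)] for i in range(REPLICAS)]

    print(locate("objects/blob-0001"))
    # Any of the returned servers can satisfy the request, e.g.
    # GET https://node-a.example/objects/blob-0001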

OK? So, so far, so good. Awesome. No backup, infinite scalability, cheap as dirt, because these things are just servers, right? The Achilles’ heel of this system, besides the fact that it’s out there with huge latency and you’re using a protocol that is not really designed for, say, NAS-like storage, is that the semantics are completely wrong. This is what’s called an eventually consistent system. If I want to change any one of these objects, I cannot do that in a consistent way, because any change that I make to that object has to be propagated to all the other servers that have copies of it. Some of those servers may not be online at that time. And therefore, guaranteeing that a change has happened across the system takes an awfully long time. Now, I don’t think any one of you, because I know many of you, all of you, write about storage, would be comfortable with a file system that tells you: you know what? I’ve accepted your changes, but give me some time before I reflect them back, in case you want to read back the files. That would be the biggest joke in storage. We don’t have to come up with anything funnier than that. (laughter) Right? This model, and Amazon has written about it, Amazon actually has done a great job of documenting all this stuff, is called the eventual consistency model. Right? It’s eventual because eventually, if you delete an object, the delete gets propagated through the system. Eventually, if you write something, and eventually, if you use something like Amazon [Virtual?], eventually the changes get around. But everything is eventual; it’s slow-moving. Think of these things like, you know, a humongous oil tanker: it just takes forever to react to you. However, it’s got great properties. And so now, let’s look at what we can do with this.
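A toy model of the eventual consistency problem described above: the write is acknowledged by one replica and propagates lazily, so a read from another replica can return stale data until the change gets around. Everything here is illustrative.

    # Three replicas of the same object, all initially in sync.
    replicas = [{"v": "old"}, {"v": "old"}, {"v": "old"}]

    def write(value: str) -> None:
        replicas[0]["v"] = value      # the write is acknowledged immediately...

    def propagate() -> None:
        for r in replicas[1:]:        # ...but the other copies catch up later
            r["v"] = replicas[0]["v"]

    write("new")
    print([r["v"] for r in replicas])  # ['new', 'old', 'old']: stale reads possible
    propagate()
    print([r["v"] for r in replicas])  # ['new', 'new', 'new']: eventually consistent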