By Andres Rodriguez on March 15, 2012
Building storage systems that scale forever
After giving a public lecture on astronomy, a brilliant English scientist was challenged by an old lady: “What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.” The scientist, in a superior scientific tone, asked, “What is the tortoise standing on?” Without missing a beat the old lady replied, “It’s turtles all the way down!”
The problem with building truly scalable data storage systems is that we insist on having a flat plate somewhere that supports the whole thing. It is our adherence to systems and IT operations that work on data sets in full that causes IT managers to break out in hives during data migrations, backups, or replication. These operations work well for small data sets but are bound to break at scale. This raises the question: why not contain the problem by targeting only the incremental changes to the data?
Here is what you need to do:
1. Stop trying to fit all of your data in a single box, even a large one. Instead, distribute the data set among many small boxes. This avoids a massive migration project when even that super box fills up. As data accumulates, you simply add more small boxes, and the data is distributed so that these smaller units can fail or be retired without risking data loss. Clustered file systems took a first stab at this problem, but they still need to be backed up. The object stores that power cloud storage are the culmination of this design path and require no backup.
2. Stop making backups. Instead, take snapshots that capture only the incremental changes to the data. The problem with backups is that they require periodically capturing a full copy of the data, and any restore must go back to that baseline before the incrementals can be applied.
3. Stop trying to replicate the entire data set. Instead, distribute only the changes. Moving large data sets across the WAN is slow, error prone, and expensive, so focus on distributing only the changes to your data. WAN optimizers do this inefficiently because they are limited to capturing changes at the network protocol level, and until recently storage systems could only perform wholesale replication or nothing. A much more efficient approach is to cache at the storage layer by gathering intelligence about incremental changes to the data at the file system level.
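To make step 1 concrete, here is a rough sketch (not Nasuni's actual implementation) of how many small boxes can share one data set. The `HashRing` class and node names are hypothetical; the technique is standard consistent hashing, which is what lets you add a box without migrating the whole data set, since only a fraction of the keys move to the new node.

```python
import hashlib
from bisect import bisect

class HashRing:
    """Hypothetical sketch: map object keys onto a ring of small
    storage nodes via consistent hashing, so adding or retiring a
    node moves only a fraction of the objects."""

    def __init__(self, nodes, vnodes=100):
        # Each node gets many virtual points on the ring to even out load.
        self.ring = []  # sorted list of (hash, node) points
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{node}:{i}"), node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # An object lives on the first node point at or after its hash,
        # wrapping around the ring.
        idx = bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["box-a", "box-b", "box-c"])
owner = ring.node_for("photos/turtle.jpg")  # deterministic placement
```

Because placement is derived from the key itself, there is no central "flat plate" index that has to fit in one box.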
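Steps 2 and 3 share one mechanism: fingerprint the data once, then capture and ship only the blocks that changed. The sketch below, with hypothetical helper names, illustrates the idea with fixed-size block hashes; real systems refine this with content-defined chunking and metadata, but the principle of an incremental snapshot that doubles as a replication delta is the same.

```python
import hashlib

BLOCK = 4096  # assumed fixed block size for illustration

def block_hashes(data, block=BLOCK):
    # Snapshot baseline: fingerprint each fixed-size block of the file.
    return [hashlib.sha256(data[i:i + block]).hexdigest()
            for i in range(0, len(data), block)]

def delta(old_hashes, new_data, block=BLOCK):
    # Compare the new version against the previous snapshot's fingerprints
    # and return only the changed blocks -- the "incremental" that gets
    # snapshotted locally and replicated across the WAN.
    changed = {}
    for i in range(0, len(new_data), block):
        idx = i // block
        chunk = new_data[i:i + block]
        if idx >= len(old_hashes) or \
                hashlib.sha256(chunk).hexdigest() != old_hashes[idx]:
            changed[idx] = chunk
    return changed

v1 = b"a" * 8192                      # two identical blocks
v2 = b"a" * 4096 + b"b" * 4096       # only the second block differs
snapshot = block_hashes(v1)
changes = delta(snapshot, v2)        # only block 1 needs to be shipped
```

Each snapshot only records what `delta` returns, so neither snapshot size nor replication traffic grows with the full data set, only with the rate of change.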
You get the flow. It’s incrementals, incrementals all the way down. ESG’s Steve Duplessie recently wrote a fire-and-brimstone open letter encouraging storage vendors to begin thinking outside of their hardware boxes and consider the severe disruption heading their way, fueled by customer frustration with file growth and by the storage services that can turn all that pain into an ironclad SLA. Storage services are not just the old storage gear managed by someone else; they are incremental-all-the-way-down technology that grows with your data. It is not just scalable. It is growth proof. It can even carry the world on its back.
Image source: photobucket – turtles-1-1-1.jpg
Andres Rodriguez brings to Nasuni the energy and experience of a visionary entrepreneur with several successful companies behind him. He oversees sales, marketing and strategic partnership development—and is the face of Nasuni.
Nasuni is Cloud NAS, a complete storage solution that leverages the cloud as a primary storage component built into a unified, high-performance storage system. See how it works in this short video.