
Testing the Clouds Part II – Performance

In an earlier post we discussed how Nasuni monitors the availability and reliability of our cloud storage partners through Cloudping.  We also developed another tool, Cloudbench, that determines the strengths and weaknesses of each cloud from a performance standpoint.

This helps us ensure that the Filer—and the Cloud Connector piece that communicates with the storage providers—is tuned to each vendor.  Intelligent algorithms enable the system to adjust on the fly, matching the load that each vendor can take at any given point.

Finally, Cloudbench allows us to push our partners to improve their performance.  In one case, we used it to show a vendor that their API was lagging relative to our other partners, so the company developed a new one, with far better results.  (See spider graph below.  Relative to the older API, the improvement is 2x to 5x across the board.)

Before and After

So, how does it work?  Cloudbench runs tests from multiple locations to multiple clouds.  We read, write, and delete a range of file sizes, from very small to very large, with a range of thread counts.  Clouds scale up beautifully; they are designed to be used by millions of people, so it would not make sense to write a file, wait, write another, wait again, etc.  Our storage partners deliver the best performance when you run a number of processes in parallel, instead of one after the other.
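To see why parallel requests matter, here is a minimal sketch of the write-wait-write pattern versus a threaded one. The `upload` function is a stand-in that simulates a single cloud PUT with fixed round-trip latency; the real Cloudbench tests against live provider APIs.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def upload(obj_id, latency=0.05):
    """Stand-in for one cloud PUT: simulates fixed round-trip latency."""
    time.sleep(latency)
    return obj_id

def sequential_writes(n_ops):
    """Write one object, wait, write another -- the slow pattern."""
    start = time.monotonic()
    for i in range(n_ops):
        upload(i)
    return time.monotonic() - start

def parallel_writes(n_ops, threads=8):
    """Issue writes from a pool of threads so requests overlap in flight."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(upload, range(n_ops)))
    return time.monotonic() - start

if __name__ == "__main__":
    n = 16
    print(f"sequential: {sequential_writes(n):.2f}s")
    print(f"parallel:   {parallel_writes(n):.2f}s")
```

Because each request spends most of its time waiting on the network rather than the CPU, the threaded version finishes several times faster even though it does the same work.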

Still, this is not simply a case of the more, the better.  We use Cloudbench to determine the optimal thread count for each file size with each vendor.  We start with a handful of threads, then ramp the count up step by step.  At each size, and each thread count, we look at how each vendor performs, measuring reads, writes, and deletes.
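The sweep itself can be sketched in a few lines: time a fixed batch of operations at each thread count and keep the count that gives the best throughput. The `fake_upload` function below is a latency-only stand-in, and the specific counts tried are illustrative, not Nasuni's actual parameters.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_upload(obj_id, latency=0.01):
    """Stand-in for one cloud operation with fixed round-trip latency."""
    time.sleep(latency)
    return obj_id

def measure_throughput(op, n_ops, threads):
    """Run n_ops operations with the given thread count; return ops/sec."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(op, range(n_ops)))
    return n_ops / (time.monotonic() - start)

def optimal_threads(op, n_ops=32, counts=(1, 2, 4, 8, 16)):
    """Sweep thread counts and return the one with the best throughput."""
    results = {t: measure_throughput(op, n_ops, t) for t in counts}
    return max(results, key=results.get)
```

In a real sweep you would repeat this per file size and per vendor, since (as the post notes) the optimum differs across both.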

Next, we compare the results.  With Cloudbench, we can see which vendors are ideal for particular file sizes.  Some clouds do really well with multiple sizes, while others don’t really hit the right benchmarks until you get up to larger files.

At some point, we hope to be able to use these findings to suggest certain vendors based on a customer’s expected usage patterns.  Given a corpus of data, Nasuni could recommend the provider(s) that would deliver optimal performance.  If your company deals mostly with tiny chunks of data, we might recommend Provider A, but if you deal primarily with massive files, we might suggest a different vendor.
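A recommendation like that could be as simple as weighting each vendor's measured throughput by the customer's file-size mix. The provider names and throughput numbers below are made up purely for illustration; the scoring uses a weighted harmonic mean, since total transfer time is what matters.

```python
# Hypothetical benchmark results (MB/s) by size bucket -- illustration only.
THROUGHPUT_MBPS = {
    "provider_a": {"small": 40, "medium": 60, "large": 70},
    "provider_b": {"small": 10, "medium": 55, "large": 120},
}

def recommend(workload):
    """Pick the provider with the best effective throughput for a workload.

    workload maps each size bucket to its fraction of total bytes
    (fractions should sum to 1.0).
    """
    def effective_rate(provider):
        rates = THROUGHPUT_MBPS[provider]
        # Weighted harmonic mean: time per byte, weighted by workload share.
        return 1.0 / sum(share / rates[bucket]
                         for bucket, share in workload.items())
    return max(THROUGHPUT_MBPS, key=effective_rate)
```

With these made-up numbers, a workload dominated by small files favors `provider_a`, while one dominated by large files favors `provider_b`, matching the Provider A / different-vendor scenario in the paragraph above.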

For now, though, we are using the tool to optimize the Filer, and to encourage some of our partners to improve their numbers—as described above.

The Nasuni Filer is our business—we stressed this point in the last post, too.  But if we can use our monitoring tools to push the clouds to evolve better practices, the Filer and, in turn, our customers, will only benefit.