
Disaster Recovery with Nasuni

What if you could restore end user access to files within minutes of an outage, a system failure, or a massive storm? And what if you didn’t need an expensive DR site to make it work? This isn’t an IT pipe dream. It’s cloud disaster recovery with Nasuni Cloud File Services.

Video Transcript

In a traditional file infrastructure, customers generally give a lot of thought to disaster recovery: what are we going to do if a NAS device or a file server – or, even worse, the entire data center – goes down? They have to fail over to a co-location facility, which is an expensive, time-consuming process. It's a "plus two" strategy: you maintain duplicate infrastructure at the main site and the DR site so you can fail over. So it's expensive – having two of everything in case the first one fails. It's also time-consuming to test. People generally have to take production offline, or spend a few days testing the disaster recovery procedure to make sure it works, and it requires a failover to the co-lo and then a failback to production when you're done. It's something people don't enjoy preparing for or spending money on, but it's often necessary in a traditional file environment.

With Nasuni Cloud File Services, disaster recovery is far different. It's much more cost-effective and easier to manage. Our architecture is inherently different from traditional file infrastructure: our data, our versions, and especially our appliance configurations are located in the cloud. So if you have a site failure, or a Nasuni appliance goes down, you haven't lost data. It's simply a matter of asking, "How quickly can I connect my users back to the cloud service where their data is located?" Walking through a very quick wizard, we bring down the appliance configuration living in the cloud and apply it to a new virtual template on site. It takes about five minutes. So if you've emailed the user base to say we're having a period of downtime, you can email them 10 minutes later to say their shares are available and their data is accessible. The users will be pretty surprised.
The approach we take is to rehydrate the file system on that virtual appliance with metadata only, so the data doesn't actually have to be resident locally. The goal is to give people access to their data as quickly as possible, and then we let the users tell us what they need, repopulating the cache on demand. A user could never consume 10, 20, or 50 terabytes of data all at once, so we present the whole data set virtually and let them tell us what they need.
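To make the metadata-first idea concrete, here is a minimal Python sketch of that pattern: restore the full namespace from metadata immediately, then pull file contents into a local cache only on first access. All names here (`CloudStore`, `Appliance`) are hypothetical illustrations of the general technique, not Nasuni's actual API.

```python
class CloudStore:
    """Stands in for cloud object storage holding files and their metadata."""

    def __init__(self, files):
        self.files = files  # path -> bytes

    def list_metadata(self):
        # Metadata only: names and sizes, not file contents.
        return {path: len(data) for path, data in self.files.items()}

    def fetch(self, path):
        # Pull one file's contents from the cloud.
        return self.files[path]


class Appliance:
    """A freshly restored appliance: full namespace visible, empty cache."""

    def __init__(self, cloud):
        self.cloud = cloud
        self.metadata = cloud.list_metadata()  # fast: rehydrate names/sizes only
        self.cache = {}                        # contents pulled on demand

    def listdir(self):
        # Users see the entire data set immediately, even with an empty cache.
        return sorted(self.metadata)

    def read(self, path):
        # First access fetches from the cloud; later reads hit the local cache.
        if path not in self.cache:
            self.cache[path] = self.cloud.fetch(path)
        return self.cache[path]


cloud = CloudStore({"reports/q1.txt": b"q1 data", "reports/q2.txt": b"q2 data"})
appliance = Appliance(cloud)
print(appliance.listdir())               # whole namespace visible at once
print(appliance.read("reports/q1.txt"))  # fetched on demand, then cached
print(len(appliance.cache))              # only the accessed file is local
```

The design choice this illustrates is that restore time scales with the size of the metadata, not the size of the data set, which is why access can come back in minutes even for tens of terabytes.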