Almost a decade ago, in a disruptive 2011 WSJ piece, Marc Andreessen announced, “Software is Eating the World.” It was the cry that launched a thousand software startups and the beginning of the end for the juggernauts that had dominated the data center with their highly evolved hardware boxes.
Today, technology giants like HP and EMC are no more. Their aging product lines have long been under attack from every software startup wanting a piece of their market share. Their gleaming hardware boxes, once the pride and joy of the technology giants, have become an encumbrance, an obstacle preventing those tech companies and their customers from taking full advantage of the cloud. Infrastructure, with its dependency on “big iron,” has been a hard nut to crack. But in the next decade the data center as we know it will cease to exist.
Software is eating the data center. A cloud-only strategy—something more and more CIOs are advocating—requires an all-software data center. The cloud wants nothing to do with custom, proprietary, expensive hardware-based infrastructure. Any storage, network, or security tool that cannot be run as a service or as a virtual machine has become obsolete in this cloud-first world.
So how did this happen? What does it mean for organizations trying to execute a cloud-only strategy in the coming years? In this post I plan to address these questions, and how they will drive the future of Nasuni technology.
How We Got Here
Over the last decade, virtualization, fueled by the unstoppable power of Moore’s Law, contributed more than any other technology to turning every server into its software equivalent, the virtual machine. That transformation set the stage for the mass migration of virtual machines to the cloud. A scalable, rock-solid infrastructure for running virtual machines is now a pillar of any cloud infrastructure. This made it possible for any server that can fit in a virtual machine to move to the cloud.
Two things can make this journey harder than advertised. The first: servers that are larger than what a VM can accommodate, especially in the cloud, where many of the customizations available to on-premises deployments are sacrificed for the sake of optimizing the environment as a whole. The second: guaranteed SLAs for the performance of those VMs. Surprisingly enough, security turned out to be mostly a non-issue as software-based security stacks migrated to the cloud. A few industries have remained stuck in the fantasy that is private cloud, notably financial services and health care, because (a) they can afford to and (b) someone terrifyingly high up in the organization has issued the mandate that, “We are just not going to the cloud.”
But what to do about performance or servers that don’t fit? The simplest way to make something big fit into something small is to split up the big thing and distribute it among the small containers. That is exactly what happened. File systems are notoriously large things in the data center. Depending on your business, they can be the largest single data structure in the data center.
I often tell our clients that the magic of UniFS®, our cloud-native global file system, is that it takes the file system out of the VM, our Nasuni Edge Appliances, and distributes it into the object layer that is cloud storage (e.g., AWS S3 or Azure Blob Storage). That’s how UniFS avoids crowding your data center or local VM. A typical single instance of UniFS “lives” distributed across thousands of object-store servers, in the cloud, where capacity is virtually limitless and cheap. We have seen the same approach in databases. All of the major transactional databases and analytics engines (e.g., Amazon Aurora, IBM Cloudant) can now be distributed among query-crunching servers capable of splitting requests and aggregating results. We have all pretty much spent the last decade working on how to get that right, and it’s all mature technology now.
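To make the idea concrete, here is a minimal sketch (not Nasuni’s actual implementation, just the general pattern) of how a large file can be split into chunks and stored in an object store, with each chunk keyed by its content hash and a small manifest recording the order. The in-memory `ObjectStore` class is a stand-in for a real service like S3 or Azure Blob Storage:

```python
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; real systems use megabyte-scale chunks


class ObjectStore:
    """Stand-in for a cloud object store such as S3 or Azure Blob."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]


def write_file(store, data):
    """Split data into chunks, store each chunk under its content hash,
    and return a manifest: the ordered list of chunk keys."""
    manifest = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        key = hashlib.sha256(chunk).hexdigest()
        store.put(key, chunk)  # identical chunks deduplicate for free
        manifest.append(key)
    return manifest


def read_file(store, manifest):
    """Reassemble the file by fetching chunks in manifest order."""
    return b"".join(store.get(key) for key in manifest)


store = ObjectStore()
manifest = write_file(store, b"hello cloud world")
assert read_file(store, manifest) == b"hello cloud world"
```

Because the store holds only fixed-size, content-addressed objects, no single server ever needs to hold the whole file system; capacity scales with the number of object servers behind the store.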
Performance turned out to be a devil, but there are reasons to be optimistic as we race into this new decade. The first wave of performance SLAs came from being able to guarantee raw power into the VMs. This is tricky—actually, very tricky at scale in multi-tenant environments—but not impossible to do. We are pretty much there now, and all major cloud providers have offerings that guarantee a certain level of performance to the VMs. Those levels may not be what you need today, but cloud providers understand that this is an important competitive advantage (and a great way to make money) so the bar is rising rapidly.
But there is a more pernicious problem with performance having to do with physics and the speed of light.
Desks, Chairs, and Cheetos
Towards the end of the last decade we started to see some of the largest companies in the world commit not only to the cloud, but to a cloud-only strategy. “We want out of the data center business altogether”—that’s the mantra rapidly spreading through the top echelon of global companies. With it comes the expectation that data will be available globally, fast. The problem when you take a simplistic lift-and-shift approach to the cloud is that your end users are not going to the cloud. End users still need desks, chairs and Cheetos. They live in the physical world. Any application that is sensitive to latency will very quickly deliver a bad, even terrible, user experience as the distance to the servers in the cloud increases—we can’t outpace the speed of light. Any sort of direct access to files falls squarely in this category, and a major use for our Nasuni Edge Appliance is to keep that file access point close to the end users, on-premises, while the file system lives in the cloud.
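The physics here is easy to check with back-of-the-envelope arithmetic. Light in fiber travels at roughly two-thirds the speed of light in vacuum, or about 200 km per millisecond, which sets a hard floor on round-trip time regardless of how fast the servers are (the numbers below are illustrative, not measurements):

```python
# Approximate one-way speed of light in optical fiber: ~200 km per millisecond.
FIBER_KM_PER_MS = 200


def min_round_trip_ms(distance_km):
    """Physical lower bound on round-trip time over fiber.
    Real-world latency is higher due to routing, queuing,
    and protocol overhead."""
    return 2 * distance_km / FIBER_KM_PER_MS


# A user 4,000 km from the cloud region can never see a round trip
# under ~40 ms, no matter how fast the servers are.
print(min_round_trip_ms(4000))  # 40.0
```

A chatty file protocol that needs dozens of round trips per operation multiplies that floor accordingly, which is why latency-sensitive file access either stays close to the user or moves the desktop itself into the cloud.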
Another option is to move the end-user desktops to the cloud (aka VDI), so that they can be right on top of the servers in the cloud. There is a pretty good chance that this will finally be the decade of VDI. The large public cloud providers are solving the two major obstacles that have held VDI back in the past: performance SLAs and a true global footprint that allows your VDI farm to be everywhere it needs to be so that end users are never more than 20 ms from their virtual desktops. The cloud data center is not just a data center run by someone else (the MSP model). Cloud is a global architecture for the data center that spans the world with its many tentacles. Andy Jassy’s announcement of AWS Outposts at last year’s re:Invent is a strong sign of what’s coming.
Cloud will be where your business needs to be with a common architecture tying all your instances into one.
The Resistance is No More
We have seen a dramatic increase in demand for all-cloud deployments where VDIs front the end users in different regions and UniFS, our global file system, is used to synchronize the files across all of these regions. With this architecture, speed-of-light limitations will no longer hinder the wide adoption of VDI. That is why every major provider has an aggressive VDI-in-the-cloud offering. I was looking through some old drafts of stories we never published, and I wrote this sentence nearly seven years ago:
“Cloud storage heralds a new class of storage systems that can serve as the backbone for true global data synchronization, while being able to absorb vast amounts of data on demand.”
I never published that piece, in part because the market wasn’t ready for this notion of global data synchronization via the cloud. In those years, many businesses were still scared of the cloud. They were hesitant to move their unstructured data and apps to the object store. Over the years, this resistance has waned.
Now it’s gone.
When cloud becomes THE choice, the first thing that occurs to any sensible organization is that they need at least a couple of options. When cloud was an optional strategy reserved for the ultra-cool millennials who run DevOps, it was possible to indulge in relying on a single provider. Cloud services have become too mission-critical for that to still be the case. Instead, every large organization seeks a two- or three-cloud strategy.
There is now a focus on IT tools that ensure portability and simplify access to cloud services across providers. Many of our clients have their files in one provider, but are accessing those files from other clouds where the right services are available in the right locations. These cloud services are more scalable, cheaper, and easier to integrate than anything that exists in the data center today. That is not to say that every service that IT came to depend on during the age of the data center exists today in the cloud. However, a mirror world of services is materializing quickly to cover the gaps, and the new services all have cloud scale built into them.
Analytics, AI, and the Future of Nasuni
Those lucky early adopters of cloud infrastructure are no longer concerned with just storing their data and running servers. They understand that they are sitting on a potential gold mine of insights. As the old data center dissolves into the cloud, that data is becoming big data, ripe for analytics. With Nasuni 8.7, we will be launching our Analytics Connector, which will make it easy for companies to run the advanced analytics and AI tools of their choice on files and unstructured data stored in the cloud, extracting incredible value.
In 2019, we celebrated our tenth anniversary as a company, and that first decade could not have ended on a more exciting note. The shoulder-to-shoulder work we have done with our major accounts has paid off. The many hours our engineers and support teams have dedicated to stress testing our technology help our clients get more out of their files. Distributed teams have only made Nasuni better—as a technology and as a company. Yet this all comes back to our original innovation, the core idea that drove us to found this company.
We designed and built a file system for the cloud.
Andreessen was spot on. Software will eat everything. Software is destroying traditional storage companies as it devours the data center. The shift has driven the success and accelerated the growth of our company. Our file system was built for this transition. Our company is built for this next decade and beyond.
What can you expect from Nasuni in the 2020s? First of all, the basics. We will continue to deliver the fastest and most compact software-based NAS VM that taps into the unlimited storage capacity of cloud storage. Global synchronization of files is critical to our clients, and so we will continue to push the limits on the number of sites and propagation speeds. Today, UniFS is best-in-class for a global file system. It can handle around 50 sites with high levels of read/write contention. We expect the need to increase that number ten-fold in the coming years as firms move all of their operations to the cloud and start to understand and capitalize on the benefits of having a true global infrastructure. Also expect our Analytics Connector to grow into a family of tools that connect UniFS to every relevant service in the rich cloud ecosystems: search, image recognition, and analytics and visualization tools for file system metadata. We want our clients to better understand their files and be able to gain greater insights that can help their businesses. Finally, expect lots of cross-cloud operations to be enabled at scale. We already allow our clients to create UniFS volumes or instances in multiple clouds and manage them centrally. We want to take that to the next level and allow cloud-to-cloud migration of volumes at the object level, as well as volume splitting/merging and pruning of file system versions—all done at the scale of cloud.
Earlier this year I was chatting with a good friend of mine who works at one of our big iron NAS competitors. “Why are you guys not trying to build a global file system, when every one of your major accounts wants this functionality?” I asked. He answered, “It would be too painful. We have concerns about how hard it would be to roll it out and support it.”
Well, here at Nasuni we were visionary or stupid or stubborn enough to do it and I’m damn proud we did. Bring on the 2020s. We’re ready.