The 4 Requirements of a Cloud-Era File System

February 08, 2018 | Anne Blanchard

File systems aren’t going anywhere. They’re a time-tested, proven way to organize, store, protect, and share unstructured data. But file systems must evolve.

In the block era of the 1990s, their main purpose was to store and organize data. When NAS revolutionized the storage world, file systems and protocols like NFS and WAFL managed a wider variety of file types, and they were tasked with protecting that data, too. Next, we moved into the scale-out era, when innovative file systems like Isilon OneFS could store, organize, protect, and efficiently scale unstructured data across clusters.

Yet the scale-out era didn’t last very long. A massive increase in the size and number of files stretched these legacy, controller-based file systems past their limits. Meanwhile, enterprises have developed a whole new set of demands. Teams in different parts of the world need to efficiently collaborate on the same files. Offices on different continents must operate as if they’re neighbors. End users want to be able to access their files from anywhere. And companies expect all their files to be protected and easily recoverable.

Object storage – the new limitless pools of capacity now offered by leading private (on-premises) and public cloud storage providers – makes all this possible. But not on its own. And legacy file systems, being designed for traditional hardware, cannot leverage the strengths of object storage. We’re in the cloud era, and what enterprises need now is a cloud-era file system.

Storage Switzerland analyst George Crump and I had a fascinating conversation on this topic as part of a recent webinar. The on-demand video offers a deeper dive, but here’s a quick summary of what we see as the 4 requirements of a cloud-era file system.

1. Global File Access

This is an absolute must for distributed or global enterprises. Whether the end users are engineers sharing Revit models, designers sharing CAD drawings, creative professionals sharing Adobe Illustrator, Photoshop, and InDesign files, or financial analysts sharing Office docs, they all need to be able to access their files from any location, at any time. In this age of globalization, companies must be able to leverage all their talent, regardless of location.

2. Independent Scaling of Capacity and Performance

Private and public cloud storage offers enterprises the chance to scale capacity without ever having to undergo another expensive hardware refresh, transforming storage from a capital expense into an easier-to-provision, pay-as-you-grow operational one. But an effective cloud-era file system has to allow both performance and capacity to scale independently. As capacity scales in object storage, performance must scale at the edge, and not through a one-size-fits-all approach that places the same full-sized hardware at every location. The physical or virtual appliances that manage file access and performance at each location must be matched to the performance and capacity needs of that location.
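To make the independent-scaling idea concrete, here is a minimal, hypothetical sketch of sizing edge appliances from each location's working set rather than from total capacity. The location names, thresholds, and sizing rules are illustrative assumptions, not any product's actual provisioning logic.

```python
# Hypothetical sketch: edge sizing depends on each site's hot working set,
# while total capacity lives in object storage and grows independently.
# None of these names correspond to a real product API.

locations = {
    "boston":    {"active_users": 400, "hot_data_gb": 2_000},
    "singapore": {"active_users": 60,  "hot_data_gb": 300},
    "berlin":    {"active_users": 150, "hot_data_gb": 800},
}

def size_edge_appliance(active_users: int, hot_data_gb: int) -> dict:
    """Pick an edge cache size from the active working set, not total capacity."""
    cache_gb = max(hot_data_gb * 2, 500)   # cache the hot set with some headroom
    vcpus = max(2, active_users // 50)     # rough per-user performance budget
    return {"cache_gb": cache_gb, "vcpus": vcpus}

for name, stats in locations.items():
    print(name, size_edge_appliance(**stats))
```

The point of the sketch is simply that adding petabytes to the object store changes nothing above: only a site's users and hot data drive the size of its edge appliance.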

3. Low-Latency Cloud Integration

The key to a successful cloud-era file system is to integrate with object storage without introducing latency. Constantly going to the cloud for files would be slow, and would impact users and the business. Instead, a cloud-era file system should leverage local appliances, which could be physical or virtual, to cache frequently accessed files. This not only maintains high levels of performance, but also enables enterprises to maintain control over security. If files are accessed locally, then the file system can integrate with standard Active Directory and LDAP-based access controls, encrypt data before it moves to object storage, and generally ensure high performance and strong security.
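Here is a minimal sketch of that edge-caching pattern, assuming a generic object store that can be read and written by key. The class, the LRU eviction policy, and the XOR "encryption" placeholder are illustrative assumptions, not a real file system's implementation.

```python
# Minimal sketch of edge caching in front of object storage.
# The object store is any mapping-like interface: get/put bytes by key.
from collections import OrderedDict

class EdgeCache:
    """Serve hot files from a local LRU cache; go to object storage only on a miss."""

    def __init__(self, object_store, capacity: int = 1000):
        self.object_store = object_store
        self.capacity = capacity
        self._cache = OrderedDict()           # path -> file bytes, ordered by recency

    def read(self, path: str) -> bytes:
        if path in self._cache:               # cache hit: no round trip to the cloud
            self._cache.move_to_end(path)
            return self._cache[path]
        data = decrypt(self.object_store[path])  # miss: fetch and decrypt once
        self._remember(path, data)
        return data

    def write(self, path: str, data: bytes) -> None:
        self._remember(path, data)               # local write is immediately readable
        self.object_store[path] = encrypt(data)  # encrypt before data leaves the edge

    def _remember(self, path: str, data: bytes) -> None:
        self._cache[path] = data
        self._cache.move_to_end(path)
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)   # evict the least recently used file

def encrypt(data: bytes) -> bytes:
    """Placeholder for real at-rest encryption; illustrative only, not real crypto."""
    return bytes(b ^ 0x5A for b in data)

def decrypt(data: bytes) -> bytes:
    return bytes(b ^ 0x5A for b in data)      # XOR placeholder is its own inverse
```

Usage is as simple as `cache = EdgeCache({})`, then `cache.write(...)` and `cache.read(...)`; repeat reads of the same path never leave the appliance, which is where the latency savings come from.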

4. Near-Zero RPOs and RTOs

Finally, enterprises want near-zero Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs). Both are possible with cloud-era file systems. Since the file system scales in object storage, and not on local hardware controllers, an unlimited number of immutable, off-site versions of every file can be securely preserved, giving enterprises powerful protection against ransomware and other attacks. Because all versions are stored and managed under one global file system with caching at the edge, recent copies can be restored in minutes.
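As a rough illustration of how immutable versions in object storage enable point-in-time recovery, here is a hypothetical sketch against a generic key/value store. The key scheme and helper names are assumptions for illustration, not how any particular product stores versions.

```python
# Sketch of immutable versioning: every save writes a new key, nothing is
# overwritten, so any earlier point in time can be restored.
import time

store = {}   # stands in for the object store: key -> bytes

def save_version(path: str, data: bytes) -> str:
    key = f"{path}@{time.time_ns()}"     # each version gets its own immutable key
    store[key] = data
    return key

def restore(path: str, as_of_ns: int) -> bytes:
    """Point-in-time recovery: the newest version at or before the requested time."""
    prefix = f"{path}@"
    candidates = [int(k[len(prefix):]) for k in store
                  if k.startswith(prefix) and int(k[len(prefix):]) <= as_of_ns]
    if not candidates:
        raise FileNotFoundError(f"no version of {path} at or before {as_of_ns}")
    return store[f"{prefix}{max(candidates)}"]
```

Because a ransomware attack only ever adds new (encrypted) versions under this scheme, restoring to a timestamp just before the attack recovers the clean copies.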

As Crump and I discuss, object storage in a private or public cloud is the ideal way to solve the file growth challenges of modern enterprises and address their evolving business needs. But something else is needed between enterprises and their cloud storage to meet the needs of IT, users, and the business. What’s needed is a cloud-era file system. We review this in more detail in the white paper – check it out and let us know what you think.

