Architecting for S3 at the Edge: Doing it Right

Ryan Miller discusses how Nasuni’s S3 at the edge delivers full SMB and NFS support with built-in data protection and cloud-native scale.

August 21, 2025  |  Ryan Miller

Before the industry-wide AI hype took off in 2024, Nasuni quietly introduced S3 at the edge back in 2023 — and the reaction? Honestly, it was pretty meh.

At first glance, I get it. There are plenty of options for S3 at the edge. But once you peel back the layers and examine Nasuni’s implementation — built on its proven cloud-native architecture — some distinct advantages begin to emerge. Especially when the challenge is S3 at the edge with multi-protocol support.

That’s exactly what I want to dig into in this post.

But First: Why S3?

Traditional NAS protocols like SMB and NFS have served the industry reliably for decades. They’re mature and dependable, like a well-worn pair of jeans. At least, as long as it’s a human talking to a computer.

However, with the advent and adoption of cloud, a new method of communication became necessary — one that could keep pace with machines talking to machines. From access methods (API calls) to use cases (application-driven storage), and down to nuances like statefulness and collaboration capabilities, S3 represents a fundamentally different approach to interacting with storage. It’s purpose-built for handling large objects, on a massive scale, in a programmatic way. In short, traditional NAS protocols struggled with machine-to-machine communication, and S3 was designed to solve that problem.
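To make the contrast concrete, here is a toy in-memory sketch of S3-style semantics: a flat key namespace, whole-object PUTs, and an ETag per write, with no open/seek/lock state to manage. This is purely illustrative (real S3 access goes through an SDK such as boto3); the class and key names are invented for the example.

```python
import hashlib

class ObjectStore:
    """Toy sketch of the S3 object model -- not a real client."""

    def __init__(self):
        self._objects = {}  # key -> bytes; a flat namespace, not a directory tree

    def put_object(self, key: str, body: bytes) -> str:
        # Each PUT replaces the whole object and yields an ETag (content hash).
        self._objects[key] = body
        return hashlib.md5(body).hexdigest()

    def get_object(self, key: str) -> bytes:
        return self._objects[key]

    def list_objects(self, prefix: str = "") -> list[str]:
        # "Folders" are just key prefixes; there is no real hierarchy.
        return sorted(k for k in self._objects if k.startswith(prefix))

store = ObjectStore()
etag = store.put_object("renders/scene-001.png", b"...pixels...")
print(store.list_objects("renders/"))
```

Note what is absent: no file handles, no byte-range locks, no directory traversal. That statelessness is what makes the model a natural fit for machines talking to machines at scale.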

So, why would we want multi-protocol support for traditional NAS protocols as well as S3? Consider a few examples:

  • Architects and engineers often work with programs like Revit or AutoCAD, which use SMB/NFS. Meanwhile, cloud-based rendering engines or visualization tools use S3 to ingest and process those files at scale.
  • A video surveillance system or IoT platform writes footage or logs via S3, but security teams or operators want to review said footage via mapped drives.
  • Lab techs save large files to a network share via SMB/NFS, but an AI/ML pipeline pulls the data into the model via S3.
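In all three cases, the same file needs a stable identity under both protocols. One way to picture that is a simple mapping from a Windows share path to the S3 key the file would carry in a shared namespace. This is a hypothetical layout (the server and share names are invented), not a description of any vendor's actual key scheme:

```python
from pathlib import PureWindowsPath

def share_path_to_key(unc_path: str, share_root: str) -> str:
    """Map a UNC share path to an S3 object key: the path relative to the
    share root, with forward slashes. Hypothetical dual-protocol layout."""
    rel = PureWindowsPath(unc_path).relative_to(PureWindowsPath(share_root))
    return "/".join(rel.parts)

# The Revit file a designer saves over SMB...
key = share_path_to_key(r"\\nas01\projects\site-a\model.rvt", r"\\nas01\projects")
# ...is the object a cloud rendering job would GET by key.
print(key)
```

The point of the sketch: multi-protocol access is only useful if both sides see one namespace, which is exactly where bolted-on approaches tend to fall apart.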

Now, let’s take a look at how S3 at the edge with multi-protocol support is typically implemented across the industry. Approaches generally fall into three categories: starting with S3 and then layering on SMB or NFS; beginning with SMB or NFS and later adding S3; or using a first-party S3 gateway solution.

Start with S3, Then Add SMB/NFS

Most products that begin with S3 and later add SMB or NFS tend to deliver limited and incomplete results. Because they don’t natively support traditional NAS protocols, they often depend on external gateways or third-party integrations added onto an object store, without the deeply integrated file system semantics that enterprise environments require. As a result, key features are frequently missing or poorly supported, including fine-grained NTFS permissions, Active Directory integration, robust ACL enforcement, and file locking, to name a few.

In short, while these solutions may perform well in terms of scale and object storage, they often fall short in delivering the reliable, enterprise-grade file access needed for multi-protocol edge use cases.

Start with SMB/NFS, Then Add S3

Another common approach in the industry is to take a well-established NAS platform and add S3 functionality on top. This can enable features like deep Active Directory integration and robust ACL support. However, because the underlying storage architecture is built around serving a traditional file system, it is not designed for the flat, massively scalable, and stateless nature of object storage.

True object stores rely on technologies such as sharding and erasure coding to deliver durability and scalability that go far beyond what traditional RAID or LUN groups can offer. The added-on S3 interface is often limited in capacity, comes with higher per-terabyte costs, may not match the high durability metrics of a true object storage platform, and may suffer from performance bottlenecks under heavy, API-driven workloads.
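For intuition on why erasure coding beats simple replication, here is a minimal single-parity sketch (RAID-5-style XOR). Real object stores use Reed-Solomon codes that tolerate the loss of multiple shards, but the reconstruction idea is the same:

```python
def xor_parity(shards: list[bytes]) -> bytes:
    """One parity shard: the byte-wise XOR of all equal-length data shards."""
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    # XOR of the parity with the surviving shards reconstructs the lost shard.
    return xor_parity(surviving + [parity])

data = [b"abcd", b"efgh", b"ijkl"]   # three data shards of one object
p = xor_parity(data)
# Lose the middle shard, then rebuild it from the other two plus parity.
rebuilt = recover([data[0], data[2]], p)
```

With k data shards and m parity shards, the store survives m simultaneous losses at a storage overhead of m/k, versus the 2x-3x overhead of full copies — durability math a RAID or LUN-backed NAS simply is not built around.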

As a result, these solutions are often less suitable for cloud-native applications or edge environments that need to ingest and manage large volumes of object data.

First-Party S3 Gateway

These solutions may be backed by object storage in the cloud, offering the durability and scalability benefits that object storage provides. On the front end, they often include some form of SMB or NFS support, but it tends to be limited. For instance, NTFS permissions might be enforced, but many of the advanced NTFS ACLs available on a Windows server are missing. Similarly, for NFS, features like POSIX ACLs or extended attributes are often not supported.

In addition, many of these solutions offer minimal data recovery features. In the event of data loss, the burden falls on the customer to manually search through versions to restore files. This might be manageable in a simple file deletion scenario but becomes unworkable in a ransomware attack where tens of thousands of files need to be restored.
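The manual recovery those gateways leave you with looks roughly like this: for every affected key, walk the version history and keep the newest version written before the attack. The in-memory model below is hypothetical (against real S3 this would mean paging through `list_object_versions` and copying per key); it is trivial for two files and grinds for tens of thousands:

```python
from dataclasses import dataclass

@dataclass
class Version:
    body: bytes
    timestamp: int  # write time, e.g. epoch seconds

def restore_point(history: dict[str, list[Version]], attack_time: int) -> dict[str, bytes]:
    """For each key, select the newest version written before attack_time."""
    restored = {}
    for key, versions in history.items():
        clean = [v for v in versions if v.timestamp < attack_time]
        if clean:
            restored[key] = max(clean, key=lambda v: v.timestamp).body
    return restored

history = {
    "finance/q3.xlsx": [Version(b"good", 100), Version(b"encrypted!", 200)],
    "hr/policy.docx": [Version(b"v1", 90), Version(b"v2", 150), Version(b"encrypted!", 210)],
}
recovered = restore_point(history, attack_time=180)
```

Even this idealized loop assumes you know exactly when the attack began and which keys were touched; in practice neither is given, which is why per-file version archaeology does not scale to a ransomware event.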

The main goal for object storage providers in developing an S3 gateway is to make it easier for organizations to move data into S3. Building out fully supported NAS protocols is often treated as an afterthought, with little or no regard for things like Recovery Point Objectives (RPOs), Recovery Time Objectives (RTOs), and Mean Time To Recovery (MTTR) following an attack.

How Nasuni Does It Differently

Nasuni breaks from the pack by doing things the right way from the start. It was built natively on object storage, with full protocol support — including SMB, NFS, and now S3 — delivered directly at the edge. This architecture allows Nasuni to combine the durability and infinite scale of object storage with a true global file system that delivers rich file semantics where users need them most.

Unlike bolt-on solutions, Nasuni fully supports NTFS and POSIX permissions, provides robust file locking for real-time collaboration, and enables seamless multi-site access through a single global namespace. Data protection is built in, with automated backup, ransomware mitigation, and fast recovery, so there is no need for third-party tools to achieve down-to-the-minute RPOs, RTOs, and MTTR.

The result is a truly unified (or NAS-UNI-fied) platform that delivers cloud-native scale and efficiency while maintaining enterprise-grade file access across SMB, NFS, and S3.

Choosing Without Compromise

There’s rarely a one-size-fits-all solution. The right approach depends on the specific problem you’re trying to solve. In some scenarios, S3 may be the primary requirement, with SMB or NFS access serving as a secondary need. In others, traditional file access via SMB or NFS might take precedence, with only limited object storage needs.

While a single use case might fit neatly into one model, most organizations deal with a variety of workloads and requirements. That’s where we stand out, eliminating the need to choose between protocols. With Nasuni, you don’t have to compromise. You get full-strength support for S3, SMB, and NFS in one unified solution, tailored to support all your edge and cloud-native use cases.

Tech: Distilled gets to the heart of today’s file data challenges without the fluff. In this series, Ryan Miller, Senior Solutions Architect at Nasuni, unpacks complex technical concepts with sharp insight and real-world relevance. From data security and file locking to the building blocks of a unified file data platform, it’s the kind of practical knowledge that sticks. If you want your tech smart, clear, and just a little bold, this series is for you.
