Once in a Decade, Three Times in Ten Years: Isn’t It Time to Ask a Different Question?

Hardware price spikes and supply chain shocks are becoming the new normal. Nasuni’s CEO Sam King discusses how software-defined, cloud-native architecture gives enterprises the structural resilience hyperscalers already have.

April 29, 2026  |  Sam King

Something has been building in the industry press for months, and last week it crystallized.

Chris Mellor at Blocks & Files published an open letter from the CEO of Everpure to its customers. I encourage you to read it. It’s transparent, honest, and frankly, it takes courage to write. I respect it.

But it also made me think hard about what we’re all really saying when we talk about this crisis.

The third “once-in-a-decade” supply chain crisis in ten years

The letter describes input costs rising between 300% and 900% since mid-2025. DRAM up. NAND up. SSD lead times stretching past 40 weeks. And the framing is familiar: a “once-in-a-decade” supply chain disruption.

And it’s not just one company sounding the alarm. The Everpure letter sits alongside months of reporting from The Register, Network World, and IDC telling the same story from different angles. The Register puts DRAM contract prices up 90-95% quarter-over-quarter in Q1 2026, with NAND flash up another 55-60% in the same window. Network World captured Samsung’s own president telling Bloomberg this is “an industry-wide reality.” IDC has been even more blunt: “this signals the end of the era of cheap, abundant memory and storage, at least in the medium term.”
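For a sense of how fast those quarter-over-quarter figures compound, here is a short illustrative calculation. The percentages are the reported ranges above; treating them as sustained over two quarters is a simplifying assumption for illustration, not a forecast.

```python
# Illustrative compounding of quarter-over-quarter price increases.
# The 90-95% figures are the DRAM ranges reported above; the number
# of quarters is an assumption made for illustration only.

def compounded_increase(qoq_pct: float, quarters: int) -> float:
    """Total percent increase after `quarters` periods of `qoq_pct` growth each."""
    return ((1 + qoq_pct / 100) ** quarters - 1) * 100

for q in (90, 95):
    print(f"+{q}%/qtr over 2 quarters -> +{compounded_increase(q, 2):.0f}% total")
# Two quarters at 90% is +261%; at 95% it is +280% -- already at the
# low end of the 300-900% input-cost range the open letter describes.
```

The point of the sketch: increases that sound incremental quarter to quarter land squarely in "several hundred percent" territory within two quarters.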

When the analysts, press, and CEOs of the largest hardware vendors are all describing the same picture, it stops being noise. It’s the signal.

Here’s the thing. The letter itself notes this is now the third such disruption in a decade. COVID in 2021. Liberation Day tariffs in 2025. AI-driven semiconductor scarcity in 2026.

At what point does a “once-in-a-decade” event happening three times in ten years stop being an anomaly and start being the new normal?

Why software-defined architecture changes the equation

I don’t ask that to score points. I ask it because I think it’s the right strategic question for every CIO, CDO, and infrastructure leader to contemplate. When the competitive differentiation being offered by hardware-based storage vendors is the degree to which they’re raising prices on you — whether 70% or something more — maybe it’s time to ask whether hardware-dependent architectures are the right design point going forward.

The letter cites software capabilities as a key buffer against hardware cost pressures. I agree with the logic completely. Software is the answer. Nasuni takes that conclusion to its logical endpoint. We are entirely software-defined, running on standard virtual infrastructure, whether on-premises or in clouds like AWS, Azure, and Google Cloud. When NAND prices spike, we don’t have to absorb the cost or pass part of it on. We’re structurally insulated from it.

Why migration isn’t the blocker you think it is

Whenever I have this conversation with infrastructure leaders, this is the moment things shift. The architectural argument lands. The math on a five-year TCO holds up. And then the real question comes out: “OK, but we can’t exactly rip out our NAS this quarter.”

I get it. I’d just offer this: the fear of migration is, in my experience, the single most expensive misconception in enterprise file infrastructure. It’s the reason organizations absorb another refresh cycle, with all of its cost, disruption, and now supply chain exposure, rather than take a path that carries none of those. Our architecture was deliberately designed to make that fear unfounded. Your users don’t experience a migration. They keep working.

And you’re not doing it alone. Our Professional Services team has guided some of the world’s largest enterprises through this, across thousands of sites and petabytes of data, with a playbook refined over hundreds of deployments. The architecture removes the risk. Our team removes the uncertainty.

The point I want to leave you with is this: You don’t have to wait for a refresh to evaluate an alternative. Right now, in this market, waiting is the most expensive option on the table.
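To make "waiting is the most expensive option" concrete, here is a back-of-the-envelope sketch. Every figure is a hypothetical placeholder except the 70% spike, which echoes the price-increase figure mentioned earlier in this post; swap in your own quotes before drawing conclusions.

```python
# Back-of-the-envelope cost-of-waiting comparison. All dollar figures
# are hypothetical placeholders -- the structure (refresh capex inflated
# by a hardware price spike vs. a flat software subscription) is the
# point, not the numbers.

def refresh_cost(base_capex: float, spike_pct: float,
                 annual_opex: float, years: int) -> float:
    """Hardware refresh at spiked prices, plus ongoing opex."""
    return base_capex * (1 + spike_pct / 100) + annual_opex * years

def subscription_cost(annual_fee: float, years: int) -> float:
    """Flat software-defined subscription, insulated from hardware spikes."""
    return annual_fee * years

years = 5
hw = refresh_cost(base_capex=1_000_000, spike_pct=70,  # 70% from this post
                  annual_opex=150_000, years=years)
sw = subscription_cost(annual_fee=400_000, years=years)
print(f"5-yr hardware path: ${hw:,.0f}")   # $2,450,000
print(f"5-yr software path: ${sw:,.0f}")   # $2,000,000
```

The structural takeaway is that the hardware path's capex line moves with the spot market, while the software path's cost is contractual and flat.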

What this means for your data infrastructure

For the industries we serve (architecture, engineering, and construction; energy; media and entertainment; manufacturing), this isn’t an academic debate. These are teams collaborating on massive unstructured file datasets across distributed global sites. They need their data infrastructure to be resilient, not just to cyberattacks and outages, but to supply chain shocks they didn’t see coming.

And increasingly, they need that same data to fuel AI — not as a future roadmap item, but now. The enterprise AI story is a data story. And the data story, for most enterprises, is a file data story. That’s where we live.

Three crises in a decade is a pattern, not bad luck. If you’re sitting with this question yourself, I’d point you to a recent fireside chat we hosted with Glen Ridnour, VP of IT, Huitt-Zollars, Inc., where he walked through exactly how he was thinking about hardware disruption, what made him rethink his file foundation, and what’s changed since. It’s the conversation I wish more enterprise leaders were having out loud.

Watch the fireside chat.

And if you’d rather skip ahead and just talk, our team would love to show you a better way to manage your unstructured data.

Learn more about the latest developments in data infrastructure

Resource Center  |  Request a demo