The Enterprise Architecture AI Can’t Live Without

Nasuni’s Jim Liddle discusses how AI projects won’t work without the right enterprise data infrastructure.

September 16, 2025  |  Jim Liddle

A recent MIT report, The GenAI Divide: State of AI in Business 2025, found that 95% of GenAI pilots fail. It's not because the models don't work; the demos are compelling and the business case is rock solid. The problem isn't AI talent or the algorithms; it's the data infrastructure.

More specifically, it’s the fact that enterprise AI projects hit what I call the “production wall” — that moment when the AI deployment team realizes their prototype needs to access data from seven different locations, across multiple servers, with varying levels of governance, security, and availability.

This is where most unstructured data AI initiatives collapse. Not because the models are wrong, but because accessing millions of documents, images, videos, and datasets consistently across distributed locations becomes an impossibly complex data management problem.

The File Access Reality

Take a computer vision use case for quality control. It needs access to thousands of product images stored across manufacturing sites, historical defect photos scattered among regional offices, and reference datasets sitting on aging NAS servers. Today, AI deployment teams are dealing with different file systems, inconsistent naming conventions, varying access permissions, and network latency that makes on-demand access to unstructured data from distributed locations impractical.

Teams can end up spending months identifying file locations, copying files between systems, creating multiple versions of datasets, and building custom scripts just to give the models consistent access to the files they need to satisfy the use case. Every new location means YAIP, or yet another integration project.
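
To make that concrete, here is a minimal sketch of the kind of glue script teams end up writing today, assuming three hypothetical sites, each with its own mount point and naming convention, plus a local staging directory. Every path, site name, and file pattern below is illustrative, not a real environment.

```python
"""Hypothetical glue script: stage quality-control images from three sites.

Every path and naming rule below is an assumption for illustration only.
Each site has its own mount, permissions, and conventions, and the script
has to be extended for every new location.
"""
from pathlib import Path
import shutil

# One entry per site, each with its own mount point and naming convention.
SITES = {
    "boston_plant":  {"root": Path("/mnt/boston-nas/qc_images"),   "pattern": "IMG_*.jpg"},
    "london_office": {"root": Path("/mnt/london-filer/defects"),   "pattern": "defect-*.jpeg"},
    "legacy_nas":    {"root": Path("/mnt/old-nas/reference_sets"), "pattern": "*.png"},
}

STAGING = Path("/data/staging/qc_dataset_v3")  # yet another copy of the data


def stage_all() -> int:
    """Copy matching files from every site into one local staging area."""
    STAGING.mkdir(parents=True, exist_ok=True)
    copied = 0
    for site, cfg in SITES.items():
        if not cfg["root"].exists():  # VPN down? Permissions changed?
            print(f"skipping {site}: {cfg['root']} not reachable")
            continue
        for src in cfg["root"].rglob(cfg["pattern"]):
            # Prefix with the site name to avoid collisions between conventions.
            dest = STAGING / f"{site}__{src.name}"
            shutil.copy2(src, dest)
            copied += 1
    return copied


if __name__ == "__main__":
    print(f"staged {stage_all()} files into {STAGING}")
```

Every new site means another entry in that dictionary, another set of credentials, and another full copy of the data. That is the YAIP treadmill in code.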

The Single Namespace Problem for Files

What these AI projects really need is a way to present all their unstructured file data (regardless of where it physically resides) through a single, unified file namespace. Models can then access images from Boston, reference documents from London, and validation datasets from Singapore as if they were all in the same directory structure, because in a unified namespace, they are.
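
For contrast, here is a minimal sketch of what the same access pattern looks like once everything is presented through one namespace. The /mnt/global mount point and the directory layout are assumptions for illustration, not a specific product's structure.

```python
from pathlib import Path

# Hypothetical single mount point behind which all sites are presented.
NAMESPACE = Path("/mnt/global")

# Physically in Boston, London, and Singapore; logically just subdirectories.
product_images  = sorted((NAMESPACE / "manufacturing/boston/qc_images").rglob("*.jpg"))
reference_docs  = sorted((NAMESPACE / "offices/london/reference_docs").rglob("*.pdf"))
validation_sets = sorted((NAMESPACE / "labs/singapore/validation").rglob("*.parquet"))

print(len(product_images), len(reference_docs), len(validation_sets))
```

The data loader no longer cares where a file physically lives; location becomes a directory convention instead of an integration project.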

This need will become even more critical as agentic AI systems start operating on file repositories.

Why File Data Resilience Matters for Agentic AI

Traditional AI models consume files during training, inference, and RAG-based initiatives, but agentic AI systems will actively manage and modify file repositories. They might reorganize document collections, generate new file contents, or update datasets based on other steps in the agentic workflow.

When an agentic AI managing your content library decides to restructure thousands of video files based on usage patterns, or when it generates new billing invoices from orders located in a file directory, it’s not just reading your files. It’s creating, moving, and modifying them at scale.

If those files become corrupted, inaccessible, or inconsistent across locations, the agent ends up making operational decisions based on incomplete or corrupted data, potentially damaging the very file repositories your business depends on.
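
To make that risk concrete, here is a minimal sketch of one guardrail an agentic file workflow needs when it writes at scale: stage the new file, verify it, then publish it with an atomic rename so a crash or a bad generation never leaves a half-written invoice behind. The directory layout and the generate_invoice step are hypothetical assumptions for illustration.

```python
"""Hypothetical agentic write step: generate invoices from order files.

The directory names and generate_invoice() are illustrative assumptions;
the point is the write discipline, not the business logic.
"""
import json
import os
from pathlib import Path

ORDERS = Path("/mnt/global/finance/orders")
INVOICES = Path("/mnt/global/finance/invoices")


def generate_invoice(order: dict) -> dict:
    """Placeholder for whatever the agent actually produces."""
    return {"order_id": order["id"], "total": sum(i["price"] for i in order["items"])}


def publish_invoice(order_path: Path) -> Path:
    order = json.loads(order_path.read_text())
    invoice = generate_invoice(order)

    INVOICES.mkdir(parents=True, exist_ok=True)
    final = INVOICES / f"invoice_{order['id']}.json"
    tmp = final.with_name(final.name + ".tmp")

    # 1. Stage: write the new content to a temporary file first.
    tmp.write_text(json.dumps(invoice, indent=2))

    # 2. Verify: re-read and sanity-check before touching the real name.
    if json.loads(tmp.read_text())["order_id"] != order["id"]:
        tmp.unlink()
        raise ValueError(f"verification failed for {order_path}")

    # 3. Publish atomically: readers see either the old state or the new
    #    file, never a partial write, even if the agent crashes mid-run.
    os.replace(tmp, final)
    return final
```

Application-level discipline like this only goes so far, though; snapshots, versioning, and cross-site consistency in the underlying file platform are what give the business a known-good state to recover when an agent run goes wrong.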

The Foundation Everything Else Depends On

Most companies figure it out too late: you don’t fix broken AI by throwing more AI at the problem. The real bottleneck isn’t the model; it’s the data foundation underneath. Without the right file data architecture — the system where day-to-day operational data actually lives — you’ll never get unified access across locations, never guarantee data integrity as AI runs at scale, and never grow without sinking endless hours into custom engineering.

This is the hard truth: enterprise AI only works if the data foundation works first. At Nasuni, we deliver that foundation. No hype. No hand-waving. Just fact.

Beyond the Prompt is where vision meets velocity. Authored by Jim Liddle, Nasuni’s Chief Innovation Officer of Data Intelligence & AI, this thought-provoking series explores the bold ideas, shifting paradigms, and emerging tech reshaping enterprise AI. It’s not just about chasing trends. It’s about decoding what’s next, what matters, and how data, infrastructure, and intelligence intersect in the age of acceleration. If you’re curious about where AI is going — and how to get ahead of it — you’re in the right place.
