The Hidden Link Between Capacity and NAS Performance Degradation in Traditional File Systems

One of the primary selling points of traditional file systems from vendors such as NetApp and EMC is their superior performance. I know this because I spent the last 20 years selling traditional file system solutions to large enterprises. When these systems are first deployed, the claim is valid. What vendors often fail to explain, however, is that NAS performance begins to degrade as these traditional file systems fill up.

The exact point at which NAS performance degradation becomes noticeable depends on several factors, including the workflow and the type of applications running on the file system. But at a certain percentage of total capacity, the file system becomes extremely slow. In some instances, the degradation starts to happen at as little as 75% capacity. Many of the enterprise IT leaders I speak with confess that they never wait for their file systems to reach this level. Instead, to avoid the issue, they initiate the process of ordering another disk shelf at 70% capacity.

The usable capacity of these file systems is therefore much lower than advertised. If you can only fill a file system to 70%, how much are you actually paying per TB?
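
A quick back-of-the-envelope calculation answers that question. The Python sketch below uses a hypothetical purchase price chosen only for illustration; the 70% threshold is the one described above.

```python
# A minimal sketch of effective cost per usable TB. The purchase price is a
# hypothetical figure; only the 70% fill threshold comes from the discussion above.

def effective_cost_per_tb(purchase_price, raw_capacity_tb, usable_fraction):
    """Price per TB you can actually use before performance forces an expansion."""
    return purchase_price / (raw_capacity_tb * usable_fraction)

price_usd = 40_000   # hypothetical price for a 40TB system
raw_tb = 40

advertised = effective_cost_per_tb(price_usd, raw_tb, 1.00)   # $1,000/TB on paper
realistic = effective_cost_per_tb(price_usd, raw_tb, 0.70)    # ~$1,429/TB if you stop at 70%

print(f"Advertised: ${advertised:,.0f}/TB   Effective: ${realistic:,.0f}/TB")
```

The lower the practical fill threshold, the wider that gap becomes.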

NAS Performance Degradation and Forced Expansions

The NAS performance degradation issue is not specific to one vendor, such as EMC or NetApp, because the technical root cause relates to how traditional file systems distribute data. When an end user writes a new 10MB file, for example, the file system attempts to keep this data together in one chunk on the disk drive. This way, when a read request occurs, the disk seek is minimal, and latency is minimal, too, because the data is all in one place. NAS performance is as good as advertised. End users are happy.
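
A minimal latency model makes the contiguous case concrete. The constants below are ballpark figures for a typical 7,200 RPM drive, assumed only for illustration rather than measured from any particular system.

```python
# Rough read-latency model for a spinning disk. The constants are typical
# ballpark values for a 7,200 RPM drive, not measurements of any specific array.

SEEK_MS = 9.0              # average seek time
ROTATION_MS = 4.2          # average rotational latency (half a revolution)
TRANSFER_MB_PER_S = 150.0  # sustained sequential transfer rate

def read_latency_ms(file_mb, extents):
    """Approximate time to read a file stored in `extents` contiguous chunks."""
    positioning = extents * (SEEK_MS + ROTATION_MS)   # one head repositioning per chunk
    transfer = file_mb / TRANSFER_MB_PER_S * 1000.0   # sequential transfer time
    return positioning + transfer

# The 10MB file above, kept in a single chunk: one seek, then one sequential read.
print(f"{read_latency_ms(10, extents=1):.0f} ms")     # roughly 80 ms
```

With the whole file in one place, most of the time goes to the transfer itself; the head repositions only once.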

As the file system fills up, however, it becomes harder to place files in the same location because there is less free space. The file system chunks that 10MB file and distributes it across the physical drive. When a read request comes through, the drive head must then jump from block A to B to Z to grab the individual chunks and reassemble the file. This causes latency for the end user and stress on the physical disk itself.
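
The effect is easy to reproduce with a toy allocator. The Python sketch below ages a small volume with create-and-delete churn, then checks how many extents a new 10MB file is split into at different fill levels. The disk size, file sizes, and churn pattern are arbitrary assumptions, and the exact counts will vary from run to run; the point is the jump that occurs once no single free run can hold the file.

```python
import random

def free_runs(free):
    """(start, length) for every run of consecutive free blocks."""
    runs, start = [], None
    for i, f in enumerate(free + [False]):        # sentinel closes the final run
        if f and start is None:
            start = i
        elif not f and start is not None:
            runs.append((start, i - start))
            start = None
    return runs

def allocate(free, nblocks):
    """Prefer a single contiguous run; otherwise spill across the largest free
    runs. Returns one list of block indices per extent used."""
    extents = []
    for start, length in sorted(free_runs(free), key=lambda r: -r[1]):
        if nblocks <= 0:
            break
        take = min(length, nblocks)
        extents.append(list(range(start, start + take)))
        for b in extents[-1]:
            free[b] = False
        nblocks -= take
    return extents

def extents_for_new_file(target_util, disk_blocks=50_000, seed=1):
    random.seed(seed)
    free, files = [True] * disk_blocks, []
    # Age the volume with create/delete churn until it reaches the target fill level.
    while (disk_blocks - sum(free)) / disk_blocks < target_util:
        files.append(allocate(free, random.randint(64, 512)))
        if random.random() < 0.35:                # occasional deletions leave holes behind
            for extent in files.pop(random.randrange(len(files))):
                for b in extent:
                    free[b] = True
    # Now write the 10MB file from the example above (2,560 blocks of 4KB).
    return len(allocate(free, 2_560))

for u in (0.30, 0.75, 0.90):
    print(f"{u:.0%} full: a new 10MB file lands in {extents_for_new_file(u)} extent(s)")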

The problem is much worse than one user and one file. In a standard enterprise, there is never a single thread accessing the data on the disk. Instead, you have multiple threads and multiple processes running at the same time, involving tens or hundreds of users trying to access different files. Snapshots compound the problem, because blocks preserved for older versions cannot be reclaimed, which leaves even less contiguous free space for new writes. Files become fragmented, forcing disk heads to jump to multiple locations to read all the blocks associated with a given file.
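
Even before fragmentation becomes severe, concurrency alone multiplies head movement. The Python sketch below compares the number of seeks when ten perfectly contiguous files are read one after another versus interleaved by ten concurrent readers; the layout and request sizes are invented for illustration.

```python
from itertools import chain

# Toy model of concurrent access: ten contiguous files, each read as 64
# sequential requests. Only the interleaving effect is the point.

NUM_FILES, CHUNKS = 10, 64

def disk_position(file_id, chunk):
    return file_id * 10_000 + chunk          # toy layout: file i occupies its own region

def count_seeks(requests):
    """Charge a seek whenever the next request is not the block immediately
    after the previous one."""
    seeks, last = 0, None
    for file_id, chunk in requests:
        pos = disk_position(file_id, chunk)
        if last is None or pos != last + 1:
            seeks += 1
        last = pos
    return seeks

per_file = [[(f, c) for c in range(CHUNKS)] for f in range(NUM_FILES)]

one_reader = list(chain.from_iterable(per_file))                  # files read back to back
ten_readers = [req for batch in zip(*per_file) for req in batch]  # requests interleaved

print(count_seeks(one_reader))    # 10 seeks: one per file
print(count_seeks(ten_readers))   # 640 seeks: one per request
```

Fold fragmentation back in, with each of those files itself split into multiple extents, and the head spends most of its time repositioning rather than transferring data.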

The more you fill up that file system, the more severe the NAS performance impact.

File Growth at Enterprise Scale

What about SSDs? With no drive head to reposition, SSDs do not suffer from this particular problem. But in large enterprises, file volumes are too great for an all-flash approach. There is simply too much file data, the data is growing too quickly, and SSD cost is too high at that scale.


So, to avoid NAS performance degradation, enterprises are forced to refresh their file systems long before they reach capacity. IT calls their traditional storage vendor, begins the weeks-long purchase order process, requests an extra disk shelf, waits a few more weeks for it to be delivered, installs the new hardware, then waits again as the file system re-balances itself and rearranges the blocks. Throughout this period, performance keeps declining and end-user complaints keep increasing.

This does not occur with cloud storage.

Total Capacity vs. Usable Capacity

When an enterprise buys 40TB of NAS capacity through a traditional file system vendor, the usable capacity is really closer to 28 to 30TB, as we've discussed. Once they reach that level, performance suffers so severely that the remaining capacity is not truly usable. But when an enterprise buys 40TB of capacity through a cloud file system provider, the file system can be filled to 100% without suffering any performance degradation whatsoever.

As the data stored in the file system grows, there is no need to order another disk shelf at 70% capacity and then wait weeks for the expansion. Instead, the enterprise simply asks the cloud storage vendor for additional capacity at 95% or more; the cloud file system picks up the added capacity, and the expansion is fast and simple.

This is possible because the structure of a cloud-native file system like Nasuni UniFS™ is completely different. Capacity scales in the cloud, not on local hardware. As a result, files and metadata can never outgrow the local hardware, or fill it up to the point that performance suffers. Performance remains consistently high at every level of capacity.

I would advise any enterprise IT group that has experienced NAS performance degradation to consider whether another traditional file system expansion is truly the best move. I would also ask, if these forced expansions are happening at 70% capacity, whether you have calculated how much you are really paying per TB of storage.

Enterprise cloud storage is no longer a futuristic, uncertain concept. For an increasing number of enterprises, it is the only viable way to grow.
