HIMSS Webcast: Solving the File Growth Problem

Two powerful trends are making IT organizations rethink their file storage strategies. These forces – relentless data growth and a desire for less hardware to manage – are driving files out of the data center and into the cloud. Storage in the cloud is bottomless and much more affordable than anything built to run exclusively inside the data center.

Austin Radiological Association is projecting that their storage consumption of 2D and 3D digital mammography will double by year end. By 2020, they will have over 200 terabytes of images. Watch this session to hear how the company addressed their unstructured file growth problem, while also delivering local performance and mobile access to end users, through Nasuni Cloud-Scale Enterprise File Services. Hear Austin Radiological Association’s CIO detail the benefits of adopting a storage strategy that combines local edge appliances with Azure storage to provide global access to data, with unlimited scalability and local performance.

Video Transcript


Moderator:

And thank you for attending today’s HIMSS Industry Solutions webinar, “Storage: Solving the File Growth Problem,” sponsored by Microsoft. Austin Radiological Association is projecting that their storage consumption of 2‑D and 3‑D digital mammography will double by year-end. By 2020, they will have more than 200 terabytes of images. During the session, you’ll learn how the company addressed their unstructured file growth problem, while also delivering local performance and mobile access to end users. Our speakers for today’s event are Todd Thomas, Chief Information Officer at Austin Radiological Association; Warren Mead, Vice President of Channel and Business Development at Nasuni Corporation; and Jeff Luna, Azure Technical Solutions Professional at Microsoft. So without further ado, I’d like to hand it over to Jeff to begin our presentation.

Jeff Luna:

Great. Thank you, Mike. My name is Jeff Luna. I’m an Azure Technical Solutions Professional on Microsoft Corporation’s US Health and Life Sciences Team. We’d like to welcome you to today’s webinar, brought to you by HIMSS, our partner Nasuni, and our featured healthcare customer, Austin Radiological Association. Today you’ll hear how the Microsoft cloud has empowered our partners to provide new solutions and a secure environment to their customers and patients.

To begin, Microsoft’s commitment to health and life sciences has grown to over 1,000 employees worldwide, who focus on solutions for the health and life sciences industries, industry organizations, and the patients they serve. We leverage our depth of experience with our partners and customer teams to help envision new solutions for the patient journey.

Today customers are faced with new challenges in compliance, regulation, and increasing storage requirements. Customers like Austin Radiological Association have discovered new strategies to continue their business model while gaining the flexibility and security to store and retrieve large amounts of data, such as documents, studies, and media files, through highly scalable, available, and durable solutions. With our visibility into the industry, we see that customers who solve their storage challenges soon adopt additional workloads.

These workloads help our customers with the new solutions that they’re looking for, and our customers’ increased productivity creates true hybrid and hyper-scale enterprise solutions.

Azure meets a broad set of international and industry-specific compliance standards, such as ISO 27001, HIPAA, FedRAMP, SOC 1, and SOC 2, as well as country-specific standards. Microsoft was also the first to adopt the international code of practice for cloud privacy, ISO/IEC 27018, which governs the processing of personal information by cloud service providers. Azure offers customers a HIPAA business associate agreement, stipulating adherence to HIPAA’s security and privacy provisions. By leveraging solutions from Nasuni, customers like Austin Radiological Association have found that storage in the cloud is bottomless and much more affordable than anything built to run exclusively inside the data center. This leads to more efficient services that better enable patient outcomes and enhance lives.

Now I’d like to introduce Todd Thomas, Chief Information Officer of Austin Radiological Association. Thank you.

Todd Thomas:

Thanks, Jeff. Looks like we have a polling question up on the screen.


Moderator:

Sure. Todd, I can ask that. I’d just like to ask the audience to please identify your current file storage solution. We’ll give a few seconds for those results to drop in, and then, Todd, you can maybe speak to them.

Todd Thomas:

Yeah. So it looks like about half… A little bit more coming in. That’s actually interesting. Looks like we’ve got a few on EMC, and then we’ve got some other players in the mix.

So we’ll talk a little bit about what ARA has done over the last 15 years or so that we have been on PACS, and you’ll see the progression of storage methodologies that we’ve used over the years, before settling on the cloud.

As an organization, ARA has been around since 1954. We are privately owned by radiologists, and we run the gamut of diagnostic imaging services, from MRI to CT to PET to ultrasound to fluoro to bone densitometry and both 2‑D and 3‑D mammography. We also have a 3‑D reconstruction lab. For most of central Texas, all of the imaging studies are read and stored in our PACS system. We have one hospital system that does not use us for image management, and so, while we read 1.3 million exams annually, a little over 900,000 of those are stored digitally.

In terms of our department, we have 43 people. What’s important to note is that little group on the far right: the ICT Group is responsible for infrastructure and storage management. There are four people in that group, only two of whom are responsible for managing our storage environment.

So it was in 2001 that we made the move to digital imaging, in all but mammography, and deployed our first fiber-channel storage area network. We went with a high-performance array and overall design, which helped us create a new business model, becoming the first provider in central Texas to offer PACS as a service to those healthcare organizations that had not moved to PACS yet. It was the service model that helped fuel the growth of our imaging archive. In 2003, shortly before our first client came aboard, we had already had to move into a larger array, and it was at this time we also set up our secondary data center, so that we could replicate our environment for DR purposes. It’s important to note that we made a decision early on not to archive any studies to tape. Our radiologists felt that the retrieval times were too slow and wanted all current studies and comparison studies available to them in under three seconds. Those performance requirements drive every decision we make in our storage architectures, to this day.

In 2004, we needed more space, so we moved to a lower-cost, content-addressable storage array and implemented some basic image lifecycle management strategies, where we moved our images from our SAN to our CAS after 30 days. This too was replicated to our secondary data center. By 2006, we had grown to over 58 terabytes of usable space under management. In 2007, we made the decision to move into digital mammography, and this was the first explosion of our storage use. We needed an array that could be quickly deployed and quickly expanded, without the long sales and implementation cycles typically associated with larger NAS or SAN deployments, so we deployed a clustered-storage solution. Clustered storage appeared to be the darling of media organizations at the time, because of its ability to serve large filesets quickly.
So in addition to our SAN, which was now just running our server farm, and our CAS, which was running all imaging but mammography, we now had a third storage platform, solely used for digital mammography. By 2008, we were consuming space at 24 terabytes per year. In 2010, we went through an image migration and moved off our CAS platform entirely; our ILM software was becoming a support headache, and maintenance was about to start on our CAS array. So we expanded our clustered-storage array and migrated 150 terabytes to it. By 2014, we were consuming 36 terabytes of space annually. From an imaging archive standpoint, today we have two clustered-storage arrays that are replicated between one another, across a 10‑gig Ethernet link between our two data centers. Our storage area network is gone. Now that we are 100% virtualized on our server farm, those VMs are running on a hybrid storage array, solely designed for and dedicated to our virtual infrastructure. But our image archive today is hovering at about 90% utilization on our clustered-storage array.

So what’s been driving this growth, aside from the number of clients that we support today on our PACS platform, is that, as the modalities improve in their fidelity, those file sizes are increasing. As you can see on the slide, a 16-slice CT averages 18 megabytes for us, and a 64‑slice CT jumps up to 23 megabytes. Our 3T MRI studies are about two and a half times the size of our 1.5T MRI studies. We recently began storing cine loops and color flows for our ultrasound units. But digital mammography and the move to 3‑D has really increased the amount of space we need available to us. We have seen that our average file size for a standard mammography image was 20 megabytes, but a 3‑D mammography file is close to 400 megabytes. And we ingest data at around 25 gigs per day. So presuming a four‑ to five-year lifecycle on storage, we would have to migrate an additional 55 terabytes, on top of the 500 terabytes under management today, in 2018, and by 2022, it would be a migration of an additional petabyte. And this is just mammography; it does not include any CT or MR storage. And then we’d be growing at a petabyte per year, the longer we wait to migrate. By 2024, we are projected to be ingesting 3 petabytes annually on mammography alone. Now, I could try to reclaim space by deleting those ten-year-old mammography exams from 2014, but we have some clients under our management that do not wish to delete any data.
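The arithmetic behind projections like these can be sketched with a rough back-of-the-envelope model. The per-image sizes are the approximate figures quoted above; the daily study volume and the helper function are illustrative assumptions, not actual ARA data.

```python
# Back-of-the-envelope model of mammography storage growth.
# Per-study sizes (~20 MB 2-D, ~400 MB 3-D) are the approximate
# figures from the talk; the study volume is a made-up example.

MB_PER_GB = 1024
GB_PER_TB = 1024

def annual_ingest_tb(studies_per_day: float, study_size_mb: float) -> float:
    """Terabytes ingested per year at a given daily study volume."""
    return studies_per_day * study_size_mb * 365 / MB_PER_GB / GB_PER_TB

# Moving the same daily volume from 2-D to 3-D multiplies the
# footprint by the ratio of the study sizes (400 / 20 = 20x).
two_d = annual_ingest_tb(studies_per_day=100, study_size_mb=20)
three_d = annual_ingest_tb(studies_per_day=100, study_size_mb=400)
print(f"2-D: {two_d:.1f} TB/yr, 3-D: {three_d:.1f} TB/yr")
```

The point of the sketch is the multiplier: whatever the actual study volume, the jump to 3‑D scales the annual ingest by roughly 20x, which is why the mammography archive dominates the projections.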

So with the imagery retention requirements, both medical-legal ones and client-imposed ones, and increasing file sizes due to higher-fidelity images, the rate of our imaging archive growth is increasing, and sooner or later that storage will need to be expanded. We can increase available capacity, but (and this can be true for clustered-storage solutions) the new nodes that we’d like to add to the array are not backwards-compatible with the older nodes, requiring a forklift upgrade of the entire archive. Or maybe maintenance has kicked in on that array we bought four years ago, and we can buy a brand new one that is faster and cheaper than carrying maintenance on the existing array. In either event, there would be a migration to do, and those are really painful. The 150 terabytes that we had to migrate took us ten months. So the looming migration of 55 terabytes, just for mammography, or even a petabyte, scares us a little bit.

So we started looking at whether there’s a better way of managing this increase in file growth versus the classic buy, expand, retire, buy, migrate, expand, retire, migrate-again cycle of storage management that we found ourselves in. I’m just going to cover three solutions that we’ve been evaluating.

Alternative one is the managed VNA: you can choose to outsource your entire archive. Companies like Dell provide the hardware, software, and services to run your imaging archive. This solution typically consists of an array on-prem replicating to a similar array at an offsite location. Alternative two is software-defined storage: you provide the disk shelves and install software that will turn those shelves into a scale-out storage archive. Because you control the hardware, you decide when a forklift upgrade is needed, and because of the flexibility provided by these software solutions, you are able to upgrade your hardware without taking the cluster offline. And finally, there’s cloud storage. We find that these solutions provide the lowest amount of management overhead but also require more homework, if you will. But storing in the cloud means no more image migrations; you just pay for the additional storage when you need it.

Typically, cloud offerings from Amazon and Microsoft have a number of charges you need to be familiar with, so you have an idea of what your monthly bill will look like: there are ingress charges, egress charges, and charges to leave the platform entirely. What we really liked about Nasuni is that it simplifies all of that by masking those charges from us, the end user. We pay Nasuni, and Nasuni has the relationship with Microsoft. So we know, annually, what our storage spend will be, at a much lower cost per usable terabyte than most array vendors.
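To make those line items concrete, here is a minimal sketch of a monthly-bill estimator. The unit prices below are placeholders, not Azure’s or Amazon’s actual rates; the point is that capacity, ingress, and egress each contribute a separate term, which a service model like the one described rolls into one predictable annual number.

```python
# Illustrative cloud-storage bill with hypothetical unit prices.
# Check your provider's current price sheet for real numbers.

def monthly_bill(stored_tb: float, ingress_tb: float, egress_tb: float,
                 price_per_tb_month: float = 20.0,
                 ingress_per_tb: float = 0.0,
                 egress_per_tb: float = 90.0) -> float:
    """Sum the three kinds of charges the speaker mentions."""
    return (stored_tb * price_per_tb_month
            + ingress_tb * ingress_per_tb
            + egress_tb * egress_per_tb)

# 500 TB at rest, ~0.75 TB/month in (25 GB/day), modest retrievals:
print(f"${monthly_bill(500, 0.75, 2.0):,.2f}")
```

Note how egress (reads back out of the cloud) is typically the variable term; a gateway with a local cache reduces it by serving hot studies locally.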

But before moving into production, we needed to ensure that we were meeting the performance requirements that our radiologists had set out. So we grabbed a stopwatch, to ensure that we could get a time to first image in under three seconds. I performed over 3,000 time trials and then compared the results to the incumbent.

What we found from the on-premise cache in the appliance is that image retrievals performed 21% faster. That’s the time to first image. And for a complete study, the on-premise cache performed 7% faster than the incumbent.

When we moved out to the cloud, to pull images back from the cloud… This was over an 80‑megabit internet link. Nasuni allows you to restrict how much of that link’s bandwidth you use, so we opened up the pipe to see how quickly we could retrieve an image from the cloud. What we found is that the time to first image was 49% slower than our three-second baseline, and a complete study downloaded from the cloud was just 8.5% slower than from local storage.
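The stopwatch trials Todd describes can be sketched as a simple timing harness. `fetch_image` here is a hypothetical stand-in for whatever call retrieves the first image from a given tier; the percentage comparison mirrors how figures like the 49% and 8.5% above would be computed against a baseline.

```python
# Minimal sketch of a time-to-first-image trial: run many retrievals,
# take the median, and compare it to a baseline as a percentage.
import statistics
import time

def time_trials(fetch_image, n_trials: int = 100) -> float:
    """Median seconds per call across n_trials."""
    samples = []
    for _ in range(n_trials):
        start = time.perf_counter()
        fetch_image()  # stand-in for the actual retrieval call
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def percent_delta(candidate: float, baseline: float) -> float:
    """Positive means candidate is slower than baseline."""
    return (candidate - baseline) / baseline * 100.0

# e.g. a 4.47 s cloud retrieval vs. a 3 s baseline is ~49% slower:
print(f"{percent_delta(4.47, 3.0):.0f}% slower")
```

Using the median rather than the mean keeps a handful of slow outliers (a cold cache, a congested link) from skewing the comparison.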

Today we store all of our mammography studies, both 2‑D and 3‑D, directly to Nasuni. We started with mammography because the workflow is such that the radiologists do not see the impact inherent in retrieving studies over an 80‑meg internet connection. We’ve got a DICOM router that pulls these comparison cases down in the middle of the night and then routes them to the radiologists’ workstations, so they don’t see the performance delays that we found in our trial. And as I said, this is all of our mammography: both 2‑D and 3‑D are now stored solely in the cloud.

We’re not done yet. An appliance like Nasuni makes it simple to store medical images in the cloud, and with the cost savings we’re seeing, the business people think it’s worth continuing to move data toward the cloud. But we still have those performance concerns from the radiologists. So how can we reduce that delay? There are networking solutions available that allow you to create Layer 2 connections directly to cloud storage providers’ data centers, but those have associated monthly costs. So we’ve been looking at VNAs and storage VNAs that bring the flexibility of software-defined storage. Combined with cloud, you can create an archive that has less management overhead and greater flexibility than your typical storage array.

So one of the architectures that we have been evaluating would have images ingested into a VNA. The VNA would write concurrently to an on-premise array, as well as out to the Nasuni appliance, and the Nasuni appliance would evacuate that data to the cloud. And while most organizations would run a secondary array in their data center, the nice thing about the cloud is that it becomes your DR for all of your images, so you don’t need to keep that secondary array in your secondary data center anymore. Should you lose your primary array, the VNA is smart enough to use Nasuni to bring that data back from the cloud.

So there are some cost considerations to take into account. When looking to replace your array with cloud, make sure you understand the different expenses that will make up your overall spend. There is the cost for the on-premise cache and the gateway appliance. If you forgo a service approach, there will be associated costs from your cloud provider. There are potential telecom or connection costs for a direct link to a cloud provider, both of which can be forgone with a VNA solution that front-ends the gateway appliance, but then the VNA will have its own acquisition and maintenance charges associated with it.


Moderator:

It looks like we have another poll here. The question is, “Do you have a NAS refresh or replacement project budgeted for FY16? If so, what is your timeline: 3 months, 6 months, 12 months, 18 months, or none projected?” Let’s give a few more seconds for those results to trickle in before we pick back up. And we can see the results there.

Todd Thomas:

That’s about what I would expect.

Well, in summary: file sizes are continuing to increase as modalities provide higher resolution. Storage as a service is a viable model for long-term study retention, and hybrid deployment even more so. Choose your cloud partner carefully; they are not all made the same. And ensure that that partner is compliant with HIPAA, HITECH, and Omnibus. Thank you.

Warren Mead:

Great. And this is Warren Mead, with Nasuni. And I’m going to give just a brief overview of the Nasuni solution that Todd and ARA have been using over the past year to store their digital images. And, Todd, thanks a lot for the overview and the kind words.

We are starting to see a great deal of momentum, particularly in the healthcare vertical, all around the need to store, back up, and recover digital image files, like the 3‑D mammography that Todd referenced. That file growth is expanding dramatically. There is also the need to access those files from anywhere and on any device; as ARA said, they have 17 locations supporting 20 hospitals, and the ability to share some of those images in real time and to collaborate is important as well. And the ability not only to improve retention but also to reduce costs versus traditional on‑prem storage has been a big driver for us.

And similar to the ARA slide that you saw, with their file growth, we’re seeing this across all industries, but particularly in healthcare: uncontrollable growth. As that scales, and as Todd talked about, there’s the buy, expand, migrate, renew cycle, and we see a lot of pain and cost in that. Data protection is also a concern, and people want to look at that too: how am I going to back those files up? What about disaster recovery? And as always, the ability to have complete access to those files, from any system, from any device, from any location, becomes very critical as well. We’ve seen that with ARA, as well as with other customers.

Our cloud NAS solution, powered by Azure, gives our customers a backup-less, bottomless, scalable solution. As the file data grows, Azure grows with it seamlessly, so it becomes very efficient in dealing with lots of growth: unlimited amounts of files and unlimited amounts of directories. That’s the real power of Azure combined with Nasuni. And you’re not going to give up performance; you’re still going to get that high-performance NAS solution, but at a lower price point. The ability to have infinite snapshots geo-replicated across multiple Azure locations provides for a great backup and disaster recovery solution, so the combination of Nasuni and Azure is a great way to reduce backup and DR costs. A lot of that value comes from the geo-redundancy that Azure brings to the table.

And also, there’s the ability to access those files anywhere. We’re seeing that in all of the verticals, but in particular in healthcare, where workforces are extremely mobile and global, and they want to be able to share files across locations around the world, around the country, and around their different geographic areas. A global file system, accessible from any location and any device and centrally managed, makes it very easy to manage and control those files. Central management in the cloud, with Azure, with distributed locations and sharing of those files, becomes very powerful for our customers, and we’re seeing that in the healthcare vertical.

We have some contacts on who to reach out to at Nasuni, if there is interest. There’s also a cloud storage report: Nasuni benchmarks public clouds every year and publishes the results, and Azure is by far the top-performing cloud. That’s why we partner with Microsoft and leverage Azure. You can see all the storage benchmarks: the reads, the writes, the errors, and how well Azure performs. We really believe that the combination of Nasuni and Microsoft Azure can help address a lot of your digital-image storage needs in the healthcare vertical. There are two links you can go to: nasuni.com/healthcare, to understand exactly what we’re doing in your vertical, and nasuni.com/microsoft, to see how we are tightly integrated and leveraging the power of Azure to deliver a complete digital-imaging solution. At this point, I’d like to turn it back over to Mike. We really appreciate him giving us the time to present, along with Austin Radiological Association.


Moderator:

Thank you very much, Warren, Jeff, and Todd, for a great presentation. We do have time for some questions, and I see we already have one, so let’s get started. I think this is aimed at you, Todd, but maybe others can weigh in as well: what type of feedback have you had from physicians using the system? Were there any challenges in learning the new technology?

Todd Thomas:

Well, the nice thing is that, aside from the radiologists who sit on our IT Subcommittee and are aware of the infrastructure, once we deployed the solution, the radiologists really saw no impact to their workflow, from a performance standpoint or otherwise. Images are delivered to the PACS system as they always were. From a technical standpoint, the PACS software writes to the Nasuni platform as if it were a disk shelf or disk array that we have on site. The nice thing is that the Nasuni appliance eventually takes those images and pushes them out to the cloud, leaving behind what I’ll call a stub file (Nasuni may have a different moniker for it). So the PACS application is really unaware of where that image has moved to; Nasuni takes care of all of that. From the radiologist’s perspective, from the end-user perspective, they really didn’t see an impact to their workflow.
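The stub-file behavior Todd describes can be modeled in a few lines. This is an illustrative sketch of the general cache-and-stub pattern, not Nasuni’s actual implementation: writes land in a local cache, eviction pushes the bytes to object storage and leaves a stub, and a read through the gateway rehydrates transparently, so the application never knows where the data lives.

```python
# Toy model of a caching gateway with stub files. The `cloud` dict
# stands in for an object store; none of this is Nasuni's real design.

class CachingGateway:
    def __init__(self, cloud: dict):
        self.cloud = cloud      # stand-in for cloud object storage
        self.cache = {}         # hot local copies
        self.stubs = set()      # paths whose bytes live only in the cloud

    def write(self, path: str, data: bytes) -> None:
        self.cache[path] = data  # the app sees an ordinary file write

    def evict(self, path: str) -> None:
        """Push bytes to the cloud; leave only a stub behind."""
        self.cloud[path] = self.cache.pop(path)
        self.stubs.add(path)

    def read(self, path: str) -> bytes:
        if path in self.cache:
            return self.cache[path]
        # Stub hit: rehydrate from the cloud; the caller never notices.
        data = self.cloud[path]
        self.cache[path] = data
        self.stubs.discard(path)
        return data

gw = CachingGateway(cloud={})
gw.write("/mammo/study1.dcm", b"DICOM...")
gw.evict("/mammo/study1.dcm")    # bytes move to the cloud, stub remains
print(gw.read("/mammo/study1.dcm"))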


Moderator:

Very good. Someone else is wondering: is Nasuni tied to a particular VNA, or does it act as a DICOM device? How does a PACS or VNA connect to Nasuni?

Warren Mead:

Yeah. So, good question. No, we’re not tied to a particular VNA. Think of Nasuni almost as a local caching device or, as Todd put it, a gateway: a local caching device that can sit at all the different locations and is centrally tied into Azure. Those local devices can be physical or virtual, depending on your organization, your infrastructure, and what you have; physical or virtual is fine. In most cases, we can migrate those digital images off of wherever they’re stored; in Todd’s case, I believe it was an EMC Isilon clustered-file solution, and we were able to then store that locally at Nasuni. At the local device level, we will encrypt, compress, and de‑dup any of those images before sending them up to Azure. That all happens behind the customer’s firewall; at ARA, it happens behind their firewall, where they hold the encryption keys, for an additional level of security there too.
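The client-side pipeline Warren outlines (de-dup, compress, encrypt before upload) can be sketched generically. This is an illustration of the pattern, not Nasuni’s implementation; real encryption is elided here, with a comment marking where it would be applied behind the customer’s firewall.

```python
# Generic client-side prep pipeline: split data into chunks, skip
# duplicates by content hash, compress what remains before upload.
import hashlib
import zlib

def prepare_chunks(data: bytes, chunk_size: int = 4096):
    """Yield (content_hash, compressed_bytes) for unseen chunks only."""
    seen = set()
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in seen:               # duplicate chunk: upload nothing
            continue
        seen.add(digest)
        payload = zlib.compress(chunk)   # encrypt(payload) would go here
        yield digest, payload

blob = b"A" * 8192 + b"B" * 4096         # two of three chunks identical
unique = list(prepare_chunks(blob))
print(len(unique))                       # duplicates collapse to one copy
```

As the Q&A below notes, de-dup buys little on already-compressed medical images, which is why it can be switched off; the pipeline shape stays the same either way.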


Moderator:

All right. As we’re waiting for another question or two to roll in, does anybody have anything they’d like to revisit or reemphasize from what they were talking about?

Todd Thomas:

Just want to clarify, and, Warren, correct me if I’m wrong here, but de‑duping is an option that you can turn off, correct? Because I don’t know that we’d want de‑duping on any of our medical images.

Warren Mead:

Yeah. In different verticals, de‑dup becomes pretty valuable and powerful.

Todd Thomas:

Yeah, absolutely.

Warren Mead:

And in other verticals… So, yeah, you have the ability to turn that on or off, absolutely.


Moderator:

Do customers have to worry about loss of fidelity on image files? Anyone care to field that one?

Todd Thomas:

We don’t see a loss of fidelity. The compression algorithms we use are set by the application, and we’ve turned off most of those features that you’ll see in an appliance, like de‑dup or additional compression. So Nasuni is really just moving that file from the on‑prem cache out to the cloud, where it just sits. There’s really nothing done to it, either going out to the cloud or coming back from the cloud.


Moderator:

One more question. Does Nasuni have an SLA for uptime? Or does that fall back to the cloud vendor?

Warren Mead:

Yeah. So the answer there is we do. Nasuni absolutely does have an SLA in all of our agreements. And then, through our partnership with Microsoft, some of that element is also driven by leveraging the Azure public cloud. But you’ll absolutely have a service-level agreement on both ends of that offering.


Moderator:

Well, thanks once again to all the presenters, and thanks, of course, to the audience. Please take part in the exit survey that will be appearing on your screen. And just a reminder that you’ll receive an email in the coming days with a link to the replay of this webinar, for you to watch again or share with a colleague. So thanks again to everyone, and have a great day.