Catching Up with Our CTO: The Evolution of Ransomware and the Perils of Centralized Backup

We built our file system so that it wouldn't need centralized backup and would instead self-protect against threats like ransomware.

October 9, 2020

10/8/2020: This blog was originally published on 4/9/2020, and has been updated to provide information on our upcoming, related webinar.

You might think that the world has enough to deal with already. But the spike in ransomware attacks on businesses struggling to cope with the coronavirus pandemic is particularly unsettling. Interpol is reporting more ransomware attacks on hospitals, and other organizations say attackers are taking advantage of the remote-work chaos. To make matters worse, today's ransomware attacks can be far more damaging and widespread than the ones of just a few years ago.

A sophisticated ransomware breach is now a distributed disaster, one that impacts all your offices almost simultaneously. Relying on a traditional backup solution is no longer a sound strategy, as these systems may require weeks to recover files. Even if they can recover a single site in a reasonable amount of time, many of them require sequential restores, so you'll have to prioritize which sites to bring back up first. On November 18, we will be hosting a webinar with file storage security experts who will explain five reasons traditional backup isn't up to the challenge of ransomware.

Like all evil genius hackers, the devious minds behind ransomware have accelerated its evolution. The latest ransomware variants infect a client, spread through the network, and rapidly encrypt everything they can touch. Earlier this year, when we were all still traveling, I visited with a large multinational AEC firm that had suffered from one of these new ransomware variants. The company was running best-in-class enterprise backup. They were following best practices. They’d educated their end users, secured their firewalls, and done everything by the book.

None of this helped.

When the company was hit, the malware immediately spread through its network, encrypting the file servers at hundreds of sites. Within two hours, IT had responded and shut everything down, but the damage was already done.

What Happens Next is Scary

This scenario is more common than anyone cares to admit. But it's the next phase, the recovery, that reveals a fundamental flaw in centralized backup. If we assume a large global enterprise backs up four times a day, then its Recovery Point Objective (RPO) is roughly 6 hours. After an attack, the company should expect to restore from those previous versions, captured before the data was encrypted, within a reasonable time frame. Yes, end users would lose up to half a day's work, but the Recovery Time Objective (RTO) should still be a couple of hours, maybe a day if it takes a while to rebuild the servers over the WAN.
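The RPO figure above is simple arithmetic, and it helps to see the assumption it rests on. A back-of-the-envelope sketch, using illustrative numbers rather than anything measured:

```python
# Back-of-the-envelope RPO arithmetic for the scenario above.
# The figures here are illustrative assumptions, not measurements.

HOURS_PER_DAY = 24
backups_per_day = 4

# Worst-case data-loss window: the full gap between two backups.
rpo_hours = HOURS_PER_DAY / backups_per_day  # 6.0 hours

# On average an attack lands mid-window, so the expected loss is half that.
expected_loss_hours = rpo_hours / 2  # 3.0 hours of lost work

print(f"RPO: {rpo_hours:.0f}h, expected loss: {expected_loss_hours:.0f}h")
```

The key point: RPO is set entirely by backup frequency, and as the next paragraphs show, it says nothing about how long the restore itself will take.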

But it's not that simple. Enterprise backup systems don't merely dump files into data centers. A central media server dedupes and compresses data to ensure optimal use of storage capacity. These operations demand intense compute resources, but with backups spaced three to six hours apart, there is enough time to complete them. If a single site were to suffer a ransomware attack, the media server could rehydrate that compressed data and IT could reasonably get the users at that site back up and operational within a business day, or less. When there are just a few sites in the picture, this centralized backup approach can work perfectly fine.

The problem arises when you try this for dozens or hundreds of sites. The compressed, deduped data from each site has to be rehydrated before being put back on the servers. This is compute intensive, and the central backup server only has the capacity to manage a handful of locations at a time. So if you have 100 sites that need to be restored, you are in serious trouble. The RTO can jump from a couple of hours to a couple hundred hours. IT has to rank the sites in order of priority, and they all have to get in line and wait. Unbeknownst to anyone, the media server has become a treacherous bottleneck.
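The queueing effect described above can be captured in a toy model. All figures are illustrative assumptions: suppose rehydrating and restoring one site takes about four hours, and the central media server can work on at most four sites concurrently, so restores proceed in waves.

```python
import math

def total_restore_hours(sites: int, hours_per_site: float = 4.0,
                        concurrent_slots: int = 4) -> float:
    """Time to restore every site when restores queue behind a
    fixed-capacity central media server (waves of `concurrent_slots`).
    The parameter values are hypothetical, chosen only to illustrate
    how a per-site RTO of hours becomes a fleet-wide RTO of weeks."""
    waves = math.ceil(sites / concurrent_slots)
    return waves * hours_per_site

print(total_restore_hours(1))    # one site: a few hours, manageable
print(total_restore_hours(100))  # 100 sites: hundreds of hours of queueing
```

Under these assumptions, one site is back in four hours, but one hundred sites take one hundred hours of wall-clock time, and the last sites in the priority queue wait the longest. The bottleneck is the fixed capacity of the central server, not the size of any individual restore.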

An enterprise-class centralized backup system is like a giant octopus with hundreds of tentacles. The compressed, deduped, protected data is stored in the head. Each location is at the end of a tentacle, and the octopus can only repopulate one tentacle at a time. The RTO might be a day or two for a critical location. But as we scale up to dozens or hundreds of sites, other locations will have to wait for weeks or even months, and IT will be completely overworked and strained to the breaking point.

Ransomware Recovery with Cloud File Storage

I have never liked backup, especially when it comes to big file servers. Even under the best of circumstances, the traditional process is too slow and unreliable. It tends to fail in unexpected ways. The ransomware horror story above is just one more example. This is why we built UniFS® to be an infinite, immutable, versioned file system. UniFS makes it possible for companies to recover swiftly from distributed ransomware attacks regardless of the scale of the attack. There is no bottleneck in UniFS restores.

We didn't build our file system to respond to ransomware. We built it so that it would self-protect and not need an additional multi-step backup. After a ransomware attack, UniFS resets to a previous point in time – giving organizations an RPO of minutes, not hours – getting them back up and running in a distributed way in a few hours or less, across hundreds of sites. This is the massive difference between our approach and centralized backup: we do not become slower when you have more sites. Recovery scales with your organization.

The webinar on November 18th will be a great opportunity to learn more about these issues in greater technical detail – from the reasons traditional backup is insufficient to the advantages of the modern cloud approach. I encourage you to sign up, and arm yourself with more ransomware knowledge.

One final thought. Nasuni recently celebrated its 11th anniversary. Years ago, I used to say to our customers, almost as a joke, that they could lose every one of the Nasuni appliances at every site and their files would still be fine. Their files and file system live in the cloud and the cloud is indestructible. It was a joke because the scenarios we are going through today – globally distributed ransomware attacks, the pandemic shutdown, etc. – were unimaginable. But I’m very relieved we chose to design our technology this way.

Nothing can absolutely prevent a ransomware attack, and the insidious malware certainly isn’t going away. Nasuni gives its clients a means of mitigating the damage, reducing the recovery time for globally impacted enterprises from weeks to minutes.

Again, that’s weeks to minutes.

For a deeper dive on the features organizations should look for in a ransomware recovery solution, check out our latest whitepaper.
