Absolutely. I used to work at a small MSP. We got ultra unlucky: we were chosen as the test-case target for a zero-day that leveraged our remote support tools, so our own systems and every client system that was online got hit with ransomware in a very short time frame.
Some clients had local backups to Synology boxes, and those worked fine, thankfully. All the rest, however, had Hyper-V based backups. The second local copy lived on another Windows server that also got hit, so the local copies were useless. They did also have a remote copy, which wasn't encrypted.
So all good, right? Just pull the remote backup copy and restore from that... Yeah, about that. Every time we had used the service before, it had been either a single server that physically died and took its disks with it, or a simple file-level restore.
Those had all worked fine. Still sounds like no problem, right? Nope. We found that a couple of the larger servers had backups that didn't actually contain everything, despite being VM images. No idea how their software even managed that.
And the worst part was that their data transfer rate was insanely slow: about 10 Mbps. Not per server or per client, either. That was the maximum export rate across everything. It would have taken literally months to restore everything at that rate.
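To see why 10 Mbps is catastrophic as an aggregate export rate, here is a rough back-of-envelope sketch; the total data size is a hypothetical figure for illustration, not the actual amount we had to restore:

```python
# Rough restore-time estimate at a ~10 Mbps aggregate export rate.
# The data volume passed in is an assumption for illustration.

def restore_days(total_tb: float, rate_mbps: float = 10.0) -> float:
    """Days to pull total_tb terabytes at rate_mbps megabits/second."""
    total_bits = total_tb * 1e12 * 8          # TB -> bits
    seconds = total_bits / (rate_mbps * 1e6)  # bits / (bits per second)
    return seconds / 86_400                   # seconds -> days

# e.g. a hypothetical 10 TB across all clients:
print(round(restore_days(10), 1))  # ≈ 92.6 days
```

Even a modest 10 TB spread across all clients works out to roughly three months of continuous transfer, which matches the "literally months" above.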
I hate to say it, but yes, we did in fact pay the ransom, and then had to fight for several days getting things decrypted. Then came months of reinstalling fresh copies and/or putting in new servers, while also changing our entire stack at the same time. Shockingly, we handled it well enough that we lost no clients, largely because we were able to prove we couldn't have known ahead of time.
If you've read through all that, I'll even name the vendor: it was StorageCraft. I now have a deep hatred for them.
One more story: with the old HFS+ based Time Machine backups, a backup would sometimes report as valid and self-checked even when it had corruption, as long as the self-check confirmed the corruption could be repaired during a restore. But if you browsed the Time Machine backups directly, some files couldn't be read; only a full system restore would recover them.
I nearly lost my wife's end-of-semester work before discovering it worked that way.
I can't confirm it, but it seems to be fully fixed with APFS, and it might be one of the reasons they spent the effort on that transition.