Navigating a Cyber-Attack: Three Critical Workstreams in Disaster Recovery
A common misconception is that after a cyber-attack, organisations start with technical activities like restoring servers. The reality, though, is quite different.
Written by: Conor Scolard, Technical Director at Ekco
A common misconception I’ve found about dealing with a cyber-attack is the belief that after disaster strikes, organisations simply initiate their Business Continuity Plans (BCPs) and start with the technical activities, such as restoring servers. The reality, though, is quite different. Not only do you have to deal with the chaos of the moment, but you also have to navigate three different workstreams to steer the recovery: executive decision-making, security incident response, and technical recovery.
Watch our webinar on Ransomware Recovery
1. Executive Decision-making: Legalities and Reputation Management
At the onset of a disaster, your C-level executives will find themselves in a maelstrom of legalities and reputation management. Their first challenge, after making any mandatory reports to the authorities, is deciding whether to disclose the breach and, if so, how and to whom that information should be communicated. This decision is often based on whether data has been leaked or not.
Questions about immediate communication plans, the involvement of a public relations team, and the potential damage to brand reputation take centre stage. Compliance with legal obligations, including data protection regulations like the GDPR, is a crucial consideration here. Typically, the executives will follow the advice they get from a legal or security advisor, be it from the incident response organisation or the insurance provider.
2. Security Incident Response: Unravelling the Threads
Simultaneously, the security incident response workstream kicks into gear when an attack becomes apparent; cyber insurance requirements are usually important here, if you have cover in place. Your security team, sometimes with a security partner like Ekco, then dives into a massive data gathering effort. This involves analysing timestamps and actions taken during the incident, as well as preserving critical logs.
I can’t emphasise enough how important it is for your organisation to maintain a log history of at least 90 days. This would include Active Directory logs, audit logs from RDS servers, and databases containing IP addresses or sensitive data, particularly those subject to the GDPR and other regulations. And make sure you store these logs in an immutable repository separate from your primary systems. This precaution is crucial, as the first thing the security team will ask for after an incident hits is those logs. Delays in retrieving them will stall decision-making in both the executive and technical recovery workstreams.
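To make that concrete, here’s a minimal sketch of what shipping a daily log archive to an immutable store could look like, assuming an object-storage bucket created with Object Lock enabled; the bucket name, key layout and 90-day retention figure are illustrative assumptions, not a prescription.

```python
# Minimal sketch: upload a log archive with a compliance-mode retention lock.
# Assumes an S3 bucket ("example-ir-logs") created with Object Lock enabled,
# ideally in a separate account from production. Names and paths are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

def archive_log(local_path: str, key: str, retention_days: int = 90) -> None:
    """Upload a log archive that cannot be altered or deleted until the lock expires."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
    with open(local_path, "rb") as f:
        s3.put_object(
            Bucket="example-ir-logs",            # kept separate from the primary systems
            Key=f"audit/{key}",
            Body=f,
            ObjectLockMode="COMPLIANCE",          # retention cannot be shortened, even by admins
            ObjectLockRetainUntilDate=retain_until,
        )

archive_log("/var/log/ad/security-2024-05-01.gz", "ad/security-2024-05-01.gz")
```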
Once the security team has analysed the logs, they put together a timeline, which may progress to a forensic analysis, where you’ll then need images of specific devices. For instance, if an employee’s laptop is implicated in an unusual RDP connection or malicious SMB traffic, imaging becomes imperative. This process can extend to your broader infrastructure, including servers that may hold artefacts from the attack.
Read our Disaster Recovery Brochure
3. Technical Recovery: Rebuilding the Foundations
As the security investigation progresses (which can take weeks, or even months), the technical recovery workstream comes into play. This is what people usually think of when they think about disaster recovery.
Data recovery efforts can often be hindered by insufficient free space. For example, you may be running at 80% capacity right now, which is manageable for everyday operations. But if there’s an incident, the security team will need you to keep all the encrypted servers because they’ll need to image them, which means you simply won’t have enough space to restore your machines alongside them. So I would strongly recommend ensuring you have extra capacity set aside for this kind of event, or a third-party provider like Ekco that can give you that capacity to perform your recovery.
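As a rough illustration of the arithmetic, the sketch below assumes you must keep the full encrypted footprint in place for imaging while restoring clean copies alongside it; the figures are placeholders, not guidance.

```python
# Back-of-the-envelope check: how much extra capacity is needed to restore
# clean copies while leaving the encrypted originals untouched for imaging?
def recovery_capacity_shortfall_tb(total_tb: float, used_tb: float,
                                   restore_tb: float) -> float:
    """Return the extra capacity (TB) needed beyond current free space."""
    free_tb = total_tb - used_tb
    return max(0.0, restore_tb - free_tb)

# e.g. a 100 TB estate running at 80% capacity, needing to restore 50 TB of tier-one systems
print(recovery_capacity_shortfall_tb(total_tb=100, used_tb=80, restore_tb=50))  # -> 30.0 TB short
```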
And of course, there’s the question of what you should recover first. Your disaster recovery plan should include a list of the tier-one services and applications you think you need to restore for the business to get back up and running. Remember to include payroll here: the security workstream takes 21 days on average to come to a close, which will almost certainly span a payroll cycle, and if the incident has not been made public yet, not paying people is a good way to make it public very quickly.
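For illustration, a tiered recovery list in a DR runbook might look something like the sketch below; the services and tier numbers are examples rather than a recommended ordering.

```python
# Example tiered recovery list; services and tiers are illustrative only.
RECOVERY_PLAN = [
    {"service": "Identity (Active Directory)", "tier": 1},
    {"service": "Core networking / VPN",       "tier": 1},
    {"service": "Payroll",                     "tier": 1},  # a ~21-day investigation will span a pay run
    {"service": "Email",                       "tier": 2},
    {"service": "Reporting / BI",              "tier": 3},
]

def restore_order(plan):
    """Return services in the order they should be brought back."""
    return [item["service"] for item in sorted(plan, key=lambda i: i["tier"])]

print(restore_order(RECOVERY_PLAN))
```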
Password management is also key when you think about how much of your business, both physically and virtually, relies on passwords. Avoid putting your password vault on your primary servers, as it will most likely be encrypted along with everything else. Keep it separate, so your people can work as you get back up and running. Resetting credentials is one of the first tasks you’ll be addressing once you’ve restored basic internet connectivity and started rebuilding a rudimentary network.
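As a small illustration of that reset stage, the sketch below generates cryptographically random one-time passwords for a bulk forced reset; the account names are placeholders, and the actual reset would still be pushed through your directory tooling.

```python
# Generate one-time temporary passwords for a bulk credential reset.
# Account names are placeholders; apply the resets via your directory tooling.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#%^*-_"

def temp_password(length: int = 20) -> str:
    """Cryptographically random one-time password for a forced reset."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

accounts = ["j.smith", "a.murphy", "svc-backup"]
resets = {account: temp_password() for account in accounts}

for account, password in resets.items():
    print(f"{account}: {password}  (force change at next logon)")
```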
Here’s the slide deck from our Ransomware Recovery webinar
A Holistic Approach to Resilience
In the wake of a disaster, organisations will find themselves navigating the complex interplay of executive decisions, security investigations, and technical recovery efforts. It’s important that your company is aware of these three workstreams and includes them in its DR plan; finding a way forward after an attack is about so much more than just infrastructure considerations.
Need to plan for the worst-case scenario? We know how. Get in touch.