In praise of cyber response transparency (and in defence of the “breach coach”)

Wired Magazine published an article last week about school cyber attacks in the United States that was wholly denigrating of the role of cyber incident response counsel – “breach coaches.” Wired’s theme was that schools are using their lawyers to deprive parents, students, and the public of information. Wired has inspired this post, though I will say little more about it than “Don’t believe everything you read.” Rather, I will be positive, and explain that transparency is at the center of good cyber incident response and that breach counsel enable transparency through clear, accurate, and timely communication.

We must communicate to manage

Let us start with the object of incident response. Sure, we want to contain and eradicate quickly. Sure, we want to restore services as fast as possible. But without making light of it, I will say that there is lots of “drama” associated with most major cyber incidents today that renders incident response about more than containment and eradication.

Major incidents are visible, high stakes affairs in which reputation and relationships are at stake. You will have many stakeholders descending on you from time zero, and every one of them wants one thing – information. You do not have a lot of that to give them, in the early days at least, but you have got to give them what you can.

In other words, you need to do the right thing and be seen to do the right thing. This means being clear about what has happened and what you are doing about it. It means reporting to law enforcement. It means sharing threat information with peers. It means getting your message out.

This is crisis communication best practice. If smoke is billowing from your house and the public has questions, the public will make its own story about your fire if you say nothing, and any obfuscation risks blowback. You must, as best you can, get your message out.

Let’s get privilege straight

Lawyers love privilege, but we may not do a good enough job at helping the public understand why it is so important, and why it is not inimical to transparency.

There are two types of privilege.

Solicitor-client privilege is the strongest form of privilege. A confidential communication between lawyer and client that relates to the giving or receiving of legal advice (in the broadest sense) is privileged.

Litigation privilege works a little differently, and is quite important for giving a person who is in litigation or who contemplates litigation a “zone of privacy” in which to prepare, strategize and plan free from an adversary’s scrutiny.

Privilege is a powerful tool for organizations because it shields communications from everyone – an adversary in litigation, a freedom of information requester, a regulator.

This is for good reason: privilege allows for good legal advice on complicated, high stakes problems. If litigation is pending or anticipated, it also allows for the adversaries to be adversaries, which contributes to the truth seeking function of adjudication. Privilege is hallowed, and recognized by our courts as central to the rule of law.

Privilege, though, applies to communications, not to facts that have an independent existence. Is your head exploding yet? Let me explain this tricky idea.

Say an incident leads to the discovery of four facts – Fact A, Fact B, Fact C and Fact D. There is a question about whether those four facts prove data exfiltration of a particular set of data. Lawyer and client can communicate with each other to develop an understanding of that legal question. The lawyer can advise the client about what the evidence means, whether inferences can be drawn, and how the evidence is likely to be interpreted by a judge or a regulator. The lawyer may give an answer – “no exfiltration” – but also explain the strengths and weaknesses of taking that position. All the evaluation and advice – the communication – is privileged, but Fact A, Fact B, Fact C, and Fact D are not. In incident response, those facts are normally embodied in the forensic artifacts collected and preserved by the forensic investigator or collected in communications with the threat actor(s). Those artifacts and communications are producible in litigation and producible to a regulator, which allows others to examine them, engage in analysis (that may replicate the analysis that occurred under privilege), and draw their own conclusions. What an adversary or regulator cannot do is piggyback on solicitor-client communications to understand how the lawyer and client viewed all the nuance of the issue.

This is an important point to understand because it answers some unfounded concerns that privilege is a tool of obfuscation. It is not.

Privilege must be respected, though. There’s a now famous case in Canada in which an organization attempted to claim that recorded dialog with a threat actor was privileged because the communication was conveyed to counsel by an expert retained by counsel. The Court rightly held this was overreach. The threat actor dialog itself is fact. The same goes for forensic timelines: the underlying events are facts, and they do not become privileged simply because they are recorded in privileged reports. The framing of a timeline within a report may be privileged, and in litigation this does help put some burden on an adversary to analyze the evidence themselves and develop their own timeline. But it’s unwise to tell a regulator, “I’m not giving you a timeline because the only place it’s recorded is in my privileged report.” Withhold the precise framing of the timeline in your report. Keep any conclusory elements, evaluations, and qualifications confidential, too. But give the regulator the facts. That’s all they want, and it should spare you a pointless privilege dispute.

From the zone of privilege to the public

I explain to our incident response clients that we work with them in a zone of privacy or privilege that is a safe communication zone. It is like a staging area for evidence, where we can sit with evidence, understand it, and determine what is and is not fact. The picture of an incident is formed slowly over time based on investigation. Things that seem the case are often not the case, and assumptions are to be relied upon cautiously.

It is our role, as counsel, to advise the client on what is safe to treat as fact. Once fact, it can be pushed out of the zone of privilege to the public in communications. It is at this point the communication will live on the public record and be used as evidence, so we carefully vet all such communication by asking four questions:

  • Is there any speculation? Are all facts accurately described? Are all facts clearly described?
  • Are there commitments/promises? Are they achievable?
  • Does the communication accurately convey the risk? If it raises alarm or encourages action, is that justified? Or will we cause stress for no good reason?
  • Does the communication reveal anything said under privilege (which can waive privilege)?

Our duty is to our client, and our filter is to protect our client, but it also benefits the public because it ensures that incident communications are clear and reliable. This is hard work, and the heavy scrutiny that always comes later can reveal weaknesses, even in word choice. But by and large, organizations with qualified incident response counsel achieve transparency and engender stakeholder and public understanding and confidence.

Good notification takes time

Any organization whose network is compromised can contain the incident and, an hour later, announce, “If you have ever been employed with us or been a client of ours, your information may have been stolen.” This will almost always be a true statement, but it is also a meaningless and vast over-notification. Good legal counsel lead their clients to investigate.

Investigation takes time. Determining what has been taken, if anything, is the first step. Done well, that alone can take about a week. But that is only the start. Imagine looking at a 453,000-file listing, delivered to you by a threat actor without any file path metadata. The question: who is affected? Your file share is encrypted, so you do not even have readable copies of the files yet.

Is it any wonder that organizations notify weeks and months after they are attacked? You cannot rightly blame the lawyers or their clients for this. It is hard work. If an organization elects to spend six figures and four months on e-discovery to conduct file-level analysis, it will be able to send a letter to each affected individual that sets out a tailored list of exposed data elements. Our regulator in Ontario has called this “the standard,” while at the same time opening the door to more generalized notifications. We are moving now to population-based notifications, while still trying to be meaningful. Consider the following:

All individuals who received service x between date 1 and date 2 are affected. The contact information of all such individuals has been exposed (phone, e-mail and address as provided). About a third of the individuals in this population provided an emergency contact. The identity of this person and their phone number were also exposed.

I am explaining this because time to notify is the easiest thing on which to criticize an organization. Time to notification is visible, and far easier to understand than it is to explain to mass audiences with the kind of descriptions I have given here. Believe me, though, it is a very demanding challenge on which incident response counsel spend significant time and energy with their clients and data processing vendors, all with the aim of giving earlier and more meaningful notifications.

Conclusion

Cyber incident response counsel are essential for effective and transparent incident management. They facilitate clear communication, crucial for stakeholder confidence and the fulfilment of obligations. Privilege, often misunderstood, enables open lawyer-client communication, improving decision-making. It’s not a tool to hide facts. Counsel guide clients through investigations and notifications, ensuring accuracy and avoiding speculation. Notification delays often stem from the complex process of determining breach scope and identifying affected individuals. Counsel help balance speed and quality of notification, serving their clients first, but also the public.

Threat information sharing: why you can do what’s right

It was an honour and pleasure to speak today at the Canadian SecuR&E Forum, a research and education community-building event hosted by CANARIE. My object was to spread the gospel of threat information sharing and debunk some myths about legal privilege as a barrier to it. Here are my slides, and I’ve also included the text of my address below.

Slide one

I am here today as a representative of my profession – the legal profession.

I’m an incident response lawyer or so-called “breach coach.” Lawyers like me are often used in an advisory capacity on major cyber incidents. Insurers encourage this. They feel we add consistency of approach and mitigate downside risk.

I’ve done some very difficult and rewarding things with IT leaders in responding to incidents, and genuinely believe in the value of using an incident response lawyer. But I am also aware of a discomfort with the lawyer’s role, and the discomfort is typically expressed in relation to the topic of threat information sharing.

We often hear organizations say, “The lawyer told us not to share.”

I’m here as a lawyer who is an ally to IT leadership, and to reinforce the very premise of CanSSOC – that no single institution can tackle cybersecurity issues alone.

Here’s my five-part argument in favour of threat information sharing:

  • Organizations must communicate to manage
  • The art is in communicating well
  • Working within a zone of privilege is important
  • But privilege does not protect fact
  • And threat information is fact

My plan is to walk you through this argument, taking a little detour along the way to teach you about the concept of privilege.

Slide 2

Let’s first define what we are talking about – define “threat information.”

NIST is the National Institute of Standards and Technology, an agency of the US Department of Commerce whose cybersecurity framework is something many of your institutions use.

NIST says threat information is, “Any information related to a threat that might help an organization protect itself against a threat or detect the activities of an actor.”

Indicators (of compromise) are pieces of evidence that indicate a network has been attacked: traffic from malicious IP addresses and malware signatures, for example.

“TTPs” are threat actor “tactics, techniques and procedures.” These are behaviours, processes, actions, and strategies used by a threat actor. Of course, if one knows threat actor measures, one can employ countermeasures.

Beyond indicators and TTPs, we have more contextualized information about an incident, information that connects the pieces together and helps give it meaning. It all fits within this definition, however.
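To make this concrete, here is a minimal sketch of what a shareable indicator of compromise can look like as structured data. It is loosely modelled on the STIX 2.1 indicator format used by many sharing communities; the example is not from my talk, and the IP address, identifier and field values are hypothetical, purely for illustration.

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(name: str, pattern: str) -> dict:
    """Build a minimal, STIX 2.1-style indicator object (illustrative only)."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",  # randomly generated, hypothetical identifier
        "created": now,
        "modified": now,
        "name": name,
        "pattern": pattern,                  # the observable fact being shared
        "pattern_type": "stix",
        "valid_from": now,
    }

# A purely hypothetical indicator: traffic to a malicious IP address.
ioc = make_indicator(
    name="C2 traffic to known-bad IP (example)",
    pattern="[ipv4-addr:value = '203.0.113.42']",  # documentation-range address, not a real threat
)

print(json.dumps(ioc, indent=2))
```

Note that everything in an object like this is observable fact – an address, a hash, a timestamp. That is the point to hold onto as the argument continues.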

Slide 3

Argument 1 – we must communicate to manage

Let’s start with the object of incident response. Sure, we want to contain and eradicate quickly. Sure, we want to restore services as fast as possible. Without making light of it, I’ll say that there is lots of “drama” associated with most major cyber incidents today – drama that makes incident response about more than containment and eradication.

Major incidents are visible, high stakes affairs in which reputation and relationships are at stake. You’ll have many, many stakeholders descending on you from time zero, and every one of them wants one thing – information. You don’t have a lot of that to give them, in the early days at least, but you’ve got to give them what you can.

In other words, you need to do the right thing and be seen to do the right thing. This means being clear about what’s happened and what you’re doing about it. It means reporting to law enforcement. And it means sharing threat information with peers.

“We’re stronger together” is the CanSSOC tag line, and it’s bang on. NIST says that Tier 4 or “adaptive” organizations – the most mature in its framework – understand their part in the cyber ecosystem and share threat information with external collaborators. There’s no debate: sharing threat information is part of a widely accepted cybersecurity standard.

Slide 4

Argument 2 – the art is in communicating well

People have a broad right to remain silent under our law.

And anything they say can be used as evidence against them in a court of law.

These are plain truths that are taught to lawyers in first-year constitutional and criminal law classes across the country.

And the right to remain silent ought to be adhered to strictly in some scenarios – when one faces criminal jeopardy, for example.

Incident scenarios are far, far from that.

The most realistic downside scenario in most incidents is getting sued.

In theory, you can avoid civil liability by not being transparent about your bad facts.

In reality, hiding your bad facts is almost always an unwise approach.

This is because bad facts will come out:

  • because you’ll notify individuals affected by a privacy breach in accordance with norms or because it’s legally required; or
  • because you’re a public body subject to FOI legislation.

So you’ve got to do what the communications pros say: get ahead of the issue, control the message and communicate well.

Slide 5

Let’s detour from the argument for a moment to do some important background learning.

What is legal privilege?

Short answer – It is a very helpful tool for incident responders.

It’s a helpful tool because it shields communications from pretty much everyone. Adversaries in litigation are the main concern, but also the public – who, again, has a presumptive right of access to every record in the custody or control of a university.

There are two types of privilege.

Solicitor-client. This is the strongest form of privilege. You see the definition here. Invoking privilege is not as simple as copying your lawyer on a communication. But if you send a communication to a lawyer and your decision-making team at the same time, and your lawyer is a legal advisor to the team, the communication is privileged.

Litigation privilege works a little differently, and is quite important. I specify in engagement letters that my engagement is both as an advisor and “in contemplation of litigation” so reports produced by the investigators we hire are more likely to survive a privilege challenge.

Invoking privilege is why you want to call your incident response counsel at the outset. If the investigator comes in first, you can always have a late-arriving lawyer say that the investigation is now for their purpose and in contemplation of litigation, but that assertion could be questioned given the timing. In other words, the investigation will look operational and routine and not for the very special purposes that support a privilege claim.

Slide 6

Back to the argument

Argument 3 – Working within a zone of privilege is important

Here’s an illustration of the power of privilege and why you want to establish it.

The left-hand column is within the zone of privilege. I’m in that zone. The experts I retain for you are in that zone. And you’re in that zone along with other key decision-makers. We keep the team small so our confidential communication is more secure.

And we can speak freely within the zone. Have a look at the nuanced situation set out in the left-hand column. The forensic investigator can present evidence gathered over hours and hours of work in one clear and cogent report. We can deal with fine points about what that evidence may or may not prove and what you ought to do about it. I’ll tell you where you can and should go, but I’ll also tell you about the frailties in those directions and other options you shouldn’t and won’t take.

None of that need ever see the light of day, and in the right-hand column, in public, you can tell your story in the clearest, plainest and most favorable way possible: “We do not believe there has been any unauthorized access to student and employee personal information.” If plaintiff counsel or anyone else wishes to disprove that, they can’t go to your forensic report for a road map to the evidence and for something to mine for facts that might seal your fate in court. They must gather all the evidence gathered by your investigator themselves, re-do the analysis and then figure out on their own what it means.

Privilege is of powerful benefit.

Slide 7

Argument 4 – privilege doesn’t protect facts

I often hear, “We need to keep things confidential because of privilege.” Let me tell you what that means.

The privilege belongs to the client, not the lawyer. Clients can waive privilege, so they need to keep their privileged communications and documents confidential. Institutions do this all the time, but it’s risky to say, “We’re doing this because our lawyer said so.” That’s arguably an implicit waiver.

The easy rule is, “Don’t publish anything you’ve said to your lawyer or that your lawyer has said to you.” Don’t state it directly. Don’t even hint at it!

The same goes for your forensic investigator. Saying “Our forensic investigator told us this” is also a risk. Just say that you’ve done your investigation and these are the facts, or that you believe this to be the case.

If you do that – if you talk about the facts – you won’t waive privilege. You’ll be using the privilege to derive the facts you publish, and will be safe.

This is what your lawyer is working so hard on in an incident. One of our main roles is to work within that zone of privilege on the evidence and to determine what is and isn’t fact. If it really is fact, and you are in transparency mode, you will get the fact out whether it’s a good fact or a bad fact. And I’ll agonize with you about what that right-hand column should say and make sure it is safe. I’ll ask myself continuously, “If my client gets into a fight later, will that be what is ultimately proven to be the truth?”

Slide 8

Argument 5 – threat information is fact

It is. And if you can convey facts without waiving privilege, you can convey threat information without waiving privilege.

So don’t listen to anyone who tells you that you can’t share threat information because it will waive privilege. It’s not a valid argument.

You’ll have a very clear view of indicators of compromise fairly early in an incident and should share them immediately because their value is time-limited.

It takes longer to identify TTPs, but they are safe to share too because they are factual.

That’s my argument. I’ve been talking tough, but will end with a qualification – a qualification and a challenge!

The qualification: you should be wary of the unstructured sharing of contextualized information, particularly early on in an incident. CISOs call CISOs, Presidents call Presidents – I understand. I get it, and think that the risk of oral conversations with trusted individuals can be low. Nonetheless, this kind of informal sharing is not visible, and it represents a risk that is unknown and unmanaged. I’d rather you bring it into the formal incident response process and do it right. For example, I was part of an incident last year in which CanSSOC took an unprecedented and creative step in bringing together two universities that were simultaneously under attack by the same threat actor so they could compare notes.

This is the challenge, then: how do we – IT leaders, lawyers and CanSSOC together – enable better sharing in a safe manner? There’s a real opportunity to lead the nation on this point, and I welcome it.

Cyber defence basics – Maritime Connections

I was pleased to do a cyber defence basics presentation to privacy professionals attending the Public Service Information Community Connection “Maritime Connections” event yesterday. The presentation (below) is based on recent publications by the New York Department of Financial Services and the Information Commissioner’s Office (UK), as well as the (significant) Coveware Q3 ransomware report.

As I said to the attendees, I am not a technical expert and no substitute for one, but those of us outside of IT and IT security who work in this space (along with the predominantly non-technical management teams we serve) must engage with the key technical concepts underpinning IT security if we are to succeed at cyber defence.

I’ll do an updated version at Saskatchewan Connections next week. Join us!

The role of legal counsel in ransomware response – cyber divergence on display

Two publications released earlier this month illustrate different views on how to structure ransomware response, and in particular on how to structure the involvement of legal counsel.

On Wednesday of last week, the Ontario Ministry of Government Services issued a bulletin entitled “What is Ransomware and How to Prevent Ransomware Attacks” to the broader public sector. It features a preparation and response playbook that will be much appreciated by the hospitals, universities, colleges, school boards and municipalities targeted by the MGS.

The playbook treats ransomware response as primarily a technical problem – i.e., a problem about restoration of IT services. Legal counsel is mentioned in a statement about incident preparation, but is assigned no role in the heart of the response process. Indeed, the MGS suggests that the Information and Privacy Commissioner/Ontario is the source of advice, even “early on” in an incident:

If you are unable to rule out whether or not PII was compromised (which will likely be the case early on in an incident), contact the Privacy Commissioner of Ontario (416) 326-3333.

Contrast this with what Coveware says in its very significant Q3 ransomware trends report that it released on November 4th. Coveware – arguably the best source of ransomware data – explains that data exfiltration threats now feature in 50% of ransomware incidents and that ransom payments are a poor (and becoming poorer) method of preventing threat actors from leaking what they take. Coveware says:

Accordingly, we strongly advise all victims of data exfiltration to take the hard, but responsible steps. Those include getting the advice of competent privacy attorneys, performing an investigation into what data was taken, and performing the necessary notifications that result from that investigation and counsel.  Paying a threat actor does not discharge any of the above, and given the outcomes that we have recently seen, paying a threat actor not to leak stolen data provides almost no benefit to the victim. There may be other reasons to consider, such as brand damage or longer term liability, and all considerations should be made before a strategy is set.

The Coveware view, shared by Canadian cyber-insurers, is that ransomware is primarily a legal and reputational problem, with significant downside legal risks for institutions who do not engage early with legal counsel.

I favor this latter view, and will say quite clearly that it is bad practice to call a privacy regulator about a potentially significant privacy problem before calling a privacy lawyer. A regulator is not an advisor in this context.

This is not a position I take out of self-interest, nor do I believe that lawyers should always be engaged to coordinate incident response. As I’ve argued, the routine use of lawyers as incident coordinators can create problems in claiming privilege when lawyer engagement truly is for the “dominant purpose of existing or anticipated litigation.” My point is that ransomware attacks, especially how they are trending, leave institutions in a legal minefield. Institutions – though they may not know it – have a deep need to involve trusted counsel from the very start.

DFS report shows how to double down on remote access security

On October 15th, the New York State Department of Financial Services issued a report on the July 2020 cybersecurity incident in which a 17-year-old hacker and his friends gained access to Twitter’s account management tools and hijacked over 100 accounts.

The report stresses the critical risk against which social media companies employ their security measures and the simplicity of the hackers’ methods. The DFS raises the link between social media account security and election security and also notes that the S&P 500 lost $135.5 billion in value in 2013 when hackers tweeted false information from the Associated Press’s Twitter account. Despite this risk, the 2020 hackers gained access based on a well-executed but simple social engineering campaign, without the aid of malware, exploits or backdoors.

The hackers gathered intelligence. They impersonated the Twitter IT department and called employees offering to help with VPN problems, which were prevalent following Twitter’s shift to remote work. The hackers directed employees to a fake login page, which allowed them to capture credentials and circumvent multifactor authentication.

The event lasted about 24 hours. The DFS explains that Twitter employed a password reset protocol that required every employee to attend a video conference with a supervisor and manually change their password.

The event and the report are about the remote workforce risk we face today. Twitter had all the components of a good defence in place, but according to the DFS it could have done better given the high consequences of a failure. Here is a summary of some of the DFS recommendations:

  • Employ stricter privilege limitations, with access being re-certified regularly. Following the incident Twitter did just this, even though it apparently slowed down some job functions.
  • While multifactor authentication is a given, the DFS noted, “Another possible control for high-risk functions is to require certification or approval by a second employee before the action can be taken.”
  • The DFS points out that not all multifactor authentication is created equal: “The most secure form of MFA is a physical security key, or hardware MFA, involving a USB key that is plugged into a computer to authenticate users.”
  • The DFS says organizations should establish uniform standards of communications and educate employees about them. Employees should know, for example, exactly how the organization will contact them about suspicious account activity.
  • The DFS endorses “robust” monitoring via security information and event management systems – monitoring in “near real-time.”

These recommendations would make for very strong remote access and account security, and they are worth noting.
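For illustration only, here is a minimal, hypothetical sketch of the “second employee” approval control described in the list above – the idea that a high-risk function should not execute until someone other than the requester signs off. The names, actions and functions are invented, not drawn from the DFS report or from Twitter’s systems.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApprovalRequest:
    """A pending high-risk action awaiting a second set of eyes (illustrative only)."""
    action: str
    requested_by: str
    approved_by: Optional[str] = None

def approve(request: ApprovalRequest, approver: str) -> None:
    # Dual control: the approver must be a different employee than the requester.
    if approver == request.requested_by:
        raise PermissionError("Requester cannot approve their own high-risk action")
    request.approved_by = approver

def execute(request: ApprovalRequest) -> None:
    # The action only proceeds once a second employee has signed off.
    if request.approved_by is None:
        raise PermissionError(f"'{request.action}' requires approval by a second employee")
    print(f"Executing: {request.action} "
          f"(requested by {request.requested_by}, approved by {request.approved_by})")

# Hypothetical usage: treating an account email change as a high-risk function.
req = ApprovalRequest(action="change email on verified account", requested_by="agent_a")
approve(req, approver="agent_b")  # a different employee signs off
execute(req)
```

The control itself is procedural; code like this merely enforces the rule that no single employee can complete a high-risk action alone.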

Report on Investigation of Twitter’s July 15, 2020 Cybersecurity Incident and the Implications for Election Security.