In praise of cyber response transparency (and in defence of the “breach coach”)

Wired Magazine published an article last week about school cyber attacks in the United States that was wholly denigrating of the role of cyber incident response counsel – “breach coaches.” Wired’s theme was that schools are using their lawyers to deprive parents, students, and the public of information. Wired has inspired this post, though I will say little more about it than “Don’t believe everything you read.” Rather, I will be positive, and explain that transparency is at the center of good cyber incident response and that breach counsel enable transparency through clear, accurate, and timely communication.

We must communicate to manage

Let us start with the object of incident response. Sure, we want to contain and eradicate quickly. Sure, we want to restore services as fast as possible. But without making light of it, I will say that most major cyber incidents today come with enough “drama” that incident response is about more than containment and eradication.

Major incidents are visible, high-stakes affairs in which reputation and relationships are on the line. You will have many stakeholders descending on you from time zero, and every one of them wants one thing – information. You do not have a lot of that to give them, in the early days at least, but you have got to give them what you can.

In other words, you need to do the right thing and be seen to do the right thing. This means being clear about what has happened and what you are doing about it. It means reporting to law enforcement. It means sharing threat information with peers. It means getting your message out.

This is crisis communication best practice. If smoke is billowing from your house and the public has questions, the public will make its own story about your fire if you say nothing, and any obfuscation risks blowback. You must, as best you can, get your message out.

Let’s get privilege straight

Lawyers love privilege, but we may not do a good enough job at helping the public understand why it is so important, and why it is not inimical to transparency.

There are two types of privilege.

Solicitor-client privilege is the strongest form of privilege. A confidential communication between lawyer and client that relates to the giving or receiving of legal advice (in the broadest sense) is privileged.

Litigation privilege works a little differently, and is quite important for giving a person who is in litigation or who contemplates litigation a “zone of privacy” in which to prepare, strategize and plan free from an adversary’s scrutiny.

Privilege is a powerful tool for organizations because it shields communications from everyone – an adversary in litigation, a freedom of information requester, a regulator.

This is for good reason: privilege allows for good legal advice on complicated, high-stakes problems. If litigation is pending or anticipated, it also allows for the adversaries to be adversaries, which contributes to the truth-seeking function of adjudication. Privilege is hallowed, and recognized by our courts as central to the rule of law.

Privilege, though, applies to communications, not to facts that have an independent existence. Is your head exploding yet? Let me explain this tricky idea.

Say an incident leads to the discovery of four facts – Fact A, Fact B, Fact C and Fact D. There is a question about whether those four facts prove data exfiltration of a particular set of data. Lawyer and client can communicate with each other to develop an understanding of that legal question. The lawyer can advise the client about what the evidence means, whether inferences can be drawn, and how the evidence is likely to be interpreted by a judge or a regulator. The lawyer may give an answer – “no exfiltration” – but also explain the strengths and weaknesses of taking that position. All the evaluation and advice – the communication – is privileged, but Fact A, Fact B, Fact C, and Fact D are not. In incident response, those facts are normally embodied in the forensic artifacts collected and preserved by the forensic investigator or collected in communications with the threat actor(s). Those artifacts and communications are producible in litigation and producible to a regulator, which allows others to examine them, engage in analysis (that may replicate the analysis that occurred under privilege), and draw their own conclusions. What an adversary or regulator cannot do is piggyback on solicitor-client communications to understand how the lawyer and client viewed all the nuance of the issue.

This is an important point to understand because it answers some unfounded concerns that privilege is a tool of obfuscation. It is not.

The limits of privilege must be respected, though. There’s a now-famous case in Canada in which an organization attempted to claim that recorded dialogue with a threat actor was privileged because the communication was conveyed to counsel by an expert retained by counsel. The Court rightly held this was overreach. The threat actor dialogue itself is fact. The same goes for forensic timelines; they are not privileged merely because they are recorded in privileged reports. In litigation, the privilege that attaches to the report itself does help put some burden on an adversary to analyze the evidence themselves and develop their own timeline. But it’s unwise to tell a regulator, “I’m not giving you a timeline because the only place it’s recorded is in my privileged report.” Withhold the precise framing of the timeline in your report. Keep any conclusory elements, evaluations, and qualifications confidential, too. But give the regulator the facts. That’s all they want, and it should spare you a pointless privilege dispute.

From the zone of privilege to the public

I explain to our incident response clients that we work with them in a zone of privacy or privilege – a safe zone for communication. It is like a staging area, where we can sit with the evidence, understand it, and determine what is and is not fact. The picture of an incident is formed slowly over time based on investigation. Things that seem the case are often not the case, and assumptions are to be relied upon cautiously.

It is our role, as counsel, to advise the client on what is safe to treat as fact. Once fact, it can be pushed out of the zone of privilege to the public in communications. It is at this point the communication will live on the public record and be used as evidence, so we carefully vet all such communication by asking four questions:

  • Is there any speculation? Are all facts accurately described? Are all facts clearly described?
  • Are there commitments/promises? Are they achievable?
  • Does the communication accurately convey the risk? If it raises alarm or encourages action, is that justified? Or will we cause stress for no good reason?
  • Does the communication reveal anything said under privilege (which can waive privilege)?

Our duty is to our client, and our filter is to protect our client, but it also benefits the public because it ensures that incident communications are clear and reliable. This is hard work, and the heavy scrutiny that always comes later can reveal weaknesses, even in word choice. But by and large, organizations with qualified incident response counsel achieve transparency and engender stakeholder and public understanding and confidence.

Good notification takes time

Any organization whose network is compromised can contain the incident, and then an hour later announce, “If you have ever been employed with us or been a client of ours, your information may have been stolen.” This will almost always be a true statement, but it’s also a meaningless and vast over-notification. Good legal counsel lead their clients to investigate.

Investigation takes time. Determining what has been taken, if anything, is the first step. If you do that well, it can take about a week. But that is only the start. Imagine looking at a 453,000-file listing, delivered to you by a threat actor without any file path metadata. The question: who is affected? Your file share is encrypted, so you do not even have readable copies of the files yet.

Is it any wonder that organizations notify weeks or months after they are attacked? You cannot rightly blame the lawyers or their clients for this. It is hard work. If an organization elects to spend six figures and four months on e-discovery to conduct file-level analysis, it will be able to send a letter to each affected individual that sets out a tailored list of exposed data elements. Our regulator in Ontario has called this “the standard,” while at the same time opening the door to more generalized notifications. We are moving now to population-based notifications, while still trying to make them meaningful. Consider the following:

All individuals who received service x between date 1 and date 2 are affected. The contact information of all such individuals has been exposed (phone, e-mail and address as provided). About a third of the individuals in this population provided an emergency contact. The identity and phone number of this person were also exposed.
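
To make the scoping exercise concrete, here is a minimal sketch of how a population of that kind might be pulled from service records. The database, table and column names are hypothetical, and the dates are stand-ins for “date 1” and “date 2” – real incidents rarely hand you records this tidy.

import sqlite3

# Hypothetical service records database; all names below are illustrative only.
SERVICE = "service x"
START, END = "2023-01-01", "2024-01-01"  # stand-ins for "date 1" and "date 2"

conn = sqlite3.connect("service_records.db")
rows = conn.execute(
    """
    SELECT client_id, phone, email, address,
           emergency_contact_name, emergency_contact_phone
    FROM service_records
    WHERE service = ? AND service_date BETWEEN ? AND ?
    """,
    (SERVICE, START, END),
).fetchall()

# De-duplicate by individual and flag who supplied an emergency contact,
# since that sub-population receives an expanded description of exposed data.
affected = {}
for client_id, phone, email, address, ec_name, ec_phone in rows:
    record = affected.setdefault(
        client_id, {"contact": (phone, email, address), "emergency_contact": None}
    )
    if ec_name:
        record["emergency_contact"] = (ec_name, ec_phone)

with_ec = sum(1 for r in affected.values() if r["emergency_contact"])
print(f"{len(affected)} affected individuals; {with_ec} also had an emergency contact exposed")

Even a sketch this simple assumes complete, queryable records – which is precisely what an encrypted file share and a bare file listing take away.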

I am explaining this because time to notify is the easiest thing on which to criticize an organization. Time to notification is visible, and far easier to grasp than the nuances I have described here are to explain to a mass audience. Believe me, though, it is a very demanding challenge on which incident response counsel spend significant time and energy with their clients and data processing vendors, all with the aim of giving earlier and more meaningful notifications.

Conclusion

Cyber incident response counsel are essential for effective and transparent incident management. They facilitate clear communication, crucial for stakeholder confidence and the fulfilment of obligations. Privilege, often misunderstood, enables open lawyer-client communication, improving decision-making. It’s not a tool to hide facts. Counsel guide clients through investigations and notifications, ensuring accuracy and avoiding speculation. Notification delays often stem from the complex process of determining breach scope and identifying affected individuals. Counsel help balance speed and quality of notification, serving their clients first, but also the public.

Perspectives on anonymization report released

On December 18, Khaled El Emam, Anita Fineberg, Elizabeth Jonker and Lisa Pilgram published Perspectives of Canadian privacy regulators on anonymization practices and anonymization information: a qualitative study. It is based on input from all but one Canadian privacy regulator, and includes a great discussion of one of the most important policy issues in Canadian privacy law – What do we do about anonymization given the massive demand for artificial intelligence training data?

The authors stress a lack of precision and consistency in Canadian law. True, the fine parameters of Canadian privacy law are yet to be articulated, but the broad parameters of our policy are presently clear:

  • First, there must be authorization to de-identify personal information. The Canadian regulators whom the authors spoke with were mostly aligned against a consent requirement, though not without qualification. Where there is no express authorization to de-identify without consent (as there is in Ontario’s PHIPA), one gets the impression that a regulator will not imply consent to de-identify data for all purposes and all manner of de-identification.
  • Second, custodians of personal information must be transparent. One regulator said, “I have no sympathy for the point of view that it’s better not to tell people so as not to create any noise. I do not believe that that’s an acceptable public policy stance.” So, if you’re going to sell a patient’s health data to a commercial entity, okay, but you better let patients know.
  • Third, the information must be de-identified in a manner that renders the re-identification risk very low in the context. Lots can be said about the risk threshold and the manner of de-identification, and lots will be said over the next while (one common technical measure of the risk is sketched after this list). The authors recommend that legislators adopt a “code of practice” model for establishing specific requirements for de-identification.
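
For readers who want a sense of what assessing that risk can look like in practice, here is a minimal sketch of one common (and admittedly partial) technical measure – k-anonymity over quasi-identifiers. The records, field names and threshold below are illustrative assumptions on my part, not anything prescribed by the regulators or the report’s authors.

from collections import Counter

# Hypothetical de-identified records: direct identifiers removed,
# quasi-identifiers (year of birth, postal code prefix, sex) retained.
records = [
    {"birth_year": 1980, "fsa": "M5V", "sex": "F"},
    {"birth_year": 1980, "fsa": "M5V", "sex": "F"},
    {"birth_year": 1975, "fsa": "K1A", "sex": "M"},
]

QUASI_IDENTIFIERS = ("birth_year", "fsa", "sex")
K_THRESHOLD = 11  # an assumed policy choice, not a legal standard

def smallest_group(rows, keys):
    # Size of the smallest group of records sharing the same quasi-identifier values.
    groups = Counter(tuple(r[k] for k in keys) for r in rows)
    return min(groups.values())

k = smallest_group(records, QUASI_IDENTIFIERS)
print(f"k = {k}: {'meets' if k >= K_THRESHOLD else 'does not meet'} the assumed threshold")

The arithmetic is the easy part; deciding what counts as “very low in the context” is the contextual judgment the regulators and the authors are grappling with.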

The above requirements can all be derived from existing legislation, as is illustrated well by PHIPA Decision 175 in Ontario, about a custodian’s sale of anonymized personal health information. Notably, the IPC imposed a requirement on the disclosing custodian to govern the recipient entity by way of the data sale agreement, rooting its jurisdiction in the provision that requires safeguarding of personal health information in a custodian’s control. One can question this root, though it is tied to re-identification risk and within jurisdiction in my view.

What’s not in current Canadian privacy legislation is any restriction on the purpose of de-identification, the identity of recipients, or the nature of the recipient’s secondary use. This is a BIG issue that is tied to data ethics. Should a health care provider ever be able to sell its data to an entity for commercial use? Should custodians be responsible for determining whether the secondary use is likely to harm individuals or groups – e.g., based on the application of algorithmic bias?

Bill C-27 (the PIPEDA replacement bill) permits the non-consensual disclosure of de-identified personal information to specific entities for a “socially beneficial purpose” – “a purpose related to health, the provision or improvement of public amenities or infrastructure, the protection of the environment or any other prescribed purpose.” Given that C-27 looks fated to die, Alberta’s Bill 33 may lead the way, and if passed will prohibit Alberta public bodies from disclosing “non-personal information” outside of government for any purpose other than “research and analysis” and “planning, administering, delivering, managing, monitoring or evaluating a program or service” (leaving AI model developers wondering how far they can stretch the concept of “research”).

Both C-27 and Bill 33 impose a contracting requirement akin to that imposed by the IPC in Decision 175. Bill 33, for example, only permits disclosure outside of government if:

(ii) the head of the public body has approved conditions relating to the following: (A) security and confidentiality; (B) the prohibition of any actual or attempted re-identification of the non-personal data; (C) the prohibition of any subsequent use or disclosure of the non-personal data without the express authorization of the public body; (D) the destruction of the non-personal data at the earliest reasonable time after it has served its purpose under subclause (i), unless the public body has given the express authorization referred to in paragraph (C),

and

(iii) the person has signed an agreement to comply with the approved conditions, this Act, the regulations and any of the public body’s policies and procedures
relating to non-personal data.

Far be it from me to solve this complex policy problem, but here are my thoughts:

  • Let’s aim for express authorization to de-identify rather than continuing to rely on a warped concept of implied consent. Express authorization best promotes transparency and predictability.
  • I’m quite comfortable with a generally stated re-identification risk threshold, and wary of binding organizations to a detailed and inaccessible code of practice.
  • Any foray into establishing ethical or other requirements for “research” should respect academic freedom, and have an appropriate exclusion.
  • We need to eliminate downstream accountability for de-identified data of the kind that is invited by the Bill 33 provision quoted above. Custodians don’t have the practical ability to enforce these agreements, and the agreements will therefore invite huge potential liability. Statutes should bind recipients and immunize organizations who disclose de-identified information for a valid purpose from downstream liability.

Do have a read of the report, and keep thinking and talking about these important issues.