Threat information sharing: why you can do what’s right

It was an honour and pleasure to speak today at the Canadian SecuR&E Forum, a research and education community-building event hosted by CANARIE. My object was to spread the gospel of threat information sharing and debunk some myths about legal privilege as a barrier to it. Here are my slides, and I’ve also included the text of my address below.

Slide 1

I am here today as a representative of my profession – the legal profession.

I’m an incident response lawyer or so-called “breach coach.” Lawyers like me are often used in an advisory capacity on major cyber incidents. Insurers encourage this. They feel we add consistency of approach and mitigate downside risk.

I’ve done some very difficult and rewarding things with IT leaders in responding to incidents, and genuinely believe in the value of using an incident response lawyer. But I am also aware of a discomfort with the lawyer’s role, and the discomfort is typically expressed in relation to the topic of threat information sharing.

We often hear organizations say, “The lawyer told us not to share.”

I’m here as a lawyer who is an ally to IT leadership, and to reinforce the very premise of CanSSOC – that no single institution can tackle cybersecurity issues alone.

Here’s my five-part argument in favour of threat information sharing:

  • Organizations must communicate to manage
  • The art is in communicating well
  • Working within a zone of privilege is important
  • But privilege does not protect fact
  • And threat information is fact

My plan is to walk you through this argument, taking a little detour along the way to teach you about the concept of privilege.

Slide 2

Let’s first define what we are talking about – define “threat information.”

NIST is the National Institute of Standards and Technology, an agency of the US Department of Commerce whose cybersecurity framework is something many of your institutions use.

NIST says threat information is, “Any information related to a threat that might help an organization protect itself against a threat or detect the activities of an actor.”

Indicators (of compromise) are pieces of evidence that indicate a network has been attacked: traffic from malicious IP addresses and malware signatures, for example.

“TTPs” are threat actor “tactics, techniques and procedures.” These are behaviours, processes, actions, and strategies used by a threat actor. Of course, if one knows threat actor measures, one can employ countermeasures.

Beyond indicators and TTPs, we have more contextualized information about an incident, information that connects the pieces together and helps give it meaning. It all fits within this definition, however.
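To make this concrete, here is a minimal sketch of what a shareable threat information record might look like, loosely modeled on the STIX 2.1 indicator format. This is my own illustration rather than anything prescribed by NIST, and the field names and values are assumptions chosen for the example.

```python
# Illustrative only: a minimal threat information record, loosely modeled on
# the STIX 2.1 "indicator" object. Field names and values are hypothetical.
import json
import uuid
from datetime import datetime, timezone

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": datetime.now(timezone.utc).isoformat(),
    # An indicator of compromise: traffic to a (hypothetical) malicious IP.
    "name": "Outbound traffic to known command-and-control address",
    "pattern": "[ipv4-addr:value = '198.51.100.23']",
    "pattern_type": "stix",
    # Contextual labels that hint at the threat actor's TTPs.
    "labels": ["malicious-activity", "ransomware", "phishing-initial-access"],
}

print(json.dumps(indicator, indent=2))
```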

Slide 3

Argument 1 – we must communicate to manage

Let’s start with the object of incident response. Sure, we want to contain and eradicate quickly. Sure, we want to restore services as fast as possible. Without making light of it, I’ll say that there is lots of “drama” associated with most major cyber incidents today.

Major incidents are visible, high stakes affairs in which reputation and relationships are at stake. You’ll have many, many stakeholders descending on you from time zero, and every one of them wants one thing – information. You don’t have a lot of that to give them, in the early days at least, but you’ve got to give them what you can.

In other words, you need to do the right thing and be seen to do the right thing. This means being clear about what’s happened and what you’re doing about it. It means reporting to law enforcement. And it means sharing threat information with peers.

We’re stronger together is the CanSSOC tag line, and it’s bang on. NIST says that Tier 4 or “adaptive” organizations – the most mature in its framework – understand their part in the cyber ecosystem and share threat information with external collaborators. There’s no debate: sharing threat information is part of a widely accepted cybersecurity standard.

Slide 4

Argument 2 – the art is in communicating well

People have a broad right to remain silent under our law.

And anything they say can be used as evidence against them in a court of law.

These are plain truths that are taught to lawyers in first-year constitutional and criminal law classes across the country.

And the right to remain silent ought to be adhered to strictly in some scenarios – when one faces criminal jeopardy, for example.

Incident scenarios are far, far from that.

The most realistic downside scenario in most incidents is getting sued.

In theory, you can avoid civil liability by not being transparent about your bad facts.

In reality, hiding your bad facts is almost always an unwise approach.

This is because bad facts will come out:

  • because you’ll notify individuals affected by a privacy breach in accordance with norms or because it’s legally required; or
  • because you’re a public body subject to FOI legislation.

So you’ve got to do what the communications pros say: get ahead of the issue, control the message and communicate well.

Slide 5

Let’s detour from the argument for a moment to do some important background learning.

What is legal privilege?

Short answer – It is a very helpful tool for incident responders.

It’s a helpful tool because it shields communications from pretty much everyone. Adversaries in litigation are the main concern, but also the public – who, again, has a presumptive right of access to every record in the custody or control of a university.

There are two types of privilege.

Solicitor-client. This is the strongest form of privilege. You see the definition here. Invoking privilege is not as simple as copying your lawyer on a communication. But if you send a communication to a lawyer and your decision-making team at the same time, and your lawyer is a legal advisor to the team, the communication is privileged.

Litigation privilege works a little differently, and is quite important. I specify in engagement letters that my engagement is both as an advisor and “in contemplation of litigation” so reports produced by the investigators we hire are more likely to survive a privilege challenge.

Invoking privilege is why you want to call your incident response counsel at the outset. If the investigator comes in first, you can always have a late-arriving lawyer say that the investigation is now for their purpose and in contemplation of litigation, but that assertion could be questioned given the timing. In other words, the investigation will look operational and routine and not for the very special purposes that support a privilege claim.

Slide 6

Back to the argument

Argument 3 – Working within a zone of privilege is important

Here’s an illustration of the power of privilege and why you want to establish it.

The left-hand column is within the zone of privilege. I’m in that zone. The experts I retain for you are in that zone. And you’re in that zone along with other key decision-makers. We keep the team small so our confidential communication is more secure.

And we can speak freely within the zone. Have a look at the nuanced situation set out in the left-hand column. The forensic investigator can present evidence gathered over hours and hours of work in one clear and cogent report. We can deal with fine points about what that evidence may or may not prove and what you ought to do about it. I’ll tell you where you can and should go, but I’ll also tell you about the frailties in those directions and other options you shouldn’t and won’t take.

None of that need ever see the light of day, and in the right-hand column, in public, you can tell your story in the clearest, plainest and most favorable way possible: “We do not believe there has been any unauthorized access to student and employee personal information.” If plaintiff counsel or anyone else wishes to disprove that, they can’t go to your forensic report for a road map to the evidence and for something to mine for facts that might seal your fate in court. They must gather for themselves all the evidence your investigator collected, re-do the analysis and then figure out on their own what it means.

Privilege is of powerful benefit.

Slide 7

Argument 4 – privilege doesn’t protect facts

I often hear, “We need to keep things confidential because of privilege.” Let me tell you what that means.

The privilege belongs to the client, not the lawyer. Clients can waive privilege, so they need to keep their privileged communications and documents confidential. Institutions do this all the time, but it’s risky to say, “We’re doing this because our lawyer said so.” That’s arguably an implicit waiver.

The easy rule is, “Don’t publish anything you’ve said to your lawyer or that your lawyer has said to you.” Don’t state it directly. Don’t even hint at it!

The same goes for your forensic investigator. Saying, “Our forensic investigator told us this,” is also a risk. Just say that you’ve done your investigation and these are the facts, or that you believe this to be the case.

If you do that – if you talk about the facts – you won’t waive privilege. You’ll be using the privilege to derive the facts you publish, and you will be safe.

This is what your lawyer is working so hard on in an incident. One of our main roles is to work within that zone of privilege on the evidence and to determine what is and isn’t fact. If it really is fact, and you are in transparency mode, you will get the fact out whether it’s a good fact or a bad fact. And I’ll agonize with you about what that right-hand column should say and make sure it is safe. I’ll ask myself continuously, “If my client gets into a fight later, will that be what is ultimately proven to be the truth?”

Slide 8

Argument 5 – threat information is fact

It is. And if you can convey facts without waiving privilege, you can convey threat information without waiving privilege.

So don’t listen to anyone who tells you that you can’t share threat information because it will waive privilege. It’s not a valid argument.

You’ll have a very clear view of indicators of compromise fairly early into an incident and should share them immediately because their value is time limited.

It takes longer to identify TTPs, but they are safe to share too because they are factual.
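For a sense of what sharing can look like in practice, here is a minimal sketch of pushing purely factual indicators to a sharing community over HTTPS. The endpoint, token and payload schema are hypothetical assumptions, not any particular platform’s API; the point is that the payload contains only facts – addresses, hashes and observed techniques – and none of the privileged analysis behind them.

```python
# Illustrative sketch only: sharing factual indicators with a peer community.
# The endpoint URL, token and payload schema are hypothetical assumptions.
import requests

SHARING_ENDPOINT = "https://sharing.example.org/api/v1/indicators"  # hypothetical
API_TOKEN = "REPLACE_WITH_COMMUNITY_TOKEN"

payload = {
    # Facts only: indicators of compromise and observed TTPs.
    "indicators": [
        {"type": "ipv4-addr", "value": "198.51.100.23"},
        {"type": "sha256",
         "value": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
    ],
    "ttps": ["phishing for initial access", "credential dumping"],
    # Deliberately omitted: investigator conclusions, legal advice and any
    # other privileged communication.
}

response = requests.post(
    SHARING_ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
print("Shared", len(payload["indicators"]), "indicators:", response.status_code)
```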

That’s my argument. I’ve been talking tough, but will end with a qualification – a qualification and a challenge!

The qualification. You should be wary of the unstructured sharing of information with context, particularly early in an incident: CISOs call CISOs, Presidents call Presidents. I get it, and I think the risk of oral conversations with trusted individuals can be low. Nonetheless, this kind of informal sharing is not visible, and it does represent a risk that is unknown and unmanaged. I’d rather you bring it into the formal incident response process and do it right. For example, I was part of an incident last year in which CanSSOC took an unprecedented and creative step in bringing together two universities that were simultaneously under attack by the same threat actor so they could compare notes.

This is the challenge, then: how do we – IT leaders, lawyers and CanSSOC together – enable better sharing in a safe manner? There’s a real opportunity to lead the nation on this point, and I welcome it.

Court says access parent’s right to information limited by children’s privacy rights

On October 12th of last year the Ontario Superior Court of Justice considered the interplay between an access parent’s right to information under section 20(5) of the Children’s Law Reform Act and the privacy rights granted by the Personal Health Information Protection Act. It held that the right to information is qualified by a child’s best interests, and a privacy right claimed by a child with capacity under PHIPA is a relevant factor.

Section 20(5) of the CLRA says:

The entitlement to parenting time with respect to a child includes the right to visit with and be visited by the child, and includes the same right as a parent to make inquiries and to be given information about the child’s well-being, including in relation to the child’s health and education.

The Court addressed a motion brought by a father for access to his children’s health and counselling files. He had sought access under PHIPA and was denied because the children – both deemed to have capacity – withheld their consent. The father brought a motion in Family Court, relying on section 20(5) and also seeking production of third-party records under the Family Law Rules, arguing the records were relevant to his claims of parental alienation and other parenting issues to be determined by the Court.

The Court read section 20(5) together with section 28(8), a new provision of the CLRA that qualifies the right to information as being “subject to any applicable laws.” It said:

This new statutory reference to a Court being able to “order otherwise” is a specific reminder that the right in 20(5) is not absolute.  Internally, the right must be interpreted through the lens of the best interest principle, as all decisions affecting children are:  see again section 19(a) of the Children’s Law Reform Act; see 24(1); and see also Children’s Lawyer for Ontario v. Ontario (Information and Privacy Commissioner), 2018 ONCA 559 ¶58-61.  

The new, statutory subjugation of the right in section 20(5) externally “to any applicable laws” codifies what was already happening, namely that courts should consider the operation of other laws, like the PHIPA, when considering the scope of the right.  Another example of another “applicable law” that can interact with the right in section 20(5) would be the common law of privilege:  see M.(A.) v. Ryan, 1997 CanLII 403 (SCC), [1997] 1 S.C.R. 157.

The reference to “subjugation” is somewhat misleading given the Court affirmed its power to make an order under the CLRA based on the best interests principle and affirmed that such an order would bind health information custodians despite PHIPA. Section 20(5) is only subjugated to PHIPA in that PHIPA rights are a factor (and arguably a strong factor) in the best interests analysis.

On the facts, the Court held there was no basis for an order under section 20(5) but there was a basis for a limited production order (based on fairness considerations) under the Family Law Rules.

L.S. v. B.S., 2022 ONSC 5796 (CanLII).

Three key issues from the Ontario cyber security Expert Panel report

On October 3rd, Ontario’s cyber security Expert Panel issued its report to the Minister of Public and Business Service Delivery, Kaleed Rasheed.

The Minister said, “The Expert Panel’s recommendations will form the foundation of our cyber security policies and help develop best practices shared across all sectors as well as inform future targeted investments in our cyber capabilities and defences.”

Those recommendations are:

  1. Regarding governance: Ontario should reinforce existing governance structures to enable effective cyber security risk management across the BPS.
  2. Regarding education and training: Ontario should continue to develop diverse and inclusive cyber security awareness and training initiatives across all age-levels of learning, supported by a variety of common and tailored content and hands-on activities.
  3. Regarding communication: Ontario should implement a framework that encourages BPS entities to share information related to cyber security securely amongst each other with ease.
  4. Regarding shared services: Ontario should continue to develop, improve, and expand shared services and contracts for cyber resiliency across the BPS, considering sector-specific needs where required.

Here are three issues of significance to public sector institutions and their insurers.

FIRST, the governance recommendation contemplates more government oversight, including through “a single oversight body, employing a common operating model [and] clearly establishing accountabilities.”

Institutions require more funding to address cyber security risks. This recommendation is positive because it will lay the necessary groundwork.

As suggested by the Expert Panel, the current relationship between government and institutions is somewhat confused. Government is engaged in an informal kind of oversight that lacks effectiveness and can rightly put institutions on guard because its measures are unclear. Institutions will benefit from clear and simple accountabilities and – did I say it already? – the funding to meet those accountabilities.

SECOND, the communication recommendation encompasses threat information sharing, with the Expert Panel stating, “Ontario should establish a unified critical information sharing protocol to ensure quick communication of cyber incidents, threat intelligence, and vulnerabilities amongst BPS organizations.”

This is to rectify what the Expert Panel says is the “unidirectional” flow of threat information, which is reported to government but is not yet “broadly shared across the BPS.” Institutions know that government currently craves the early reporting of threat information, but the perceived benefit is still minimal. The Expert Panel recommendation is positive in that it may lead to their receipt of more timely, more enriched threat information.

THIRD, the shared services recommendation addresses the cyber insurance coverage problem now faced by the public sector. The Expert Panel states:

Ontario should investigate options for establishing a self-funded cyber insurance program to support the delivery of services such as breach coaching, incident response, and recovery to which all BPS organizations can subscribe.

There is a form of self-funded cyber coverage available to various parts of the Ontario public sector through insurance reciprocals. This coverage is expanding, and the role of reciprocals is becoming more important now that the insurance market has become so hard. Primary coverage by reciprocals, even if limited in scope, can make secondary coverage more obtainable for public sector institutions.

The “breach coaching” reference above gives me pause, though I understand it to be indicative of how the role of expert legal counsel in incident response was born out of the cyber insurance market (with the term coined by cyber risk and insurance company NetDiligence, I believe).

Breach coaching is simply expert legal advice by another name. It is funded by cyber insurance for those who have coverage, and insurers have required their insureds to use vetted and approved legal advisors in responding to incidents because they understand the risk mitigating (and cost reducing) value of this specialized legal service. Public sector institutions without coverage bear all the same risks as those with coverage, and without proper advice are at great peril. The need for proper legal advice is one reason why it is so important to solve the public sector coverage problem, though institutions dealing with a major cyber incident should not consider legal advice to be optional.

Recent cyber presentations

Teaching is the best way of learning for some, including me. Here are two recent cyber security presentations that may be of interest:

  • A presentation from last month on “the law of information” that I delivered to participants in the Osgoode PDP program on cyber security
  • Last week’s presentation for school boards – Critical Issues in School Board Cyber Security

If you have questions please get in touch!

ABCA decision on defending allegations about privileged communication

On April 12th, the Court of Appeal of Alberta held that a defendant waived solicitor-client privilege by affirmatively pleading that its counsel had no instructions to agree to a time extension for filing a prospectus.

The defendant faced a lawsuit that alleged its counsel gave a time extension and had the actual authority to do so. The majority judges explained that a party faced with such an allegation about a privileged communication can make a bald denial and safely rest on its privilege. The defendant went further, thereby putting its privileged communications in issue.

PetroFrontier Corp v Macquarie Capital Markets Canada Ltd, 2022 ABCA 136 (CanLII).

Where’s that workplace surveillance bill? More thoughts pending its release

It’s Friday at 4:20pm and I don’t see an Ontario workplace surveillance bill yet, so here are a few more thoughts – one positive, one negative and one neutral.

Positive – Organizations ought to employ “information technology asset management” – a process for governing their network hardware and software. Those organizations with strong asset management practices will have little difficulty identifying how employees are “monitored.” For those who are weak asset managers, the new bill is an invitation to improve and to root out unmanaged applications.

Negative – As I said yesterday, the devil will be in the detail, and the scope of the “monitoring” that is regulated will be key. Monitoring must be defined in a way that does not affect non-routine processes – i.e., audits and investigations. Those raise a different kind of privacy concern, and a notification requirement shouldn’t frustrate an organization’s ability to investigate.

Neutral – Organizations typically keep security controls confidential to protect against behavior we call “threat shifting” – the shifting of tactics to circumvent existing, known controls. I’m doubtful the type of disclosure the bill will require will create a security risk, but it’s an issue to consider when we see the text.

Bring on the bill!

Cyber class action claims at an inflection point

Yesterday, I happily gave a good news presentation on cyber claims legal developments to an audience of insurance defence lawyers and professionals at the Canadian Insurance Claims Managers Association – Canadian Independent Adjusters’ Association – Canadian Defence Lawyers joint session.

It was good news because we’ve had some recent case law developments create legal constraints on pursuing various common claims scenarios, namely:

  • The lost computer, bag or other physical receptacle scenario – always the most benign, with notification alone unlikely to give rise to compensable harm, a trial judgement looking favourably on a one-year credit monitoring offer and proof of causation of actual fraud a long shot at best
  • The malicious outsider scenario – for the time being looking like it will not give rise to moral damages that flow from an intentional wrong (though this will be the subject of a Court of Appeal for Ontario hearing soon in Owsianik)
  • The malicious insider scenario – partly addressed by a rather assertive Justice Perell finding in Thompson

We’re far from done yet, but as I say in the slides below, we’re at the early stages of an inflection point. I also give my cynical and protective practical advice – given the provable harms in the above scenarios flow mainly from the act of notification itself, notify based on a very strong analysis of the facts and evidence; never notify because there’s a speculative risk of unauthorized access or theft. Never a bad point to stress.

The union right of access to information

I’ve done a good deal of enjoyable work on matters relating to a union’s right of access to information – be it under labour law, health and safety law (via union member participation in the health and safety internal responsibility system) or via freedom of information law. Today I had the pleasure of co-presenting to the International Municipal Lawyers Association on the labour law right of access with my colleague from the City of Vaughan, Meghan Ferguson.

Our presentation was about how the labour law right has fared against employee privacy claims. In short, it has fared very well, and arguably better in Ontario than in British Columbia.

I don’t believe the dialogue between labour and management is over yet, however, especially as unions push for greater access at the same time privacy sensitivities are on the rise. The advent of made-in-Ontario privacy legislation could be an impetus for a change, not because it is likely to provide employees with statutory privacy rights as much as because the new legislation could apply directly to unions. So stay tuned, and in the interim please enjoy the slides below.

The current state of FOI

Here is a deck I just put together for The Osgoode Certificate in Privacy & Cybersecurity Law that gives a high-level perspective on the state of FOI, in particular given (a) the free flow of information that can eviscerate practical obscurity and (b) the serious cyber threat that’s facing our public institutions. As I said in the webinar itself, I’m so pleased that Osgoode PDP has integrated an FOI unit into its privacy and cyber program given it is such a driver of core “information law.”

For related content see this short paper, Threat Exchanges and Freedom of Information Legislation, 2019 CanLIIDocs 3716. And here’s a blog post from the archives, with some good principled discussion, that I refer to – Principles endorsed in Arar secrecy decision.

Tinker-ing with Machine Learning: The Legality and Consequences of Online Surveillance of Students

I’ve had a long-time interest in threat assessment and its application by educational institutions in managing the risk of catastrophic physical violence, though it has been a good ten years since the major advances in Canadian institutional policy. Here is a pointer to a journal article about an apparent new United States trend – automated monitoring of online and social media posts for threat assessment purposes.

Author Amy B. Cyphert starts with an illustrative scenario that’s worth quoting in full:

In 2011, a seventeen–year–old named Mishka, angry that his friends had recently been jumped in a fight, penned a Facebook post full of violence, including saying that his high school was “asking for a [expletive] shooting, or something.” Friends saw the post and alerted school officials, who contacted the police. By the time psychologist Dr. John Van Dreal, who ran the Safety and Risk Management Program for Mishka’s Oregon public school system, arrived, Mishka was in handcuffs. Mishka and his classmates were lucky: their school system employed a risk management program, and Dr. Van Dreal was able to help talk with Mishka about what caused him to write the post. Realizing that Mishka had no intention of harming anyone, Dr. Van Dreal helped Mishka avoid being charged with a criminal offense. Dr. Van Dreal also arranged for him to attend a smaller school, where he found mentors, graduated on time, and is today a twenty–five–year–old working for a security firm.

Had Mishka’s story happened today, just eight short years later, it might have looked very different. First, instead of his friends noticing his troubled Facebook post and alerting his school, it might have been flagged by a machine learning algorithm developed by a software company that Mishka’s school paid tens of thousands of dollars to per year. Although Mishka’s post was clearly alarming and made obvious mention of possible violence, a post flagged by the algorithm might be seemingly innocuous and yet still contain terms or features that the algorithm had determined are statistically correlated with a higher likelihood of violence. An alert would be sent to school officials, though the algorithm would not necessarily explain what features about the post triggered it. Dr. Van Dreal and the risk management program? They might have been cut in order to pay for the third-party monitoring conducted by the software company. A school official would be left to decide whether Mishka’s post warranted some form of school discipline, or even a referral to the authorities.

Cyphert raises good questions about the problem of bias associated with algorithmic identification and about the impact of monitoring and identification on student expression, privacy and equality rights.

My views are quite simple.

I set aside algorithmic bias as a fundamental concern because the baseline (traditional threat assessment) is not devoid of its own problems of bias; technology could, at least in theory, lead to more fair and accurate assessments.

My main concern, rather, is with efficacy. Nobody disputes that schools and higher education institutions should passively receive threat reports from community members. My questions are these: Has the accepted form of surveillance failed? What is the risk passive surveillance will fail? How will it fail? To what degree? Does that risk call for a more aggressive, active monitoring solution? Is there an active monitoring solution that is likely to be effective, accounting for concerns about bias?

If active internet monitoring cannot be shown to be reasonably necessary, however serious the problem of catastrophic physical violence, I question whether it can be either legally justifiable or required in order to meet the standard of care. Canadian schools and institutions that adopt new threat surveillance technology because it may be of benefit, without asking the critical questions above, may invite a new standard of care with tenuous underpinnings.

Cyphert, Amy B. (2020) “Tinker-ing with Machine Learning: The Legality and Consequences of Online Surveillance of Students,” Nevada Law Journal: Vol. 20, Iss. 2, Article 4.
Available at: https://scholars.law.unlv.edu/nlj/vol20/iss2/4