Man CA – Police can identify driver of rental car via agency

On April 15th, the Court of Appeal for Manitoba held that an accused had no reasonable expectation of privacy in information that a rental car agency provided to the police without a warrant.

The police were investigating a fatal shooting. They knew the shooter had been in a rental car that belonged to a specific agency. When the police asked, the agency identified the co-accused as the renter and the accused as an authorized driver. It also provided their cell phone numbers, driver's licence numbers and credit card numbers.

The Supreme Court of Canada decision in Spencer dictates that the PIPEDA allowance for volunteering information to the police does not vitiate one’s expectation of privacy for the purpose of Charter analysis. The Court of Appeal acknowledged this, and as in Spencer, it also held that contract language allowing for the disclosure of personal information as “required or permitted by law” was “of no real assistance.”

However, the Court of Appeal distinguished Spencer on other grounds. Its decision turns on the following key factors:

  • the rental agreement allowed the agency to share information with law enforcement “to take action regarding illegal activities or violations of terms of service”
  • section 22 of the Manitoba Highway Traffic Act requires agencies to keep a registry of renters that is open to public inspection (even though the registry is to include “particulars of the [renter’s] driver’s licence”)
  • the overall context – i.e., that driving is a highly regulated activity, with one’s identity as an operator of a vehicle being something that is widely known and ought to be widely knowable

Privacy advocates will take issue with the Court’s reliance on the rental agreement term, though the case does rest on two other significant factors, including a provision of Manitoba law that the accused did not challenge. On a quick look, I see that Saskatchewan has the same provision.

R v Telfer, 2021 MBCA 38 (CanLII).

Ontario BPS cyber expert panel raises alarm

Last autumn, the Ontario government struck an expert panel of cyber advisors. Among other things, it gave the panel a mandate to “assess and identify common and sector-specific cyber security themes and challenges encountered by Broader Public Sector (BPS) agencies and service delivery partners in Ontario.”

The panel quickly got to work, and in late 2020 gathered feedback from panel members and BPS stakeholders to produce an interim report under the name of its Chair, Robert Wong. The interim report is as unsurprising as it is alarming, speaking to wide-ranging maturity levels derived from under-resourcing as well as failures of governance. It includes characterizations of well-understood governance challenges in the university, school board and health care sectors. On universities, for example, the Chair reports:

Even in institutions with relatively strong and mature corporate governance practices, there are still significant challenges to effectively manage cyber security risks that result from competing priorities and inconsistent application of oversight and policies. For example, funding in higher education comes from various sources and is allocated based on various criteria. Some university research groups that have successfully secured grants or private sponsorship dollars often have a sense of entitlement and feel that because it is their money, they get to call the shots and ignore cyber security concerns when they procure technology tools. Why don’t universities impose the same cyber security requirements on their researchers as they do on other faculty and staff?

Notably, the Chair says, “A regional-based shared-services model may be the only viable option for the smaller players to be able to afford and gain access to the limited availability of technical expertise in the marketplace.”

He also makes the following two interim recommendations, one to government and another to BPS entities themselves:

1. That the National Institute of Standards and Technology (NIST) Cybersecurity Framework be endorsed by the Government of Ontario for the Broader Public Sector’s cyber security practices. If an entity has already adopted a cyber security framework other than that of NIST, the expectation is that they map the framework they are using to the NIST framework to ensure alignment and consistency. Understanding that BPS entities vary in size and risk-profile, it is reasonable to expect that the breadth and depth to which the NIST Cybersecurity Framework is implemented will also vary accordingly, following a risk-based approach. To assist small- and medium-sized organizations in adopting and implementing the NIST framework, the Canadian Centre for Cyber Security’s “Baseline Cyber Security Controls for Small and Medium Organizations” is a useful guide that provides the fundamental requirements for an effective cyber security practice that aligns with the NIST framework.

2. That all BPS entities implement a Cyber Security Education and Awareness Training Program. The content of the training materials shall be maintained to ensure currency of information. New employees shall receive the training immediately after joining the company as part of their orientation program, and all existing employees shall receive refresher training on an annual basis, at a minimum. Information Technology and cyber security specialists shall receive regular cyber security technical training to ensure their skills are kept current. Specialized educational materials may be developed that would be appropriate for boards of directors, senior executives and any other key decision-makers. Effective management of cyber security risks requires the efforts and commitment of everyone and cannot simply be delegated to the cyber security professionals. A strong “tone-at-the-top” is a critical success factor to strengthen the cyber security resilience of BPS service delivery partners.

The panel is not a standard-setting entity, but the second recommendation does establish something toward which BPS entities now ought to strive. Of course, this raises the question of resourcing. Minister Lisa Thompson’s response to the interim report suggests that the government’s assistance will be indirect, via the Cyber Security Centre of Excellence’s learning portal.

Cyber Risks and M&A Transactions

We have just posted all the content for our BLG series “Privacy & Cyber Risks, Trends & Opportunities for Business.” See here for some very good content by our privacy and data security team.

Here is a direct link to our most recent webinar, which I delivered together with my partner Patrice Martin. It was very rewarding to work with and learn from Patrice, a well-established technology industry and transactions lawyer.

Enjoy. Learn. Get in touch.

When it happens, will you be ready? How to excel in handling your next cyber incident

I like speaking about incident response because there are so many important practical points to convey. Every so often I re-consolidate my thinking on the topic and do up a new slide deck. Here is one such deck from this week’s presentation at the Canadian Society of Association Executives Winter Summit. It includes an adjusted four-step description of the response process that I’m content with.

We’ve been having some team discussions over here about how incident response plans can be horribly over-built and unusable. I made the point in presenting this that one could take the four-step model set out in this deck, add a modest amount of “meat” to the process (starting with assigning responsibilities) and append some points on how specific scenarios might be handled, based on simple discussion if not a bona fide tabletop exercise.

Preparing for a cyber incident isn’t and shouldn’t be hard, and simple guidance is often most useful for dealing with complex problems.

The current state of FOI

Here is a deck I just put together for The Osgoode Certificate in Privacy & Cybersecurity Law that gives a high-level perspective on the state of FOI, in particular given (a) the free flow of information that can eviscerate practical obscurity and (b) the serious cyber threat that’s facing our public institutions. As I said in the webinar itself, I’m so pleased that Osgoode PDP has integrated an FOI unit into its privacy and cyber program given it is such a driver of core “information law.”

For related content see this short paper, Threat Exchanges and Freedom of Information Legislation, 2019 CanLIIDocs 3716. And here’s a blog post from the archives with some good principled discussion that I refer to – Principles endorsed in Arar secrecy decision.

Alberta OIPC finds Blackbaud incident gives rise to RROSH

Hat tip to my good colleague Francois Joli-Coeur, who let our group know yesterday that the OIPC Alberta has issued a number of breach notification decisions about the Blackbaud incident, finding in each one that it gave rise to a “real risk of significant harm” that warrants notification and reporting under Alberta PIPA.

Blackbaud is a cloud service provider to organizations engaged in fundraising. It suffered a ransomware incident last spring in which hackers exfiltrated the personal information of donors and educational institution alumni. The true scope of the incident is unknown but likely large, affecting millions of individuals across the globe.

Blackbaud issued notably strong communications that de-emphasized the risk of harm. Its assessment rested primarily on its payment of a ransom, assurances by the threat actors that they would delete all data in exchange for payment, and its ongoing dark web searches. Most affected institutions (Blackbaud clients) notified anyway.

By my count, the OIPC issued seven breach notification decisions about the incident late last year, each time finding a “real risk.” In a decision involving an American college with donors or alumni in Alberta, the OIPC said:

In my view, a reasonable person would consider the likelihood of significant harm resulting from this incident is increased because the personal information was compromised due to a deliberate unauthorized intrusion by a cybercriminal. The Organization reported that the cybercriminal both accessed and stole the personal information at issue. The Organization can only assume that cybercriminal did not or will not misuse, disseminate or otherwise make available publicly the personal information at issue.

This is not surprising, but tells us how the OIPC feels about the assurance gained from paying a ransom to recover stolen data.

See e.g. P2020-ND-201 (File #017205).

BCCA – Open courts principle does not provide for “automatic and immediate” access to court records

On December 9th, the Court of Appeal for British Columbia rejected a media challenge that alleged that the Court’s access policy violates section 2(b) of the Canadian Charter of Rights and Freedoms because it precludes wholly unfettered inspection of court records. The Court held that the Charter guarantees no such right, which would be inconsistent with a court’s responsibility for supervising the handling of its records.

The policy that the applicants challenged requires those seeking access to court records in criminal appeals to fill out a form. Based on the content of completed forms, the Registrar may refer the request to the Chief Justice, who may seek input from the parties. In practice, if parties who are consulted don’t agree that the Court should provide access, those seeking access must file a formal application for access.

The media brought its application after the Court denied administrative access to records (filed in an application for bail pending appeal) that involved the investigation of a police officer for sexual misconduct. The media argued that the policy reverses the burden of justification provided for by Dagenais/Mentuck.

Chief Justice Bauman disagreed, stating:

Unfettered public access to court records is not the promise of the open court principle. That access is subject to supervision by the court, in recognition of the need to protect social values of superordinate importance. Judges have the discretion to order restrictions on access, exercised within the boundaries set by the principles of the Charter.

There is also nothing unlawful, Chief Justice Bauman held, in requiring requesters to confront a matter of administration (which was not associated with any proven material delay) in order to relieve the parties to a proceeding from preemptively seeking a sealing order.

R. v. Moazami, 2020 BCCA 350 (CanLII).

Tinker-ing with Machine Learning: The Legality and Consequences of Online Surveillance of Students

I’ve had a long-standing interest in threat assessment and its application by educational institutions in managing the risk of catastrophic physical violence, though it has been a good ten years since the major advances in Canadian institutional policy. Here is a pointer to a journal article about an apparent new United States trend – automated monitoring of online and social media posts for threat assessment purposes.

Author Amy B. Cyphert starts with an illustrative scenario that’s worth quoting in full:

In 2011, a seventeen–year–old named Mishka, angry that his friends had recently been jumped in a fight, penned a Facebook post full of violence, including saying that his high school was “asking for a [expletive] shooting, or something.” Friends saw the post and alerted school officials, who contacted the police. By the time psychologist Dr. John Van Dreal, who ran the Safety and Risk Management Program for Mishka’s Oregon public school system, arrived, Mishka was in handcuffs. Mishka and his classmates were lucky: their school system employed a risk management program, and Dr. Van Dreal was able to help talk with Mishka about what caused him to write the post. Realizing that Mishka had no intention of harming anyone, Dr. Van Dreal helped Mishka avoid being charged with a criminal offense. Dr. Van Dreal also arranged for him to attend a smaller school, where he found mentors, graduated on time, and is today a twenty–five–year–old working for a security firm.

Had Mishka’s story happened today, just eight short years later, it might have looked very different. First, instead of his friends noticing his troubled Facebook post and alerting his school, it might have been flagged by a machine learning algorithm developed by a software company that Mishka’s school paid tens of thousands of dollars per year. Although Mishka’s post was clearly alarming and made obvious mention of possible violence, a post flagged by the algorithm might be seemingly innocuous and yet still contain terms or features that the algorithm had determined are statistically correlated with a higher likelihood of violence. An alert would be sent to school officials, though the algorithm would not necessarily explain what features about the post triggered it. Dr. Van Dreal and the risk management program? They might have been cut in order to pay for the third-party monitoring conducted by the software company. A school official would be left to decide whether Mishka’s post warranted some form of school discipline, or even a referral to the authorities.

Cyphert raises good questions about the problem of bias associated with algorithmic identification and about the impact of monitoring and identification on student expression, privacy and equality rights.

My views are quite simple.

I set aside algorithmic bias as a fundamental concern because the baseline (traditional threat assessment) is not devoid of its own problems of bias; technology could, at least in theory, lead to more fair and accurate assessments.

I also put my main concern on the matter of efficacy. Nobody disputes that schools and higher education institutions should passively receive threat reports from community members. My questions are these: Has the accepted form of surveillance failed? What is the risk that passive surveillance will fail? How will it fail? To what degree? Does that risk call for a more aggressive, active monitoring solution? Is there an active monitoring solution that is likely to be effective, accounting for concerns about bias?

If active internet monitoring cannot be shown to be reasonably necessary, however serious the problem of catastrophic physical violence, I question whether it can be either legally justifiable or required in order to meet the standard of care. Canadian schools and institutions that adopt new threat surveillance technology because it may be of benefit, without asking the critical questions above, may invite a new standard of care with tenuous underpinnings.

Cyphert, Amy B. (2020) “Tinker-ing with Machine Learning: The Legality and Consequences of Online Surveillance of Students,” Nevada Law Journal: Vol. 20 : Iss. 2 , Article 4.
Available at: https://scholars.law.unlv.edu/nlj/vol20/iss2/4

The Five Whys, the discomfort of root cause analysis and the discipline of incident response

Here is a non-law post to pass on some ideas about root cause analysis, The Five Whys, and incident response.

This is inspired by having finished reading The Lean Startup by Eric Ries. It’s a good book end-to-end, but Ries’ chapter on adaptive organizations and The Five Whys was most interesting to me – inspiring even!

The Five Whys is a well-known analytical tool that supports root cause analysis. Taiichi Ohno, the father of the Toyota Production System, described it as “the basis of Toyota’s scientific approach.” By asking why a problem has occurred five times – thereby probing five causes deep – Ohno says, “the nature of the problem as well as its solution becomes clear.” Pushing to deeper causes of a failure is plainly important; if only the surface causes of a failure are addressed, the failure is near certain to recur.
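For readers who think in code, the linear chain of causes that Ohno describes can be sketched in a few lines of Python. Everything here – the class, its fields and the example incident – is illustrative only, and is not drawn from Ohno, Ries or any standard tooling:

```python
# A minimal sketch of a Five Whys record, assuming a simple linear chain of causes.
# All names here are illustrative, not from any library or methodology text.

from dataclasses import dataclass, field


@dataclass
class FiveWhysAnalysis:
    problem: str
    # Each entry answers "why?" for the entry before it (or for the problem itself).
    causes: list = field(default_factory=list)

    def ask_why(self, cause: str) -> None:
        """Record the next layer of causation, probing at most five causes deep."""
        if len(self.causes) >= 5:
            raise ValueError("Five Whys probes five causes deep; stop and act on the chain.")
        self.causes.append(cause)

    @property
    def root_cause(self):
        """The deepest cause recorded so far - the target of the most significant fix."""
        return self.causes[-1] if self.causes else None


# Hypothetical example: a service outage probed five levels deep.
analysis = FiveWhysAnalysis("Website went down during a product launch")
for cause in [
    "The database server ran out of connections",        # why did the site go down?
    "A new feature opened connections without closing",  # why did it run out?
    "The code was not reviewed for resource handling",   # why was the bug shipped?
    "The review checklist omits resource handling",      # why wasn't it reviewed?
    "No one owns keeping the checklist current",         # why does the checklist omit it?
]:
    analysis.ask_why(cause)

print(analysis.root_cause)
```

Note how the deepest “why” lands on a process and ownership failure rather than the technical symptom – which is exactly the point of probing past surface causes.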

Ries, in a book geared to startups, explains how to use The Five Whys as an “automatic speed regulator” in businesses that face failures in driving rapidly to market. The outcome of The Five Whys process, according to Ries, is to make a “proportional” investment in corrections at each of the five layers of the causal analysis – proportional in relation to the significance of the problem.

Of course, root cause analysis is part of security incident response. The National Institute of Standards and Technology suggests that taking steps to prevent recurrences is part of both the eradication and recovery phase and the post-incident phase. My own experience is that root cause analysis in incident response is often done poorly – with remedial measures almost always targeted at surface-level causes. What I did not understand until reading Ries is that conducting the kind of good root cause analysis associated with The Five Whys is HARD.

Ries explains that conducting root cause analysis without a strong culture of mutual trust can devolve into The Five Blames. He gives some good tips on how to implement The Five Whys despite this challenge: establishing norms around accepting the first mistake, starting with less than the full analytical process and using a “master” from the executive ranks to sponsor root cause analysis.

From my perspective, I’ll now expect a little less insight out of clients who are in the heat of crises. It may be okay to go a couple levels deep while an incident is still live and while some process owners are not even apprised of the incident – just deep enough to find some meaningful resolutions to communicate to regulators and other stakeholders. It may be okay to tell these stakeholders “we will [also] look into our processes and make appropriate improvements to prevent a recurrence” – text frequently proposed by clients for notification letters and reports.

What clients should do, however, is commit to conducting good root cause analysis as part of the post-incident phase:

  • Write The Five Whys into your incident response policy.
  • Stipulate that a meeting will be held.
  • Stipulate that everyone with a share of the problem will be invited.
  • Commit to making a proportional investment to address each identified cause.

Ries would lead us to believe that this will be both unenjoyable and invaluable – good reason to use your incident response policy to help it become part of your organization’s discipline.

Federal Court of Appeal – litigation database privileged, no production based on balancing

On October 20th, the Federal Court of Appeal set aside an order that required the federal Crown to disclose the field names it had used in its litigation database along with the rules used to populate the fields. It held the order infringed the Crown’s litigation privilege.

The case management judge made the order in a residential schools abuse class action. The Crown had produced approximately 50,000 documents, with many more to come. The plaintiffs sought the fields and rules (and not the data in the fields) to facilitate their review. The case management judge, though acknowledging litigation privilege, viewed the fields and rules as less revealing than the data in the fields and ordered production in the name of efficient procedure.

The Court of Appeal held that the case management judge erred because they “subordinated the Crown’s substantive right to litigation privilege to procedural rules and practice principles.” It also held, “a party attempting to defeat litigation privilege must identify an exception to litigation privilege and not simply urge the Court to engage in a balancing exercise on a case-by-case basis.”

Canada v. Tk’emlúps te Secwépemc First Nation, 2020 FCA 179 (CanLII).