The union right of access to information

I’ve done a good deal of enjoyable work on matters relating to a union’s right of access to information – be it under labour law, health and safety law (via union member participation in the health and safety internal responsibility system) or freedom of information law. Today I had the pleasure of co-presenting to the International Municipal Lawyers Association on the labour law right of access with my colleague from the City of Vaughan, Meghan Ferguson.

Our presentation was about how the labour law right has fared against employee privacy claims. In short, it has fared very well, and arguably better in Ontario than in British Columbia.

I don’t believe the dialogue between labour and management is over yet, however, especially as unions push for greater access at the same time as privacy sensitivities are on the rise. The advent of made-in-Ontario privacy legislation could be an impetus for change, not so much because it is likely to provide employees with statutory privacy rights as because the new legislation could apply directly to unions. So stay tuned, and in the interim please enjoy the slides below.

The current state of FOI

Here is a deck I just put together for The Osgoode Certificate in Privacy & Cybersecurity Law that gives a high-level perspective on the state of FOI, in particular given (a) the free flow of information that can eviscerate practical obscurity and (b) the serious cyber threat that’s facing our public institutions. As I said in the webinar itself, I’m so pleased that Osgoode PDP has integrated an FOI unit into its privacy and cyber program given it is such a driver of core “information law.”

For related content see this short paper, Threat Exchanges and Freedom of Information Legislation, 2019 CanLIIDocs 3716. And here’s a blog post from the archives with some good principled discussion that I refer to – Principles endorsed in Arar secrecy decision.

Tinker-ing with Machine Learning: The Legality and Consequences of Online Surveillance of Students

I’ve had a longstanding interest in threat assessment and its application by educational institutions in managing the risk of catastrophic physical violence, though it has been a good ten years since the major advances in Canadian institutional policy. Here is a pointer to a journal article about an apparent new United States trend – automated monitoring of online and social media posts for threat assessment purposes.

Author Amy B. Cyphert starts with an illustrative scenario that’s worth quoting in full:

In 2011, a seventeen-year-old named Mishka, angry that his friends had recently been jumped in a fight, penned a Facebook post full of violence, including saying that his high school was “asking for a [expletive] shooting, or something.” Friends saw the post and alerted school officials, who contacted the police. By the time psychologist Dr. John Van Dreal, who ran the Safety and Risk Management Program for Mishka’s Oregon public school system, arrived, Mishka was in handcuffs. Mishka and his classmates were lucky: their school system employed a risk management program, and Dr. Van Dreal was able to help talk with Mishka about what caused him to write the post. Realizing that Mishka had no intention of harming anyone, Dr. Van Dreal helped Mishka avoid being charged with a criminal offense. Dr. Van Dreal also arranged for him to attend a smaller school, where he found mentors, graduated on time, and is today a twenty-five-year-old working for a security firm.

Had Mishka’s story happened today, just eight short years later, it might have looked very different. First, instead of his friends noticing his troubled Facebook post and alerting his school, it might have been flagged by a machine learning algorithm developed by a software company that Mishka’s school paid tens of thousands of dollars to per year. Although Mishka’s post was clearly alarming and made obvious mention of possible violence, a post flagged by the algorithm might be seemingly innocuous and yet still contain terms or features that the algorithm had determined are statistically correlated with a higher likelihood of violence. An alert would be sent to school officials, though the algorithm would not necessarily explain what features about the post triggered it. Dr. Van Dreal and the risk management program? They might have been cut in order to pay for the third-party monitoring conducted by the software company. A school official would be left to decide whether Mishka’s post warranted some form of school discipline, or even a referral to the authorities.

Cyphert raises good questions about the problem of bias associated with algorithmic identification and about the impact of monitoring and identification on student expression, privacy and equality rights.

My views are quite simple.

I set aside algorithmic bias as a fundamental concern because the baseline (traditional threat assessment) is not devoid of its own problems of bias; technology could, at least in theory, lead to more fair and accurate assessments.

My main concern, rather, is efficacy. Nobody disputes that schools and higher education institutions should passively receive threat reports from community members. My questions are these: Has the accepted form of surveillance failed? What is the risk passive surveillance will fail? How will it fail? To what degree? Does that risk call for a more aggressive, active monitoring solution? Is there an active monitoring solution that is likely to be effective, accounting for concerns about bias?

If active internet monitoring cannot be shown to be reasonably necessary, however serious the problem of catastrophic physical violence, I question whether it can be either legally justifiable or required in order to meet the standard of care. Canadian schools and institutions that adopt new threat surveillance technology because it may be of benefit, without asking the critical questions above, may invite a new standard of care with tenuous underpinnings.

Cyphert, Amy B. (2020) “Tinker-ing with Machine Learning: The Legality and Consequences of Online Surveillance of Students,” Nevada Law Journal: Vol. 20: Iss. 2, Article 4.
Available at: https://scholars.law.unlv.edu/nlj/vol20/iss2/4

BCCA denies access to total costs spent on a litigation matter

On August 21st, the Court of Appeal for British Columbia held that a requester had not rebutted the presumption of privilege that applied to the total amount spent by government in an ongoing legal dispute. 

The Court first held that the presumptive privilege for total legal costs recognized by the Supreme Court of Canada in Maranda v Richer applies in the civil context. Then, in finding the requester had not rebutted the privilege, the Court engaged in detailed discussion about how the timing of the request and the surrounding context will weigh in the analysis.

The Court’s analysis is as complex as it is lengthy. Ultimately, the outcome rested most heavily on (a) the timing of the request (early into trial), (b) the identity of the requester (who was a party) and (c) the degree of information about the matter available to the public (which was high). The Court felt these factors supported the making of strong enough inferences about confidential solicitor-client communications that sustaining privilege was warranted.

More generally, the decision stresses the presumption of privilege and associated onus of proof. Despite Maranda, it is easy to think that total legal fees spent on a matter are accessible subject to the privilege holder’s burden of justification. Precisely the opposite is true.

British Columbia (Attorney General) v. Canadian Constitution Foundation, 2020 BCCA 238 (CanLII).

Arbitration board dismisses spoliation motion

On May 6th, the Ontario Grievance Settlement Board dismissed a union motion for the ultimate spoliation remedy – granting of a grievance based on an abuse of process.

The Union made its motion in a seemingly hard-fought discipline and discharge case. The Union’s pursuit of electronically stored information “to review the life cycle of certain documents that were exhibits in order to test the integrity and reliability of the documents” began after the employer had put in its case through 40 days of witness testimony. The ESI motion itself took 13 days, and at some point the employer agreed to conduct a forensic examination of certain data. Unfortunately, just before it was about to pull the data, three computers were wiped as part of a routine hardware renewal process. Oops.

Based on two more hearing days, the Board held that the destruction of the data was inadvertent and not even negligent. Arbitrator Petryshen said:

It is not surprising that the Employer or FIT did not arrange for the imaging of the three bailiff computers prior to September of 2017 because no one considered that there was a risk of losing that data.  Although management at the OTO unit and FIT knew that government computers were replaced every four years, it was reasonable for OTO management to expect that they would be notified when the computers in OTO unit were about to be refreshed. 

Although this is quite forgiving, Arbitrator Petryshen’s finding that “the granting of grievances due to a loss of potentially relevant documents is an extraordinary remedy” is quite consistent with the prevailing law. In 2006, the Court of Appeal for Ontario quashed an arbitration award that allowed a grievance based on an employer’s inadvertent destruction of relevant evidence, and the Court of Appeal for Alberta’s leading decision in Black & Decker says that even negligent destruction of relevant evidence will not amount to an abuse of process.

Ontario Public Service Employees Union (Pacheco) v Ontario (Solicitor General), 2020 CanLII 38999 (ON GSB).

Let’s help our public health authorities by giving them data

This was not the title of the panel I sat on at the Public Service Information Community Connection virtual “confab” today, though it does capture the view that I attempted to convey.

John Wunderlich moderated a good discussion that involved Frank Work, Ian Walsh and me. When I haven’t yet formed ideas on a subject, I prepare by creating written remarks, which are typically more lucid than what ends up coming out live! I’ve left you my prepared remarks below, and here are some of the good insights I gained from the discussion:

      • The need for transparency may warrant stand-alone legislation
      • The lack of voice in favour of government data use is not atypical
      • The enhancement of tracing efforts is a narrow public health use
      • The SCC’s privacy jurisprudence ought to foster public trust

All in all, I maintain the view recorded in the notes below: governments should get it done now by focusing on the enhancement of manual contact tracing. Build the perfect system later, but do something simple and privacy protective and learn from it. The privacy risks of centralizing data from contact tracing apps are manageable and should be managed.

Given that public health authorities already have the authority to collect personal data for reportable diseases, what are the reasonable limits that should be put on COVID-19 data collection and sharing by applications?

It’s not yet a given that we will adopt an approach that will give public health authorities access to application data even though (as your question notes) they are designated by law as the trusted entity for receiving sensitive information about reportable diseases – diagnostic information first and foremost, but also all the very sensitive data that public health authorities regularly collect through public health investigations and manual contact tracing.

What we have here is an opportunity to help those trusted entities better perform their responsibility for tracing the disease. That responsibility is widely recognized as critical but is also at risk of being performed poorly due to fluctuating and potentially heavy demand and resource constraints. Based on a ratio I heard on a Washington Post podcast the other day, Canada’s population of 37 million could use 11,000 contact tracers. From my perspective, the true promise of an app is to help a much smaller population of contact tracers trace and give direction faster.
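For what it’s worth, the arithmetic behind that figure is simple. Here is a minimal sketch, assuming a staffing ratio of roughly 30 tracers per 100,000 people (my back-calculation from the figures above, not a number taken from the podcast):

    def tracers_needed(population: int, tracers_per_100k: float = 30.0) -> int:
        """Estimate contact tracers needed for a population at a given staffing ratio."""
        return round(population * tracers_per_100k / 100_000)

    # Canada's population of roughly 37 million at ~30 tracers per 100,000
    print(tracers_needed(37_000_000))  # about 11,100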

The most important limit, then, is data minimization. Yes, collect data centrally, but don’t collect location data if proximity data will support real efficiency gains in manual contact tracing. Set other purposes aside for the post-pandemic period. Collect data for a limited period of time – perhaps 30 days. Then layer on all your ordinary data security and privacy controls.
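To make the retention limit concrete, here is a minimal sketch of the kind of purge job I have in mind; the 30-day window, field name and record structure are illustrative assumptions, not features of any actual application:

    from datetime import datetime, timedelta, timezone

    RETENTION_DAYS = 30  # illustrative window, per the remarks above

    def purge_stale_records(records: list[dict]) -> list[dict]:
        """Keep only proximity records collected within the retention window.

        Assumes each record carries a timezone-aware 'collected_at' datetime;
        the field name and window are hypothetical.
        """
        cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
        return [r for r in records if r["collected_at"] >= cutoff]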

Assuming that COVID-19 applications require broad population participation, should or can provincial or federal authorities mandate (or even request) their installation by citizens?

It’s too early to say, though government would be challenged to make a case for mandating installation and use of an application because the data collection would likely be a “search” that must be a “reasonable” search so as not to infringe section 8 of the Charter.

To briefly explain the law, there are three distinct legal questions or issues.

First, there needs to be a “search,” which will likely be the case because the data we need to collect will attract a reasonable expectation of privacy.

Second, the search needs to be “reasonable.” If a search is reasonable, it’s lawful: end of analysis.

And, third, a search that is unreasonable can nonetheless be justified as a reasonable limit prescribed by law as can be demonstrably justified in a free and democratic society.

You can’t do the legal analysis until you have a design and until you understand the benefits and costs of the design. It’s quite possible that good thinking is being done, but publicly at least, we still seem to be swimming in ideas rather than building a case and advocating for a simple, least invasive design. We need to do that to cut through the scary talk about location tracking and secondary uses that has clearly found an audience and that may threaten adoption of the optimal policy.

What will be or should be the lasting change that we see coming out of COVID-19, technology and contact tracing?

What I’ve seen in my practice, and what you may not realize, is that employers are in control of their environments and are actually leading in identifying the risk of infection. Employers will often identify someone who is at risk of infection three, four, five or more days before a diagnosis is returned. They are taking very important action to control the spread of infection during that period without public health guidance.

Then we have the potential launch of de-centralized “exposure notification” applications, where the direction to individuals will come from the app alone. To make an assessment of risk based on proximity data alone – without the contextual data collected and relied upon by manual contact tracers – is to make quite a limited assessment. It must be that app-driven notifications will be set to notify of exposure when the risk of infection is low, but such notifications will have a broad impact. That is, they will cause people to be pulled out of workplaces and trigger the use of scarce public health resources.
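To illustrate how thin a proximity-only assessment is, a notification rule reduced to its essentials might look like the following sketch; the distance and duration thresholds are hypothetical and not drawn from any actual framework:

    def should_notify(distance_m: float, duration_min: float) -> bool:
        """Decide whether to send an exposure notification.

        The 2-metre / 15-minute thresholds are hypothetical. The point is what
        is missing: no symptoms, setting, ventilation or other context that a
        manual contact tracer would weigh, so many low-risk contacts are swept in.
        """
        return distance_m <= 2.0 and duration_min >= 15.0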

This activity by employers and (potentially) individuals is independent of activity by public health authorities – the entities who are authorized by law to do the job but who also may struggle to do it because of limited resources.

Coming out of this, I’d like us to have resolved this competition for resources and people’s attention and to have built a well-coordinated testing and tracing system that puts the public health authorities in control, with the resources and data they need.

Four data security points for pandemic planners who are addressing the coronavirus

Organizations currently engaged in pandemic planning ought to consider the data and cybersecurity risks associated with the rapid adoption of telework. Planning should start now, with the following considerations in mind.

Remote access risks. Secure remote access should continue to be a requirement. In general, this means access through a virtual private network and multi-factor authentication. Though understandable, “band-aid” solutions to enable remote access that depart from this requirement represent a significant risk. Some departure may be necessary, though all risks should be measured. In general, any solution that rests on the use of remote desktop protocol over the internet should be considered very high risk.
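As a rough illustration of that last point, planners could at least verify that no remote-access host answers on the standard RDP port from the internet. A minimal sketch, with a hypothetical host name:

    import socket

    def rdp_reachable(host: str, port: int = 3389, timeout: float = 3.0) -> bool:
        """Return True if the host accepts TCP connections on the standard RDP port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        # "remote.example.org" is a hypothetical host name used for illustration only.
        print(rdp_reachable("remote.example.org"))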

Data leakage risks. Efforts should be made to keep all data classified as non-public on the organization’s systems. This can be accomplished by issuing hardware to take home or through secure remote access technology. The use of personal hardware is an option that should be used together with a well-considered BYOD policy. Printing and other causes of data leakage should be addressed through administrative policy or direction. Consider providing direction on where and how to conduct telephone calls in a confidential manner.

Credential risks. New classes of workers may need to be issued new credentials. Although risks related to poor credential handling can be mitigated by the use of multi-factor authentication, clear and basic direction on password use may be warranted. Some have said that phishing attacks may increase in light of an increase in overall vulnerability as businesses deploy new systems and adjust. While that is speculative, a well-timed reminder of phishing risks may help.

Incident response risks. Quite simply, will your incident response plan still function when the workforce is dispersed and when key decision-makers may be sick? Who from IT will be responsible for coming on-site? How long will that take? If decision-makers are sick, who will stand in? These questions are worth asking now.

Hat tip to my colleague Matin Fazelpour for his input on this post.

The twelve security failures underscoring the ICO’s recent £500,000 fine

On February 10th the Information Commissioner’s Office fined Cathay Pacific £500,000 for breaching the security principle established under the UK Data Protection Act. Here are the twelve security failures that were the basis of the finding (the lead sentence of each item is in the ICO’s words, followed by my annotation):

    • The database backups were not encrypted. The ICO said this was a departure from company policy undertaken due to a data migration project, but a company approval and risk mitigation requirement was apparently not followed. (A simple encryption sketch follows at the end of this post.)
    • The internet-facing server was accessible due to a known and publicized vulnerability. The Common Vulnerabilities and Exposures website listed the vulnerability approximately seven years before it was exploited, said the ICO.
    • The administrator console was publicly accessible via the internet. This was done to facilitate vendor access, without a risk assessment according to the ICO. The ICO said the company ought to have used a VPN to enable vendor access.
    • System A was hosted on an operating system that was (and is) no longer supported. The ICO noted that the company neither replaced the system nor purchased extended support.
    • Cathay Pacific could not provide evidence of adequate server hardening.
    • Network users were permitted to authenticate past the VPN without multi-factor authentication. The ICO noted that this allowed the attackers to misuse stolen credentials (pertaining to a 41,000 user base).
    • The anti-virus protection was inadequate. This was apparently due to operating system compatibility problems (on an operating system other than the legacy system on System A).
    • Patch management was inadequate. Logs were missing on some systems, the ICO said. It also noted that one server was missing 16 updates that resolved publicly known vulnerabilities, 12 of which were described as “easily exploitable.”
    • Forensic evidence was no longer available during the Commissioner’s investigation. The ICO said that server images analyzed in the post-incident investigation were not retained and provided to the ICO.
    • Accounts were given inappropriate privileges. “Day-to-day” user accounts were given administrator privileges according to the ICO.
    • Penetration testing was inadequate. The ICO said three years without penetration testing was inadequate given the quantity and nature of the information at issue, which included passport numbers.
    • Retention periods were too long. It appears (though it is not clear) that transaction data was preserved indefinitely and that user data was purged after seven years of inactivity.

£500,000 is the maximum fine. The ICO said it was warranted, in part, because the failures related to “fundamental principles.” The failure to retain evidence was another notable factor.
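On the first of the failures above, encrypting a backup before it leaves the organization’s systems is not onerous. Here is a minimal sketch using the Python cryptography library’s Fernet recipe; the file names are illustrative, and key management (the hard part) is omitted entirely:

    from cryptography.fernet import Fernet

    # In practice the key would come from a key management system, not be
    # generated alongside the backup it protects.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    with open("backup.sql", "rb") as src:        # hypothetical backup file
        ciphertext = fernet.encrypt(src.read())

    with open("backup.sql.enc", "wb") as dst:
        dst.write(ciphertext)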

Notable snippet about the personal information concept in recent Ont CA search case

On January 13th, the Court of Appeal for Ontario held that a convicted appellant did not have a reasonable expectation of privacy in “what could be seen and heard on [his] property from his neighbour’s [property].”

The police trespassed on a neighbour’s rural property to conduct surveillance, and they heard gunshots and saw two individuals with rifles outside of the appellant’s house. Based on these observations, the police obtained a warrant to search the appellant’s house. They ultimately secured one or more convictions on drug and weapons charges.

The Court held that, in the context, it did not matter that the police were trespassing. (The gunshots were loud, and the appellant’s property was abutted by a public road in any event.) It also held that the police did not obtain “personal information,” reasoning as follows:

What triggered the application for the first warrant was the sound of the discharge of a firearm – something that could scarcely be concealed – coupled with visual observations of persons outdoors either firing a rifle or holding a rifle. These were bare observations of physical acts. There was no personal information obtained.

This illustrates how the personal information concept is not as simple, and perhaps not as broad, as one might think. The facts observed clearly allowed the police to infer what was in the house and obtain, on the reasonable and probable grounds standard, a search warrant. Nonetheless, the Court held that the observations did not invite a collection of personal information.

R v Roy, 2020 ONCA 18 (CanLII).

Ont CA articulates detriment requirement for a breach of confidence claim

On December 24th, the Court of Appeal for Ontario affirmed the dismissal of a breach of confidence claim because the plaintiff did not make out a “detriment.” Despite its affirmation, the Court held that the trial judge erroneously said that a breach of confidence plaintiff must prove “financial loss.” It explained, “The concept of detriment is not tied to only financial loss, but is afforded a broad definition, including emotional or psychological distress, loss of bargaining advantage, and loss of potential profits.”

CTT Pharmaceutical Holdings, Inc. v. Rapid Dose Therapeutics Inc., 2019 ONCA 1018 (CanLII).