Arbitration board dismisses spoliation motion

On May 6th, the Ontario Grievance Settlement Board dismissed a union motion for the ultimate spoliation remedy: the granting of a grievance based on an abuse of process.

The Union made its motion in a seemingly hard-fought discipline and discharge case. The Union’s pursuit of electronically stored information “to review the life cycle of certain documents that were exhibits in order to test the integrity and reliability of the documents” began after the employer had put its case in through 40 days of witness testimony. The ESI motion itself took 13 days, and at some point the employer agreed to conduct a forensic examination of certain data. Unfortunately, just before the employer was about to pull the data, three computers were wiped as part of a routine hardware renewal process. Oops.

Based on two more hearing days, the Board held that the destruction of the data was inadvertent and not even negligent. Arbitrator Petryshen said:

It is not surprising that the Employer or FIT did not arrange for the imaging of the three bailiff computers prior to September of 2017 because no one considered that there was a risk of losing that data.  Although management at the OTO unit and FIT knew that government computers were replaced every four years, it was reasonable for OTO management to expect that they would be notified when the computers in OTO unit were about to be refreshed. 

Although this is quite forgiving, Arbitrator Petryshen’s finding that “the granting of grievances due to a loss of potentially relevant documents is an extraordinary remedy” is quite consistent with the prevailing law. In 2006, the Court of Appeal for Ontario quashed an arbitration award that allowed a grievance based on an employer’s inadvertent destruction of relevant evidence, and the Court of Appeal for Alberta’s leading decision in Black & Decker says that even negligent destruction of relevant evidence will not amount to an abuse of process.

Ontario Public Service Employees Union (Pacheco) v Ontario (Solicitor General), 2020 CanLII 38999 (ON GSB).

Let’s help our public health authorities by giving them data

This was not the title of the panel I sat on at the Public Service Information Community Connection virtual “confab” today, though it does capture the view I attempted to convey.

John Wunderlich moderated a good discussion that involved Frank Work, Ian Walsh and me. When I haven’t yet formed ideas on a subject, I prepare by creating written remarks, which are typically more lucid than what ends up coming out live! I’ve left you my prepared remarks below, and here are some of the good insights I gained from the discussion:

      • The need for transparency may warrant stand-alone legislation
      • The lack of voice in favour of government data use is not atypical
      • The enhancement of tracing efforts is a narrow public health use
      • The SCC’s privacy jurisprudence ought to foster public trust

All in all, I maintain the view recorded in the notes below: governments should get it done now by focusing on the enhancement of manual contact tracing. Build the perfect system later, but do something simple and privacy-protective and learn from it. The privacy risks of centralizing data from contact tracing apps are manageable and should be managed.

Given that public health authorities already have the authority to collect personal data for reportable diseases, what are the reasonable limits that should be put on COVID-19 data collection and sharing by applications?

It’s not yet a given that we will adopt an approach that will give public health authorities access to application data even though (as your question notes) they are designated by law as the trusted entity for receiving sensitive information about reportable diseases – diagnostic information first and foremost, but also all the very sensitive data that public health authorities regularly collect through public health investigations and manual contact tracing.

What we have here is an opportunity to help those trusted entities better perform their responsibility for tracing the disease. That responsibility is widely recognized as critical but is also at risk of being performed poorly due to fluctuating and potentially heavy demand and resource constraints. Based on a ratio I heard on a Washington Post podcast the other day, Canada’s population of 37 million could use 11,000 contact tracers. From my perspective, the true promise of an app is to help a much smaller population of contact tracers trace and give direction faster.
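
For what it’s worth, that figure works out to roughly 30 tracers per 100,000 people. A quick back-of-envelope check (the per-100,000 framing is my own, not the podcast’s):

```python
# Back-of-envelope check of the cited ratio. The population and tracer
# figures come from the paragraph above; the per-100,000 framing is my
# own inference, not a figure from the podcast.
population = 37_000_000  # Canada, approximately
tracers = 11_000         # contact tracers suggested by the podcast ratio

per_100k = tracers / population * 100_000
print(f"{per_100k:.0f} tracers per 100,000 people")  # prints ~30
```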

The most important limit, then, is data minimization. Yes, collect data centrally, but don’t collect location data if proximity data will support real efficiency gains in manual contact tracing. Set other purposes aside for the post-pandemic period. Collect data for a limited period of time, perhaps 30 days. Then layer on all your ordinary data security and privacy controls.
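
To make the data minimization point concrete, here is a minimal sketch of what a proximity-only record with a 30-day retention window could look like. This is a hypothetical illustration, not any real application’s schema; all names are invented.

```python
# Hypothetical sketch of data minimization: proximity events only, no
# location fields, pseudonymous identifiers, and a 30-day retention window.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

@dataclass
class ProximityEvent:
    reporter_token: str    # pseudonymous device token, not a name or number
    contact_token: str     # pseudonymous token for the nearby device
    observed_at: datetime  # when the contact occurred; note: no lat/long

def purge_expired(events: list[ProximityEvent]) -> list[ProximityEvent]:
    """Keep only events within the 30-day retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [e for e in events if e.observed_at >= cutoff]
```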

Assuming that COVID-19 applications require broad population participation, should or can provincial or federal authorities mandate (or even request) their installation by citizens?

It’s too early to say, though government would be challenged to make a case for mandating installation and use of an application because the data collection would likely be a “search” that must be “reasonable” so as not to infringe section 8 of the Charter.

To briefly explain the law, there are three distinct legal questions or issues.

First, there needs to be a “search,” which will likely be the case because the data we need to collect will attract a reasonable expectation of privacy.

Second, the search needs to be “reasonable.” If a search is reasonable, it’s lawful: end of analysis.

And, third, a search that is unreasonable can nonetheless be justified under section 1 as a reasonable limit, prescribed by law, that can be demonstrably justified in a free and democratic society.

You can’t do the legal analysis until you have a design and until you understand the benefits and costs of the design. It’s quite possible that good thinking is being done, but publicly at least, we still seem to be swimming in ideas rather than building a case and advocating for a simple, least invasive design. We need to do that to cut through the scary talk about location tracking and secondary uses that has clearly found an audience and that may threaten adoption of the optimal policy.

What will be or should be the lasting change that we see coming out of COVID-19, technology and contact tracing?

What I’ve seen in my practice, and what you may not realize, is that employers are in control of their environments and are actually leading in identifying the risk of infection. Employers will often identify someone who is at risk of infection three, four, five or more days before a diagnosis is returned. They are taking very important action to control the spread of infection during that period without public health guidance.

Then we have the potential launch of decentralized “exposure notification” applications, where the direction to individuals will come from the app alone. To make an assessment of risk based on proximity data alone, without the contextual data collected and relied upon by manual contact tracers, is to make quite a limited assessment. It must be that app-driven notifications will be set to notify of exposure even when the risk of infection is low, but such notifications will have a broad impact. That is, they will cause people to be pulled out of workplaces and trigger the use of scarce public health resources.

This activity by employers and (potentially) individuals is independent of activity by public health authorities – the entities who are authorized by law to do the job but who also may struggle to do it because of limited resources.

Coming out of this, I’d like us to have resolved this competition for resources and people’s attention and to have built a well-coordinated testing and tracing system that puts public health authorities in control, with the resources and data they need.

Four data security points for pandemic planners who are addressing the coronavirus

Organizations currently engaged in pandemic planning ought to consider the data and cybersecurity risks associated with the rapid adoption of telework. Planning should start now, with the following considerations in mind.

Remote access risks. Secure remote access should continue to be a requirement. In general, this means access through a virtual private network and multi-factor authentication. Though understandable, “band-aid” solutions that enable remote access by departing from this requirement represent a significant risk. Some departure may be necessary, though all risks should be measured. In general, any solution that rests on the use of remote desktop protocol over the internet should be considered very high risk.
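
To illustrate the last point, here is a rough sketch (my own, not drawn from the post) of checking whether a host answers on the standard RDP port from outside the network; the host name is hypothetical:

```python
# Quick test of whether a host accepts TCP connections on port 3389,
# the standard RDP port. If this succeeds from the open internet, the
# host exhibits the high-risk pattern described above. Illustrative only.
import socket

def rdp_port_open(host: str, timeout: float = 3.0) -> bool:
    """Return True if the host accepts a TCP connection on port 3389."""
    try:
        with socket.create_connection((host, 3389), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "remote.example.com"  # hypothetical host name
    print(f"RDP reachable on {host}: {rdp_port_open(host)}")
```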

Data leakage risks. Efforts should be made to keep all data classified as non-public on the organization’s systems. This can be accomplished by issuing hardware to take home or through secure remote access technology. The use of personal hardware is an option that should be used together with a well-considered BYOD policy. Printing and other causes of data leakage should be addressed through administrative policy or direction. Consider providing direction on where and how to conduct telephone calls in a confidential manner.

Credential risks. New classes of workers may need to be issued new credentials. Although risks related to poor credential handling can be mitigated by the use of multi-factor authentication, clear and basic direction on password use may be warranted. Some have said that phishing attacks may increase in light of an increase in overall vulnerability as businesses deploy new systems and adjust. While speculative, a well-timed reminder of phishing risks may help.

Incident response risks. Quite simply, will your incident response plan still function when the workforce is dispersed and when key decision-makers may be sick? Who from IT will be responsible for coming on-site? How long will that take? If decision-makers are sick, who will stand in? These questions are worth asking now.

Hat tip to my colleague Matin Fazelpour for his input on this post.

The twelve security failures underscoring the ICO’s recent £500,000 fine

On February 10th the Information Commissioner’s Office fined Cathay Pacific £500,000 for breaching the security principle established under the UK Data Protection Act. Here are the twelve security failures that were the basis of the finding (the lead sentence of each item is in the ICO’s words, followed by my annotation):

    • The database backups were not encrypted. The ICO said this was a departure from company policy undertaken due to a data migration project, but a company approval and risk mitigation requirement was apparently not followed.
    • The internet-facing server was accessible due to a known and publicized vulnerability. The Common Vulnerabilities and Exposures website listed the vulnerability approximately seven years before it was exploited, said the ICO.
    • The administrator console was publicly accessible via the internet. This was done to facilitate vendor access, without a risk assessment according to the ICO. The ICO said the company ought to have used a VPN to enable vendor access.
    • System A was hosted on an operating system that was (and is) no longer supported. The ICO noted that the company neither replaced the system nor purchased extended support.
    • Cathay Pacific could not provide evidence of adequate server hardening.
    • Network users were permitted to authenticate past the VPN without multi-factor authentication. The ICO noted that this allowed the attackers to misuse stolen credentials (pertaining to a user base of 41,000).
    • The anti-virus protection was inadequate. This was apparently due to operating system compatibility problems (on an operating system other than the legacy system on System A).
    • Patch management was inadequate. Logs were missing on some systems, the ICO said. It also noted that one server was missing 16 updates that resolved publicly known vulnerabilities, 12 of which were described as “easily exploitable.”
    • Forensic evidence was no longer available during the Commissioner’s investigation. The ICO said that server images analyzed in the post-incident investigation were not retained and provided to the ICO.
    • Accounts were given inappropriate privileges. “Day-to-day” user accounts were given administrator privileges according to the ICO.
    • Penetration testing was inadequate. The ICO said three years without penetration testing was inadequate given the quantity and nature of the information at issue, which included passport numbers.
    • Retention periods were too long. It appears (though it is not clear) that transaction data was preserved indefinitely and that user data was purged after seven years of inactivity.

£500,000 is the maximum fine. The ICO said it was warranted, in part, because the failures related to “fundamental principles.” The failure to retain evidence was another notable factor.

Notable snippet about the personal information concept in recent Ont CA search case

On January 13th, the Court of Appeal for Ontario held that a convicted appellant did not have a reasonable expectation of privacy in “what could be seen and heard on [his] property from his neighbour’s [property].”

The police trespassed on a neighbour’s rural property to conduct surveillance, and they heard gunshots and saw two individuals with rifles outside of the appellant’s house. Based on these observations, the police obtained a warrant to search the appellant’s house. They ultimately secured one or more convictions on drug and weapons charges.

The Court held that, in the context, it did not matter that the police were trespassing. (The gunshots were loud, and the appellant’s property was abutted by a public road in any event.) It also held that the police did not obtain “personal information,” reasoning as follows:

What triggered the application for the first warrant was the sound of the discharge of a firearm – something that could scarcely be concealed – coupled with visual observations of persons outdoors either firing a rifle or holding a rifle. These were bare observations of physical acts. There was no personal information obtained.

This illustrates how the personal information concept is not as simple, and perhaps not as broad, as one might think. The facts observed clearly allowed the police to infer what was in the house and obtain, on the reasonable and probable grounds standard, a search warrant. Nonetheless, the Court held that the observations did not invite a collection of personal information.

R v Roy, 2020 ONCA 18 (CanLII).

Ont CA articulates detriment requirement for a breach of confidence claim

On December 24th, the Court of Appeal for Ontario affirmed the dismissal of a breach of confidence claim because the plaintiff did not make out a “detriment.” Despite its affirmation, the Court held that the trial judge erroneously said that a breach of confidence plaintiff must prove “financial loss.” It explained, “The concept of detriment is not tied to only financial loss, but is afforded a broad definition, including emotional or psychological distress, loss of bargaining advantage, and loss of potential profits.”

CTT Pharmaceutical Holdings, Inc. v. Rapid Dose Therapeutics Inc., 2019 ONCA 1018 (CanLII).

ABQB overturns Alberta OIPC’s lawyer invoices decision

Onus weighs heavily in resolving an access request for information in lawyer invoices. Given the presumption of privilege established by the Supreme Court of Canada in Maranda v Richer, an institution need only identify the information at issue as information in lawyer invoices and ride the Maranda v Richer presumption.

The University of Calgary did just that in claiming that “narrative information” in certain legal invoices was exempt from the right of public access. The OIPC rejected the University’s claim, relying on the basic statutory onus embedded in section 71(1) of the Alberta Freedom of Information and Protection of Privacy Act and noting that the University provided no affidavit evidence. Regarding Maranda v Richer, the adjudicator said:

A common law principle such as that articulated in Maranda and one which is not clearly applicable in a circumstance other than when the state is contemplating exercising a search warrant in a law office, cannot serve to supplant the clear statement of Legislative intent set out in section 71(1) – that a public body must prove its case.

The Court held that this reasoning invited a reversible error. Long live Maranda v Richer.

University of Calgary v Alberta (Information and Privacy Commissioner), 2019 ABQB 950 (CanLII).

US Secret Service issues noteworthy report on targeted school violence

The United States Secret Service has issued a follow-up to its landmark 2002 report that reinforces the need for sound institutional threat assessment procedures. The full new report is here. It is a noteworthy read for school administrators responsible for risk management and security.

Threat assessment is a process by which institutions aim to collect and process behavioral information that raises a potential concern so that, when appropriate, they can engage in “robust” intervention aimed at helping a student at risk and preventing violent acts. It has been a best practice at K-12 and post-secondary institutions in Canada for over a decade and should not be controversial, though it does invite tension with privacy and anti-discrimination laws. And though it’s very easy to understand that privacy interests and accommodation rights give way when an individual poses a risk of harm that is “serious and imminent,” good threat assessment rests on intervention at much lower risk levels. As the Secret Service’s new report states, “The threshold for intervention should be low, so that schools can identify students in distress before their behavior escalates to the level of eliciting concerns about safety.”

The Secret Service’s new report is based on an analysis of 41 incidents of targeted school violence that occurred at K-12 schools in the United States from 2008 to 2017.

The following statistic – about failures in reporting – is the first thing that caught my attention.

Canadian institutions encourage classmates to report concerning behaviors to a single point of contact and often mandate employees to make such reports. The new report tells us nothing about whether that is working in Canada, but it’s a good question to consider given the above.

The report also identifies that the attackers in four out of the 41 incidents were referred to threat assessment, which invited a response summarized in the following table:

In Cases 1 and 3 the attacker appears to have been mis-assessed. (See the full report on Case 3 here.) Cases 2 and 4 may relate to a prescription the Secret Service gives based on a statistic that showed that 17 of the 41 attacks occurred after a break in school attendance: “These findings suggest that schools should make concerted efforts to facilitate positive student engagement following discipline, including suspensions and expulsions, and especially within the first week that the student returns to school.”

Privacy, Identity, and Control: Emerging Issues in Data Protection

This post marks the official death of my reading pile, which involved a read of the current edition of the Canadian Journal of Comparative and Contemporary Law – one entitled Privacy, Identity, and Control: Emerging Issues in Data Protection.

I’m admittedly still digesting the ideas, so am just pointing to a good resource for reckoning with the Euro-centric forces that are bound to affect our law. Top reads were “Regaining Digital Privacy? The New ‘Right to be Forgotten’ and Online Expression” by Fiona Brimblecombe & Gavin Phillipson and “Information Brokers, Fairness, and Privacy in Publicly Accessible Information” by Andrea Slane. Check it out.