DFS report shows how to double down on remote access security

On October 15th, the New York State Department of Financial Services issued a report on the July 2020 cybersecurity incident in which a 17-year-old hacker and his friends gained access to Twitter’s account management tools and hijacked over 100 accounts.

The report stresses both the critical risks that social media companies’ security measures must guard against and the simplicity of the hackers’ methods. The DFS draws a link between social media account security and election security, and also notes that the S&P 500 lost $135.5 billion in value in 2013 when hackers tweeted false information from the Associated Press’s Twitter account. Despite this risk, the 2020 hackers gained access through a well-executed but simple social engineering campaign, without the aid of malware, exploits or backdoors.

The hackers began by gathering intelligence. They then impersonated the Twitter IT department and called employees offering to help with VPN problems, which were prevalent following Twitter’s shift to remote work. The hackers directed employees to a fake login page, which allowed them to capture credentials and circumvent multifactor authentication.

The event lasted about 24 hours. The DFS explains that Twitter employed a password reset protocol that required every employee to attend a video conference with a supervisor and manually change their password.

The event and the report are about the remote workforce risk we face today. Twitter had all the components of a good defence in place, but according to the DFS it could have done better given the high consequences of a failure. Here is a summary of some of the DFS recommendations:

  • Employ stricter privilege limitations, with access being re-certified regularly. Following the incident Twitter did just this, even though it apparently slowed down some job functions.
  • While multifactor authentication is a given, the DFS noted, “Another possible control for high-risk functions is to require certification or approval by a second employee before the action can be taken.”
  • The DFS points out that not all multifactor authentication is created equal: “The most secure form of MFA is a physical security key, or hardware MFA, involving a USB key that is plugged into a computer to authenticate users.”
  • The DFS says organizations should establish uniform standards of communications and educate employees about them. Employees should know, for example, exactly how the organization will contact them about suspicious account activity.
  • The DFS endorses “robust” monitoring via security information and event management systems – monitoring in “near real-time.”

These recommendations could make for very strong remote access and account security and are worth noting.

Report on Investigation of Twitter’s July 15, 2020 Cybersecurity Incident and the Implications for Election Security.

OPC issues significant findings in response to online reputation complaint

The OPC recently responded to a complaint by a dentist about the RateMDs review site, on which several individuals purporting to be her patients had posted anonymous reviews. The OPC’s findings are significant and favour the public’s right of expression over doctors’ interest in personal privacy.

The OPC first held that RateMDs did not need the complainant’s consent to publish the reviews because the reviews constituted so-called “mixed personal information” – a term used by the IPC/Ontario to refer to personal information that relates to more than one individual. The Federal Court of Appeal test from Pirrie calls for a very contextual balancing of interests in addressing access requests for such information. In this case, the OPC applied a similar approach to deny the complainant the ability to block the publication of others’ opinions about her. It said:

Giving effect to the Complainant’s lack of consent would mean the interests of the patients who are consenting to the publication of their reviews and ratings would not be respected, and the benefits to the public more broadly would be negated. We are therefore of the view, based on a balancing of interests of the Complainant with those of the reviewers and the public more generally, that this aspect of the complaint is not well-founded.

The OPC held that RateMDs’ accuracy and correction obligations under PIPEDA require it to correct ratings that are inaccurate, incomplete or out-of-date. However, it also acknowledged that challenging the inaccuracy of an anonymous review is difficult and held that PIPEDA will “generally” prohibit review sites like RateMDs from disclosing the identity of anonymous reviewers.

Finally, the OPC held that RateMDs should discontinue a paid service that allowed doctors to hide up to three reviews “deemed to be suspicious.” While this finding is understandable, it is ironic that a privacy regulator has applied our commercial privacy statute to take away a potential privacy remedy. In effect, that is what this finding does: it makes clear that PIPEDA is not an effective remedy for challenging seemingly fair reviews posted on a bona fide review site. Those aggrieved must go to court and sue in defamation or (if they are up for a challenge) breach of privacy.

PIPEDA Report of Findings #2020-002, June 30, 2020.

Privacy violation arises out of failure to notify of FOI request

On September 21st, the Information and Privacy Commissioner/Ontario held that a municipality breached the Municipal Freedom of Information and Protection of Privacy Act by failing to notify an affected person of an FOI request.

The complainant discovered that, in responding to FOI requests, the municipality had released e-mails he had sent to councillors about a planning matter without providing him notice. MFIPPA requires notification of a request for records containing personal information if the head has “reason to believe” their release “might constitute an unjustified invasion of personal privacy.”

The IPC held that the municipality had not met this requirement. It reasoned:

As indicated above, the County disclosed the complainant’s name, address and views and opinions about Hastings Drive without notifying him pursuant to section 21(1)(b). Given the nature of the complainant’s personal information at issue, in my view, the disclosure of at least some of this information might have constituted an unjustified invasion of his personal privacy.

In my view, the complainant should have been notified and given an opportunity to make representations as to why the Emails should not have been disclosed. As noted in Investigation Report MC-000019-1, except in the clearest of cases, fairness requires that the person with the greatest interest in the information, that is, the complainant, be given a chance to be heard. In this matter, he was not given that opportunity.

The complainant had sent his e-mails to politicians about a matter of apparent public interest. The standard for notification is low, but whether notice was required here was at least debatable.

Unfortunately, the IPC does not address the balancing of interests contemplated by the unjustified invasion exemption. For notice to be required there must be “a reason to believe” – a reason based on a provisional application of the unjustified invasion exemption. “Clearest of cases” is not the legal test, and it is wrong to conclude that notice is required simply because “at least some” information responsive to a request might constitute an unjustified invasion – on that approach, notification would be all but automatic.

This is a mild warning to institutions. There is a statutory immunity that offers some protection from civil claims for failure to notify, but the IPC has shown itself to be strict.

PRIVACY COMPLAINT MC17-35, 2020 CanLII 72822 (ON IPC).

UK Court of Appeal causes reset for facial recognition surveillance

On August 11th, the England and Wales Court of Appeal held that the South Wales Police Force violated Article 8 of the European Convention on Human Rights and the UK Equality Act 2010 by using facial recognition software on two occasions. The finding is narrow, though, and leaves facial recognition technology open to police use.

The police piloted facial recognition technology on two occasions. They were governed by the Data Protection Act 2018, a surveillance “code of practice” issued under the Protection of Freedoms Act 2012 and written local police policy. The police also conducted a data protection impact assessment and a (somewhat limited) equality impact assessment.

The police conducted overt facial recognition surveillance under this framework, based on pre-deployment notice given, in part, via advertising and via notices posted on the police cars equipped with facial recognition cameras. On two occasions the police collected images for an entire day and matched the images against images in “watch lists” comprising persons wanted on warrants, persons identified as suspects and other persons of interest. The police used human validation to screen matches, which led them to make two arrests on one occasion and no arrests on the other. Significantly, the police immediately disposed of images of all persons who did not match.
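For readers who want to picture the mechanics, the deployment described above amounts to a simple screening pipeline. The following Python sketch is purely illustrative – the similarity measure, threshold and data structures are my assumptions, not details of the South Wales Police system – and shows matching captured images against a watch list, queuing candidate matches for human validation and immediately discarding non-matches.

from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class Capture:
    image_id: str
    embedding: List[float]  # face embedding produced by the recognition software

def cosine_similarity(a: List[float], b: List[float]) -> float:
    # A common similarity measure; the actual system's scoring method is not public.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def screen_capture(capture: Capture,
                   watch_list: Dict[str, List[float]],
                   threshold: float = 0.8) -> Optional[Tuple[str, float]]:
    """Return a candidate watch-list match for a human officer to validate, or None.

    A None result means the image is not retained: non-matching images are
    disposed of immediately, as in the deployments described by the Court.
    """
    best_id, best_score = None, 0.0
    for person_id, reference in watch_list.items():
        score = cosine_similarity(capture.embedding, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_id is not None and best_score >= threshold:
        return best_id, best_score  # queued for human validation before any action
    return None  # no match: the capture is discarded, not stored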

The Court found the deployments to have been unlawful based on two problems, both of them problems of process rather than fundamental flaws.

First, the Court held that the deployments were not sufficiently prescribed by law to justify an infringement of Article 8 (which protects the right to privacy). More specifically, it held that the legal framework for the deployments left too much discretion to the police as to who may be placed on a watch list, in particular for intelligence gathering purposes. The police’s failure to reckon with this aspect of the technology and surveillance program also led the Court to conclude that their data protection impact assessment was inadequate.

Second, the Court held that the police did not conduct an adequate equality impact assessment, which it held requires “the taking of reasonable steps to make enquiries about what may not yet be known to a public authority about the potential impact of a proposed decision or policy on people with the relevant characteristics, in particular for present purposes race and sex.” The police ought to have, the Court said, assessed the facial recognition software to determine if it resulted in “unacceptable bias,” even if human validation was to be a feature of the matching process.

Notably, the Court held (in obiter) that the police’s infringement of Article 8 rights was justifiable having regard to the relative consequences and benefits of the surveillance scheme, calling the impact on Article 8 rights “negligible.”

As noted, this leaves facial recognition technology open to police use in the UK. Use for intelligence gathering purposes may be more questionable than use for investigatory purposes.

Bridges, R (On the Application Of) v South Wales Police [2020] EWCA Civ 1058 (11 August 2020).

BCCA denies access to total costs spent on a litigation matter

On August 21st, the Court of Appeal for British Columbia held that a requester had not rebutted the presumption of privilege that applied to the total amount spent by government in an ongoing legal dispute. 

The Court first held that the presumptive privilege for total legal costs recognized by the Supreme Court of Canada in Maranda v Richer applies in the civil context. Then, in finding the requester had not rebutted the privilege, the Court engaged in a detailed discussion of how the timing of the request and the surrounding context will weigh in the analysis.

The Court’s analysis is as complex as it is lengthy. Ultimately, the outcome rested most heavily on (a) the timing of the request (early in the trial), (b) the identity of the requester (who was a party) and (c) the degree of information about the matter available to the public (which was high). The Court felt these factors meant that disclosure would support strong enough inferences about confidential solicitor-client communications that sustaining the privilege was warranted.

More generally, the decision stresses the presumption of privilege and the associated onus of proof. Despite Maranda, it is easy to think that the total legal fees spent on a matter are accessible, subject to the privilege holder’s burden of justification. Precisely the opposite is true.

British Columbia (Attorney General) v. Canadian Constitution Foundation, 2020 BCCA 238 (CanLII).

IPC wades into shadow IT mess, may never again

On July 9th, the Information and Privacy Commissioner/Ontario issued a decision about a security incident in which it made clear, after participating in a health information custodian’s efforts to recover lost data, that this burden falls on custodians alone.

The incident involved a clinician at an unnamed rehabilitation clinic and her estranged spouse, who reported to the clinic that he possessed 164 unique files containing the personal health information of 46 clinic clients on two computers that belonged to the clinician. The clinician explained the files as an inadvertent by-product of secure remote access, though the files appear to have been purposely moved from temporary storage to a Google Drive at some point, possibly by the spouse.

The spouse was not particularly cooperative. This led the IPC, which the clinic had notified, to engage with the spouse together with the clinic over a period of several months. The IPC took the (questionable) position that the spouse was in breach of duties under section 49(1) of PHIPA.

In the course of these dealings the spouse reported that he had also received e-mails from the clinician with assessment reports attached, for printing purposes. The clinician said she thought she had adequately de-identified the reports, though one included a full patient name and others (as the IPC held) contained ample data to render patients identifiable.

All of the detritus was eventually deleted to the satisfaction of the clinic and the IPC. The clinic reconfigured its means of providing secure remote access to address the risk of local storage and beefed up its administrative policies and training. There is no mention of implementing a data loss prevention solution.

The IPC decision is notable for two points.

First, the IPC made clear that custodians should not rely on the IPC to help with data recovery (which can be very expensive):

It is clear that interactions between the Clinic and the Spouse had been very challenging, chiefly due to the Spouse’s changing positions throughout this investigation. However, the obligations on a health information custodian to contain the breach remain, even in the face of challenging circumstances.  The Privacy Breach Guidelines are clear that there is an obligation on the health information custodian to retrieve any copies of personal health information that have been disclosed and ensure that no copies of personal health information have been made or retained by anyone who was not authorized to receive the information.  Nothing in the legislation or these guidelines transfers this obligation to the IPC.

Second, the clinic was less skeptical of the clinician than it might otherwise have been, and did not issue discipline. The IPC accepted this, and re-stated its deferential position on employee discipline as follows:

With respect to the Clinic’s decision, I am satisfied that it was reasonable in the circumstances. This office has stated that its role is not to judge the severity or appropriateness of sanctions taken by a custodian against its agents (see PHIPA Decision 74).  However, the IPC can take into account a custodian’s disciplinary response as part of its assessment of whether the custodian has taken reasonable steps to protect personal health information against unauthorized access.

A Rehabilitation Clinic (Re), 2020 CanLII 45770 (ON IPC).

Arbitration board dismisses spoliation motion

On May 6th, the Ontario Grievance Settlement Board dismissed a union motion for the ultimate spoliation remedy – granting of a grievance based on an abuse of process.

The Union made its motion in a seemingly hard-fought discipline and discharge case. The Union’s pursuit of electronically stored information “to review the life cycle of certain documents that were exhibits in order to test the integrity and reliability of the documents” began after the employer had put its case in through 40 days of witness testimony. The ESI motion itself took 13 days, and at some point the employer agreed to conduct a forensic examination of certain data. Unfortunately, just before it was about to pull the data, three computers were wiped as part of a routine hardware renewal process. Oops.

Based on two more hearing days, the Board held that the destruction of the data was inadvertent and not even negligent. Arbitrator Petryshen said:

It is not surprising that the Employer or FIT did not arrange for the imaging of the three bailiff computers prior to September of 2017 because no one considered that there was a risk of losing that data.  Although management at the OTO unit and FIT knew that government computers were replaced every four years, it was reasonable for OTO management to expect that they would be notified when the computers in OTO unit were about to be refreshed. 

Although this is quite forgiving, Arbitrator Petryshen’s finding that “the granting of grievances due to a loss of potentially relevant documents is an extraordinary remedy” is quite consistent with the prevailing law. In 2006, the Court of Appeal for Ontario quashed an arbitration award that allowed a grievance based on an employer’s inadvertent destruction of relevant evidence, and the Court of Appeal for Alberta’s leading decision in Black & Decker says that even negligent destruction of relevant evidence will not amount to an abuse of process.

Ontario Public Service Employees Union (Pacheco) v Ontario (Solicitor General), 2020 CanLII 38999 (ON GSB).

Let’s help our public health authorities by giving them data

This was not the title of the panel I sat on at the Public Service Information Community Connection virtual “confab” today, though it does capture the view I attempted to convey.

John Wunderlich moderated a good discussion that involved Frank Work, Ian Walsh and me. When I haven’t yet formed ideas on a subject, I prepare by creating written remarks, which are typically more lucid than what ends up coming out live! I’ve left you my prepared remarks below, and here are some of the good insights I gained from the discussion:

      • The need for transparency may warrant stand-alone legislation
      • The lack of voice in favour of government data use is not atypical
      • The enhancement of tracing efforts is a narrow public health use
      • The SCC’s privacy jurisprudence ought to foster public trust

All in all, I maintain the view recorded in the notes below: governments should get it done now by focusing on the enhancement of manual contact tracing. Build the perfect system later, but do something simple and privacy-protective and learn from it. The privacy risks of centralizing data from contact tracing apps are manageable and should be managed.

Given that public health authorities already have the authority to collect personal data for reportable diseases, what are the reasonable limits that should be put on COVID-19 data collection and sharing by applications?

It’s not yet a given that we will adopt an approach that will give public health authorities access to application data even though (as your question notes) they are designated by law as the trusted entity for receiving sensitive information about reportable diseases – diagnostic information first and foremost, but also all the very sensitive data that public health authorities regularly collect through public health investigations and manual contact tracing.

What we have here is an opportunity to help those trusted entities better perform their responsibility for tracing the disease. That responsibility is widely recognized as critical but is also at risk of being performed poorly due to fluctuating and potentially heavy demand and resource constraints. Based on a ratio I heard on a Washington Post podcast the other day, Canada’s population of 37 million could use 11,000 contact tracers. From my perspective, the true promise of an app is to help a much smaller population of contact tracers trace and give direction faster.

The most important limit, then, is data minimization. Yes, collect data centrally, but don’t collect location data if proximity data will support real efficiency gains in manual contact tracing. Set other purposes aside for the post-pandemic period. Collect data for a limited period of time – perhaps 30 days. Then layer on all your ordinary data security and privacy controls.
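To make those limits concrete, here is a minimal sketch in Python – purely illustrative, with field names and record formats that are my own assumptions rather than any real application’s design – of a collection rule that accepts only minimal proximity fields, rejects location data and purges records older than a 30-day retention window.

from datetime import datetime, timedelta, timezone
from typing import Dict, List, Optional

RETENTION_PERIOD = timedelta(days=30)

# Only minimal proximity fields are accepted; location data is never collected.
ALLOWED_FIELDS = {"encounter_id", "reported_at", "proximity_token", "duration_minutes"}
LOCATION_FIELDS = {"latitude", "longitude", "gps_trace"}

def minimize(record: Dict) -> Dict:
    """Strip a submitted record down to the allowed proximity fields."""
    if set(record) & LOCATION_FIELDS:
        raise ValueError("location data is not collected")
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

def purge_expired(records: List[Dict], now: Optional[datetime] = None) -> List[Dict]:
    """Keep only records within the 30-day retention window."""
    # "reported_at" is assumed to be a timezone-aware datetime recorded at collection.
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["reported_at"] <= RETENTION_PERIOD]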

Assuming that COVID-19 applications require broad population participation, should or can provincial or federal authorities mandate (or even request) their installation by citizens?

It’s too early to say, though government would be challenged to make a case for mandating the installation and use of an application because the data collection would likely be a “search” that must be “reasonable” so as not to infringe section 8 of the Charter.

To briefly explain the law, there are three distinct legal questions or issues.

First, there needs to be a “search,” which will likely be the case because the data we need to collect will attract a reasonable expectation of privacy.

Second, the search needs to be “reasonable.” If a search is reasonable, it’s lawful: end of analysis.

And, third, a search that is unreasonable can nonetheless be upheld as a reasonable limit, prescribed by law, that can be demonstrably justified in a free and democratic society.

You can’t do the legal analysis until you have a design and until you understand the benefits and costs of the design. It’s quite possible that good thinking is being done, but publicly at least, we still seem to be swimming in ideas rather than building a case and advocating for a simple, least invasive design. We need to do that to cut through the scary talk about location tracking and secondary uses that has clearly found an audience and that may threaten adoption of the optimal policy.

What will be or should be the lasting change that we see coming out of COVID-19, technology and contact tracing?

What I’ve seen in my practice, and what you may not realize, is that employers are in control of their environments and are actually leading in identifying the risk of infection. Employers will often identify someone who is at risk of infection three, four, five or more days before a diagnosis is returned. They are taking very important action to control the spread of infection during that period without public health guidance.

Then we have the potential launch of decentralized “exposure notification” applications, where the direction to individuals will come from the app alone. To make an assessment of risk based on proximity data alone – without the contextual data collected and relied upon by manual contact tracers – is to make quite a limited assessment. App-driven notifications will inevitably be triggered even when the risk of infection is low, yet such notifications will have a broad impact. That is, they will cause people to be pulled out of workplaces and trigger the use of scarce public health resources.

This activity by employers and (potentially) individuals is independent of activity by public health authorities – the entities who are authorized by law to do the job but who also may struggle to do it because of limited resources.

Coming out of this, I’d like us to have resolved this competition for resources and people’s attention and to have built a well-coordinated testing and tracing system that puts the public health authorities in control, with the resources and data they need.

“Employee’s” signature accessible to public – NLCA

On June 3rd, the Court of Appeal for Newfoundland and Labrador held that the signature of an “employee” who authorized a vacation leave payout to a senior administrator at a college campus in Qatar was accessible to the public even though the individual was hired by Qatar, and not the College.

The matter turned on the meaning of “employee” under Newfoundland’s now repealed and replaced FOI statute, which at the time exempted all personal information from the right of access, subject to an exception for “information… about a third party’s position, function or remuneration as an officer, employee or member of a public body.” The Court held that the term “employee” is broad enough to include some independent contractors. It explained:

The statutory context and the purpose of the Act, however, would appear to limit including independent contractors only to those who, by virtue of their contract, are required to perform services for the public body in a manner that involves them as a functional cog in the institutional structure of the organization. It is those persons whose personal information about position and functions which can be regarded as employees and still promote the purpose and object of the legislation. To restrict the definition further would be to shield information about certain aspects of the public body’s operations and functioning from potential public scrutiny. To expand the definition further would equally not promote the object and purpose of the Act because it would allow for disclosure of personal information that does not elucidate the institutional functioning of the public body which is to be held accountable.

The Court’s affirmation of the public’s right of access here is no surprise. For one, the record suggested that the College and Qatar were common employers. More fundamentally, the privacy interest in the signature was simply too minimal to give the College’s interpretation argument principled force. In Ontario, signatures made in one’s professional capacity are not even considered to be one’s personal information.

College of the North Atlantic v. Peter McBreairty and Information and Privacy Commissioner of Newfoundland and Labrador, 2020 NLCA 19.

CASL survives constitutional challenge, FCA gives some insight

Yesterday the Federal Court of Appeal held that Canada’s Anti-Spam Legislation is intra vires Parliament and Charter-compliant. In doing so it opined on the scope of numerous CASL provisions, most notably the so-called “business-to-business exclusion.”

CASL, passed under the federal trade and commerce power, applies coast-to-coast-to-coast. It is known to be both strict and inelegantly drafted because it applies very broadly but carves out areas of activity piecemeal, through numerous exemptions and exclusions.

None of this caused the Court any problem. It rejected the appellant’s division of powers attack and its attack under sections 2(b), 11, 7 and 8 of the Charter. Ultimately the Court viewed CASL as addressing an important problem of national scope and focused enough to pass muster because its scope of application is tied to “commercial activity” (a concept with sufficient meaning) and because of its numerous exemptions and exclusions: “CASL thus establishes a complex legislative scheme that evinces a considerable degree of tailoring to meet its objectives.”

More practically, the Court affirmed a CRTC finding that e-mails sent by the appellant to market training courses to employees of organizations did not fit within the Act’s business-to-business exclusion, which removes commercial electronic messages from all regulation if they are sent by an organization “to an employee, representative, consultant or franchisee of another organization if the organizations have a relationship and the message concerns the activities of the organization to which the message is sent.”

Regarding the relationship requirement, the Court agreed with the CRTC that it will not be satisfied by mere proof of a prior transaction with an employee of the organization to which a message is sent. The Court used the term “partner organization” to characterize an organization that would qualify for the exclusion. It also said that the requirement for the exclusion is more demanding than the requirement for being in the type of business relationship that would only trigger deemed implied consent – i.e., an existing business relationship. The Court explained:

Finding an existing business relationship in the present case would permit the appellant to send CEMs to a person—an individual—who had paid the appellant for a course within the preceding two years. Finding a relationship for the purposes of the business-to-business exemption, on the other hand, would allow the appellant to send CEMs to not only the individual who took the course, or the individual who paid for the course, but to every other employee of the organization to which those individuals belong—and organizations can be very large indeed. The latter finding would expose a great many more people to the potentially harmful conduct that it is CASL’s raison d’être to regulate. This suggests, contrary to the appellant’s argument, that the evidentiary requirements for establishing a relationship for the purposes of the business-to-business exemption should in fact be more demanding than for an existing business relationship.

Although this will limit access to the exclusion, the Court did find that the phrase “concerns the activities” does not limit organizations to sending e-mails that concern only the core business operations of the recipient organization.

I’ve addressed only the Court’s most significant interpretive finding. Yesterday’s decision also addresses (a) the purpose of CASL, (b) the meaning of “commercial electronic message”, (c) the relevance of one’s job title to establishing deemed implied consent and (d) the prescribed requirements for an unsubscribe mechanism.

3510395 Canada Inc. v. Canada (Attorney General), 2020 FCA 103.