Cyber defence basics – Maritime Connections

I was pleased to do a cyber defence basics presentation to privacy professionals attending the Public Service Information Community Connection “Maritime Connections” event yesterday. The presentation (below) is based on recent publications by the New York Department of Financial Services and the Information Commissioner’s Office (UK), as well as the (significant) Coveware Q3 ransomware report.

As I said to the attendees, I am not a technical expert and no substitute for one, but those of us outside of IT and IT security who work in this space (along with the predominantly non-technical management teams we serve) must engage with the key technical concepts underpinning IT security if we are to succeed at cyber defence.

I’ll do an updated version at Saskatchewan Connections next week. Join us!

The role of legal counsel in ransomware response – cyber divergence on display

Two publications released earlier this month illustrate different views on how to structure ransomware response, and in particular on how to structure the involvement of legal counsel.

On Wednesday of last week, the Ontario Ministry of Government Services issued a bulletin entitled “What is Ransomware and How to Prevent Ransomware Attacks” to the broader public sector. It features a preparation and response playbook that will be much appreciated by the hospitals, universities, colleges, school boards and municipalities to which the MGS directed the bulletin.

The playbook treats ransomware response as primarily a technical problem – i.e., a problem about restoration of IT services. Legal counsel is mentioned in a statement about incident preparation, but is assigned no role in the heart of the response process. Indeed, the MGS suggests that the Information and Privacy Commissioner/Ontario is the source of advice, even “early on” in an incident:

If you are unable to rule out whether or not PII was compromised (which will likely be the case early on in an incident), contact the Privacy Commissioner of Ontario (416) 326-3333.

Contrast this with what Coveware says in its very significant Q3 ransomware trends report that it released on November 4th. Coveware – arguably the best source of ransomware data – explains that data exfiltration threats now feature in 50% of ransomware incidents and that ransom payments are a poor (and becoming poorer) method of preventing threat actors from leaking what they take. Coveware says:

Accordingly, we strongly advise all victims of data exfiltration to take the hard, but responsible steps. Those include getting the advice of competent privacy attorneys, performing an investigation into what data was taken, and performing the necessary notifications that result from that investigation and counsel.  Paying a threat actor does not discharge any of the above, and given the outcomes that we have recently seen, paying a threat actor not to leak stolen data provides almost no benefit to the victim. There may be other reasons to consider, such as brand damage or longer term liability, and all considerations should be made before a strategy is set.

The Coveware view, shared by Canadian cyber-insurers, is that ransomware is primarily a legal and reputational problem, with significant downside legal risks for institutions who do not engage early with legal counsel.

I favor this latter view, and will say quite clearly that it is bad practice to call a privacy regulator about a potentially significant privacy problem before calling a privacy lawyer. A regulator is not an advisor in this context.

This is not a position I take out of self-interest, nor do I believe that lawyers should always be engaged to coordinate incident response. As I’ve argued, the routine use of lawyers as incident coordinators can create problems in claiming privilege when lawyer engagement truly is for the “dominant purpose of existing or anticipated litigation.” My point is that ransomware attacks, especially how they are trending, leave institutions in a legal minefield. Institutions – though they may not know it – have a deep need to involve trusted counsel from the very start.

Developmental service agency not a health information custodian

On October 29th, the Information and Privacy Commissioner/Ontario held that an organization operating as a service agency under the Services and Supports to Promote the Social Inclusion of Persons with Developmental Disabilities Act is not a health information custodian under the Personal Health Information Protection Act.

The issue of the organization’s status came up in an appeal of its access decision. The organization acted as if subject to PHIPA, but the adjudicator raised its status as a preliminary issue, and ultimately held that PHIPA did not govern the request because the organization was not providing a service for community health “whose primary purpose is the provision of ‘health care’.”

Although the organization both handles medical information in providing its services and contributes to the enhancement of individual health, the IPC held that its primary role is the coordination of service and not the provision of health care. It explained:

[34]      In my view, what is common to each of the six services offered by SCS is SCS’ role as a coordinator for, or link to, a wide range of services offered by third parties to individuals with developmental disabilities and/or autism. It is a role of coordination between these individuals (or their family members) and third-party services, which may include assessing each individual’s needs and/or preferences, and matching them to various types of programs in the community. The effect of the individuals’ participation in those third-party programs may well be that it enhances their health, but that does not transform SCS’ role into one that can be described as having a primary purpose of providing health care. In my view, it would be too broad a reading of “health care” to find that SCS’ primary purpose is the provision of health care.

[35]      It is true that SCS serves members of the community who have health challenges. The complainant states that these individuals “have other health issues including mental and neurological diagnoses, speech-language impairments and complex health needs often requiring 24 hours supervision.” However, the fact SCS’ client base has health challenges does not mean that SCS’ primary purpose is the delivery of health care. With respect to the status of third party entities to whom SCS refers for services, I am not satisfied that their status is relevant to the question of whether SCS itself is a HIC. Assuming, without deciding, that at least some of those third party entities are HICs under PHIPA, that does not mean that SCS itself, as a coordinating agency, is a HIC.

This is a good reminder that organizations do not become health information custodians merely by handling medical information or by employing regulated health professionals. They must engage in the provision of “health care,” which the IPC has defined narrowly in this decision and others.

Service Coordination Support (Re), 2020 CanLII 85021 (ON IPC).

Three (literal) highlights from the IPC Ontario submission

If Ontario follows through with its commitment to enact privacy legislation, the IPC/Ontario will break free of its current constraints to become a privacy regulator with global relevance. We ought to listen carefully to what the Commissioner is saying about reform and build a strong sense of how she is inclined.

On October 16th, Commissioner Kosseim filed her submission to the province. It is detailed, thoughtful and strikingly moderate. It has no talk of the concept of “fundamental human rights” that has drawn the attention of the federal commissioner. Rather, the Commissioner says that balancing privacy rights with legitimate business needs is a “virtue.”

Read the submission yourself, but here are the three parts of it that I highlighted in my own read.

First, the Commissioner says we need to reframe the role of consent and develop more principled exceptions, but consent should still be at the top of the hierarchy of the bases for processing:

Some might propose that the solution lies in a GDPR-like architecture by adopting multiple grounds for lawful processing of data, whereby consent is only one such ground on the same and equal footing as other alternative bases. However, we believe that non-governmental organizations should first be required to consider whether they can obtain meaningful consent and stand ready – if asked – to demonstrate why they cannot or should not do so before turning to permissible exceptions for processing. This approach would be more in keeping with Ontario values that promote individual autonomy and respect consumer choice. Whenever it is reasonable, appropriate, and practicable for people to decide for themselves, they should be given the opportunity to do so.

Second, the Commissioner is clearly interested in AI and its implications and clearly sees value in fostering data-driven innovation, though she does not propose any solutions, calling the handling of data-driven innovation “the most challenging piece to get right in any new private sector privacy law.” Here’s my highlight on this issue:

While Purpose Specification, Consent, and Collection Limitation continue to be relevant principles, a more modern private sector privacy law would need to reconsider the weight ascribed to them relative to other principles in certain circumstances. For example, in an era of artificial intelligence and advanced data analytics, organizations must rely on enormous volumes of data, which runs directly counter to collection limitation. Data are obtained, observed, inferred, and/or created from many sources other than the individual, rendering individual consent less practicable than it once was. The very object of these advanced data processes is to discover the unknown, identify patterns and derive insights that cannot be anticipated, let alone described at the outset, making highly detailed purpose specification virtually impossible.

Finally, nobody should underestimate the significance of the potential for Ontario employers to become regulated in respect of their employees. On this issue, the Commissioner’s position is clear:

Individuals should have the ability to perform their jobs with the confidence that their employer will keep them safe, while also respecting their privacy rights. Accordingly, we recommend that any private sector privacy law in Ontario should apply to all employee personal information to fill this glaring gap in privacy protection.

IPC Comments on the Ontario Government’s Discussion Paper, IPC/Ontario, 16 October 2020.

Understanding the Employment-Related Records Exclusion

Here is a copy of the presentation I delivered yesterday at the PISCC’s 2020 Ontario Connections Conference. As I told the audience, I’m a confessed FOI nerd. The exclusion is such a unique, important and misunderstood part of our Ontario FOI law that it was good to dive deep on it while in good company.

ALSO, BLG is launching a new webinar series for the provincial public sector called “nuts and bolts.” The first webinar will run in late November. Please sign up here, or if you can’t attend in November and want me to put you on our mailing list, please DM me.

DFS report shows how to double down on remote access security

On October 15th, the New York State Department of Financial Services issued a report on the July 2020 cybersecurity incident in which a 17-year-old hacker and his friends gained access to Twitter’s account management tools and hijacked over 100 accounts.

The report stresses both the critical risks that social media companies’ security measures must address and the simplicity of the hackers’ methods. The DFS draws a link between social media account security and election security, and also notes that the S&P 500 lost $135.5 billion in value in 2013 when hackers tweeted false information from the Associated Press’s Twitter account. Despite this risk, the 2020 hackers gained access through a well-executed but simple social engineering campaign, without the aid of malware, exploits or backdoors.

The hackers first gathered intelligence. Impersonating the Twitter IT department, they called employees offering help with VPN problems, which were prevalent following Twitter’s shift to remote work. The hackers directed employees to a fake login page, which allowed them to capture credentials and circumvent multifactor authentication.

The event lasted about 24 hours. The DFS explains that Twitter employed a password reset protocol that required every employee to attend a video conference with a supervisor and manually change their password.

The event and the report are about the remote workforce risk we face today. Twitter had all the components of a good defence in place, but according to the DFS it could have done better given the high consequences of a failure. Here is a summary of some of the DFS recommendations:

  • Employ stricter privilege limitations, with access being re-certified regularly. Following the incident Twitter did just this, even though it apparently slowed down some job functions.
  • While multifactor authentication is a given, the DFS noted, “Another possible control for high-risk functions is to require certification or approval by a second employee before the action can be taken.”
  • The DFS points out that not all multifactor authentication is created equal: “The most secure form of MFA is a physical security key, or hardware MFA, involving a USB key that is plugged into a computer to authenticate users.”
  • The DFS says organizations should establish uniform standards of communications and educate employees about them. Employees should know, for example, exactly how the organization will contact them about suspicious account activity.
  • The DFS endorses “robust” monitoring via security information and event management systems – monitoring in “near real-time.”

These recommendations set a demanding standard for remote access and account security, but all are worth noting.
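The DFS’s suggested “second employee” control can be sketched as a simple approval gate for high-risk account actions. This is a hypothetical illustration only, not anything from the DFS report; the class and method names are invented for the example.

```python
# Hypothetical sketch of a two-person rule: a high-risk action (e.g., changing
# a user's registered email) only executes once a second, distinct employee
# has approved it. Names here are illustrative, not from the DFS report.

class ApprovalRequiredError(Exception):
    """Raised when an employee tries to approve their own action."""


class HighRiskActionGate:
    def __init__(self):
        self._pending = {}  # action_id -> (requester, description)

    def request(self, action_id, requester, description):
        # Record the action; nothing executes yet.
        self._pending[action_id] = (requester, description)

    def approve(self, action_id, approver):
        requester, description = self._pending[action_id]
        if approver == requester:
            # Two-person rule: the requester cannot certify their own action.
            raise ApprovalRequiredError("approval must come from a second employee")
        del self._pending[action_id]
        return f"executed: {description} (requested by {requester}, approved by {approver})"
```

As the DFS notes of stricter privilege limits generally, a gate like this slows some job functions down; that friction is the point for actions with high-consequence failure modes.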

Report on Investigation of Twitter’s July 15, 2020 Cybersecurity Incident and the Implications for Election Security.

OPC issues significant findings in response to online reputation complaint

The OPC recently responded to a complaint by a dentist about the RateMDs review site, at which several individuals purporting to be her patients had posted anonymous reviews. The OPC findings are significant and favor the public’s right of expression over doctors’ interest in personal privacy.

The OPC first held that RateMDs did not need the complainant’s consent to publish the reviews because the reviews constituted so-called “mixed personal information” – a term used by the IPC/Ontario to refer to personal information that relates to more than one individual. The Federal Court of Appeal test from Pirrie calls for a very contextual balancing of interests in addressing access requests for such information. In this case, the OPC applied a similar approach to deny the complainant the ability to block the publication of others’ opinions about her. It said:

Giving effect to the Complainant’s lack of consent would mean the interests of the patients who are consenting to the publication of their reviews and ratings would not be respected, and the benefits to the public more broadly would be negated. We are therefore of the view, based on a balancing of interests of the Complainant with those of the reviewers and the public more generally, that this aspect of the complaint is not well-founded.

The OPC held that RateMDs’ accuracy and correction obligations under PIPEDA require it to correct ratings that are inaccurate, incomplete or out-of-date. However, it also acknowledged that challenging the accuracy of an anonymous review is difficult, and held that PIPEDA will “generally” prohibit review sites like RateMDs from disclosing the identity of anonymous reviewers.

Finally, the OPC held that RateMDs should discontinue a paid service that allowed doctors to hide up to three reviews “deemed to be suspicious.” While this finding is understandable, it is ironic that a privacy regulator has applied our commercial privacy statute to take away a potential privacy remedy. All in all, that is what this finding does: it makes clear that PIPEDA is not an effective remedy for challenging seemingly fair reviews posted on a bona fide review site. Those aggrieved must go to court and sue in defamation or (if they are up for a challenge) breach of privacy.

PIPEDA Report of Findings #2020-002, June 30, 2020.

Privacy violation arises out of failure to notify of FOI request

On September 21st, the Information and Privacy Commissioner/Ontario held that a municipality breached the Municipal Freedom of Information and Protection of Privacy Act by failing to notify an affected person of an FOI request.

The complainant discovered that the municipality had released e-mails he had sent to councillors about a planning matter in response to FOI requests, without providing him notice. MFIPPA requires notification of a request for records containing personal information if the head has “reason to believe” their release “might constitute an unjustified invasion of personal privacy.”

The IPC held that the municipality had not met this requirement. It reasoned:

As indicated above, the County disclosed the complainant’s name, address and views and opinions about Hastings Drive without notifying him pursuant to section 21(1)(b). Given the nature of the complainant’s personal information at issue, in my view, the disclosure of at least some of this information might have constituted an unjustified invasion of his personal privacy.

In my view, the complainant should have been notified and given an opportunity to make representations as to why the Emails should not have been disclosed. As noted in Investigation Report MC-000019-1, except in the clearest of cases, fairness requires that the person with the greatest interest in the information, that is, the complainant, be given a chance to be heard. In this matter, he was not given that opportunity.

The complainant had sent his e-mails to politicians about a matter of apparent public interest. The standard for notification is low, but the notice requirement here was at least debatable.

Unfortunately, the IPC does not address the balancing of interests contemplated by the unjustified invasion exemption. For notice to be required there must be “a reason to believe” – a reason based on a provisional application of the unjustified invasion exemption. “Clearest of cases” is not the legal test, and it is wrong to notify simply because “at least some” information responsive to a request is bound to trigger the notification requirement.

This is a mild warning to institutions. There is a statutory immunity that offers some protection from civil claims for failure to notify, but the IPC has shown itself to be strict.

PRIVACY COMPLAINT MC17-35, 2020 CanLII 72822 (ON IPC).

UK Court of Appeal causes re-set for facial recognition surveillance

On August 11th, the England and Wales Court of Appeal held that the South Wales Police Force violated Article 8 of the European Convention on Human Rights and the UK Equality Act 2010 by using facial recognition software on two occasions. The finding is narrow, though, and leaves facial recognition technology open to police use.

The police piloted facial recognition technology on two occasions. They were governed by the Data Protection Act 2018, a surveillance “code of practice” issued under the Protection of Freedoms Act 2012, and written local police policy. The police also conducted a data protection impact assessment and a (somewhat limited) equality impact assessment.

The police conducted overt facial recognition surveillance under this framework based on pre-deployment notice made, in part, via advertising and via notices posted on the police cars equipped with facial recognition cameras. On two occasions the police collected images for an entire day and matched the images against images in “watch lists” comprising persons wanted on warrants, persons identified as suspects and other persons of interest. The police used human validation to screen matches, which led them to make two arrests on one occasion and no arrests on another. Significantly, the police immediately disposed of images of all persons who did not match.
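The match-then-validate pipeline described above can be sketched in a few lines. This is a hypothetical sketch, not the force’s actual system: the function names, the similarity score and the threshold are all assumptions made for illustration.

```python
# Hypothetical sketch of the described deployment: captured facial images are
# scored against a watch list, candidate matches are queued for human
# validation, and images of everyone else are simply not retained
# (mirroring the immediate disposal of non-matching images).

def process_frame(captured_faces, watch_list, similarity, threshold=0.9):
    """Return (captured_face, watch_list_entry) pairs requiring human review.

    `similarity` is an assumed scoring function returning a value in [0, 1];
    `threshold` is an assumed cut-off for flagging a candidate match.
    """
    review_queue = []
    for face in captured_faces:
        for entry in watch_list:
            if similarity(face, entry) >= threshold:
                # Human validation happens before any police action is taken.
                review_queue.append((face, entry))
                break
        # Faces matching no watch-list entry fall through and are discarded.
    return review_queue
```

Note that the Court’s first objection was not to this mechanism but to the discretion upstream of it: the legal framework said too little about who may be placed on the watch list at all.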

The Court found the deployment to have been unlawful based on two problems, both problems of process rather than fundamental problems.

First, the Court held that the deployments were not sufficiently prescribed by law to justify an infringement of Article 8 (which protects the right to privacy). More specifically, it held that the legal framework for the deployments left too much discretion to the police as to who may be placed on a watch list, in particular for intelligence gathering purposes. The police failure to reckon with this aspect of the technology and surveillance program also led the Court to conclude that its data protection impact assessment was inadequate.

Second, the Court held that the police did not conduct an adequate equality impact assessment, which it held requires “the taking of reasonable steps to make enquiries about what may not yet be known to a public authority about the potential impact of a proposed decision or policy on people with the relevant characteristics, in particular for present purposes race and sex.” The police ought to have, the Court said, assessed the facial recognition software to determine if it resulted in “unacceptable bias,” even if human validation was to be a feature of the matching process.

Notably, the Court held (in obiter) that the police infringement of Article 8 rights was justifiable having regard to the relative consequences and benefits of the surveillance scheme, calling the impact on Article 8 rights “negligible.”

As noted, this leaves facial recognition technology open to police use in the UK. Use for intelligence gathering purposes may be more questionable than use for investigatory purposes.

Bridges, R (On the Application Of) v South Wales Police [2020] EWCA Civ 1058 (11 August 2020).

BCCA denies access to total costs spent on a litigation matter

On August 21st, the Court of Appeal for British Columbia held that a requester had not rebutted the presumption of privilege that applied to the total amount spent by government in an ongoing legal dispute. 

The Court first held that the presumptive privilege for total legal costs recognized by the Supreme Court of Canada in Maranda v Richer applies in the civil context. Then, in finding the requester had not rebutted the privilege, the Court engaged in detailed discussion about how the timing of the request and the surrounding context will weigh in the analysis.

The Court’s analysis is as complex as it is lengthy. Ultimately, the outcome rested most heavily on (a) the timing of the request (early into trial), (b) the identity of the requester (who was a party) and (c) the degree of information about the matter available to the public (which was high). The Court felt these factors supported the making of strong enough inferences about confidential solicitor-client communications that sustaining privilege was warranted.

More generally, the decision stresses the presumption of privilege and associated onus of proof. Despite Maranda, it is easy to think that total legal fees spent on a matter are accessible subject to the privilege holder’s burden of justification. Precisely the opposite is true.

British Columbia (Attorney General) v. Canadian Constitution Foundation, 2020 BCCA 238 (CanLII).