On November 27th, the Saskatchewan Information and Privacy Commissioner faulted the Saskatchewan Legal Aid Commission for failing to have and maintain a clean desk policy – i.e., a policy requiring files to be put away and locked overnight – given that cleaning staff had unsupervised after-hours access to its office. The IPC relied on the Commission’s own policy, which encouraged but did not mandate clean desks. The matter came to the IPC’s attention after cleaning staff left two layers of doors open one night.
Lee Akazaki of Gilbertson Davis has written a fascinating article in the most recent Advocates’ Quarterly (vol 50) about a privacy-related development that he argues is interfering with the just and expeditious handling of no-fault motor vehicle claims in Ontario.
Mr. Akazaki explains that plaintiff counsel are using privacy claims to frustrate the mandatory independent medical examinations required by section 44 of the Statutory Accident Benefit Schedule:
In about 2016, assessors began to receive cryptic letters from the clients of lawyers retained to advance injury claims. The letters invariably followed a template containing a client’s name, the lawyer’s office address, and the client’s signature. Written in “legalese,” the letters demanded that s. 44 assessors account directly to the claimant for various documents and procedures in the IME process. The letters stated the recipients were not to disclose the requests to insurers or other parties, thus leaving assessors worried that reprisals would follow if they alerted insurers or their agents. They also demanded responses within 30 days, failing which the assessor was liable to face a complaint to the Office of the Privacy Commissioner (OPC) under the Canadian federal Personal Information Protection and Electronic Documents Act (PIPEDA). The letters exploited ostensible conflicts among health care standards, privacy litigation and SABS.
This is shocking, especially given that PIPEDA does not apply. On this point I agree with Mr. Akazaki, though my reasoning differs. This is entirely analogous to the State Farm case, in which the Federal Court held that the gathering of evidence to resolve a civil dispute in a province was not subject to PIPEDA because it is not, in its essence, “commercial activity.” It is unlike Wyndowe, an earlier case in which the Federal Court held that an independent medical examination under a disability insurance contract was subject to PIPEDA.
Mr. Akazaki argues that Wyndowe is distinct because the SABS regime (unlike typical contractual examination rights under disability insurance contracts) treats the (provisional) denial of benefits as a condition precedent to an examination. Okay, but I think the point of distinction is likely more fundamental. It strikes me – and I am not an insurance lawyer! – that the SABS regime is a public regime for resolving civil disputes in the province and is, in part at least, a means of keeping litigation out of court. It’s the public character of the SABS dispute resolution regime that ought to provide it with a form of immunization from PIPEDA. It’s what makes the “primary characterization of the activity” (to use the State Farm test words) something other than commercial.
And a word to the wise: never assume that a privacy statute applies. Application issues abound, and litigating them is where all the fun is at!
On November 26th, the Office of the Privacy Commissioner of Canada (the OPC) and the Office of the Information and Privacy Commissioner of British Columbia (the OIPC) jointly held that online marketing company AggregateIQ Data Services breached Canadian privacy law in providing services that supported online targeted political advertising for two campaigns said to be instrumental to the 2016 “Brexit” referendum.
AggregateIQ is a small BC company that provided services to SCL Elections, the UK-domiciled parent of Cambridge Analytica – infamous for harvesting data from millions of Facebook users without consent. The joint report is quite vague about whether AggregateIQ used the Cambridge Analytica contraband, and AggregateIQ denies it.
The central finding in the joint report is that information about one’s “political leanings or affiliations” is “sensitive” personal information. Accordingly, the commissioners said, authorization given by an individual to communicate via e-mail was not sufficient to authorize micro-targeting via Facebook (and its “custom audiences” service). They explained:
AIQ asserted that Facebook is simply another way of communicating with individuals, not materially different from email. We respectfully disagree. In our view, an individual who had initially provided their email address for purposes of being kept “up to date” or providing “opportunities to engage” with a campaign may expect to be contacted via email. They would not expect their email address to be used and disclosed to a social media company for advertising on their platform or any other unknown purposes… Accordingly, AIQ had the responsibility, under PIPA and PIPEDA, to ensure that it was relying on express consent for the work it was performing on behalf of Vote Leave.
This finding is about political ad targeting, a problem of great societal importance but of minimal relevance to most organizations. The report also includes two findings of broad relevance.
First, the fact that the commissioners took jurisdiction over a matter involving services provided to a foreign political party is important. The commissioners explained:
To be clear, we are not finding, in this section or below, that AIQ’s foreign clients were required to comply with Canadian and BC privacy laws. The practices of those political organizations would generally fall outside the scope of PIPA and PIPEDA, and in any event, were not the subject of this investigation. That said, to the extent that AIQ wished to rely on the consent obtained by those foreign clients for its own collection, use, and disclosure of personal information on their behalf, it would need to ensure that such consent was sufficient, under Canadian or BC law as the case may be, for its purposes.
This quote speaks to an age-old question about whether the use of a commercial service provider can cause activity that would otherwise not be regulated to become subject to Canadian privacy law. This question is of particular concern to provincially regulated employers, who are not subject to federal privacy legislation in respect of employment but often use service providers to help with employment administration.
The Federal Court’s decision in State Farm suggests that the mere engagement of a service provider is not enough to give rise to application (and enforcement jurisdiction). When State Farm was released in 2010, I asked a contact from the OPC how it would interpret State Farm. The answer was, “narrowly” – exactly as the above quote would suggest!
Second, the commissioners make a significant finding about the use of identifying information with Facebook’s “lookalike audience” service – a popular and powerful ad targeting service that involves uploading identifying information about a population (typically of customers) so Facebook can identify and target a lookalike population (of potential customers). The commissioners held that authorization to use personal information for “engagement” purposes was not authorization for “data analytics” purposes, stating:
It is also the case that the privacy notice largely speaks to collecting and using personal information for the purpose of engaging supporters in the campaign and performing services on their behalf. The disclosure of individuals’ personal information to Facebook for data analytics, via its “lookalike audience” feature, does not achieve or relate to either of those objectives. Instead, this disclosure is made to allow Facebook to link supporters to their Facebook profiles and analyze those profiles in order to identify, target, and persuade other similar or like-minded individuals. This is for Vote Leave’s benefit and can certainly not be viewed, in any way, as performing a service on behalf of the voter whose information was processed.
Organizations using the Facebook lookalike service or similar services should pay heed and revisit their source of authorization. More broadly, this quote speaks to the HUGE question, “Is using data to generate insights about a population a use of personal information at all?” The quote suggests the commissioners say “yes,” though there is a reference to data matching done by Facebook that may render this scenario unique. (I like to believe the answer is “no.”)
On October 9th, Justice McHaffie of the Federal Court held that firearm serial numbers, on their own, are not personal information. His ratio is nicely stated in paragraphs 1 and 2, as follows:
Information that relates to an object rather than a person, such as the firearm serial numbers at issue in this case, is not by itself generally considered “personal information” since it is not information about an identifiable individual. However, such information may still be personal information exempt from disclosure under the Access to Information Act, RSC 1985, c A-1 [ATIA] if there is a serious possibility that the information could be used to identify an individual, either on its own or when combined with other available information.
The assessment of whether information could be used to identify an individual is necessarily fact-driven and context-specific. The “other available information” relevant to the inquiry will depend on the nature of the information being considered for release. It will include information that is generally publicly available. Depending on the circumstances, it may also include information available to only a segment of the public. However, it will not typically include information that is only in the hands of government, given the purposes of both the ATIA and the personal information exemption.
This is not a bright line test, though Justice McHaffie did say that the threshold should be more privacy protective than if the “otherwise available information” requirement was limited to publicly available information or even information available to “an informed and knowledgeable member of the public.”
I’d encourage you to read David Fraser’s blog post from last weekend – The value of legal privilege: Your diligent privacy consultant may become your worst enemy.
David’s basic point is sound: structuring a security or privacy expert retainer to support a privilege claim can prevent your own expert’s advice from being used against you. Most often this is done by having legal counsel retain an expert in anticipation of litigation and for the dominant purpose of litigation, with instructions and conclusions going strictly between counsel and expert.
David explains a scenario in which an organization retained an expert to advise on some form of due diligence connected to a subsequent security incident. The expert was apparently quite candid in its written advice, outlining a security problem that amounted to what David compares to a “dumpster fire.” The organization responded partly but not wholly to the expert’s recommendations. That expert’s report will therefore become, as David says, the plaintiff’s Exhibit A.
Being faced with your own expert’s advice is very bad, hence the soundness of David’s point. My additional point: legal privilege is no solution to a bad client-counsel-expert relationship.
Views on what constitutes a reasonable investigation or remediation in the data security context can vary widely between equally qualified experts. Too often, perhaps driven by conflicting interests, security experts recommend what’s possible rather than what is “due.” A breach coach can help address this problem, identifying trusted experts and working with them to reach a shared and acceptable understanding of the due diligence required in responding to a security incident. With such a relationship, departing from an expert’s recommendations (even though they are privileged) represents a real and meaningful risk. The facts – i.e., the things done based on an expert’s recommendations – are never privileged. If litigation ensues those facts will be picked apart by other experts, and you want the good ones to view the facts the same way as you and your trusted advisor.
Experts that are prone to floating long lists of options need to be retained under privilege because they are dangerous, but even under privilege their advice is worth little. The prescription: do everything you can to build a great client-counsel-expert relationship. Use a breach coach. Keep a roster of trusted experts on retainer. Don’t use experts retained for due diligence advice to do the very remedial work they recommend.
As reported widely, yesterday the Court of Appeal for Ontario affirmed an IPC/Ontario finding that gross revenue earned by Ontario’s top earning doctors was not their personal information.
There’s not much to the decision. (A number of the grounds for appeal were “optimistic.”) The decision illustrates that information must reveal something of a personal nature about an individual (in the relevant context) to be the individual’s personal information. In the doctors’ case, the link between gross revenue and the physicians’ personal finances was not strong, as noted by the Court:
The information sought was the affected physicians’ gross revenue before allowable business expenses such as office, personnel, lab equipment, facility and hospital expenses. The evidence before the Adjudicator indicated, however, that, in the case of these 100 top billing physicians, those expenses were variable and considerable.
In another context, gross revenue information could be personal information. What is and is not personal information is a VERY contextual matter.
On Friday, the Office of the Privacy Commissioner of Canada issued a new position on the protection of online reputation. In doing so the OPC recognized a right to have personal information de-indexed from search engine results if it is inaccurate, incomplete or out-of-date. Although the position is in draft, it is nonetheless of critical significance to Canadians’ use of the internet – creating a broader variant of the so-called European “right to be forgotten.”
The OPC says the right arises out of two longstanding parts of the Personal Information Protection and Electronic Documents Act – Principle 4.6 and section 5(3).
Principle 4.6 is the accuracy principle. It reads as follows:
4.6 Principle 6 — Accuracy
Personal information shall be as accurate, complete, and up-to-date as is necessary for the purposes for which it is to be used.
The extent to which personal information shall be accurate, complete, and up-to-date will depend upon the use of the information, taking into account the interests of the individual. Information shall be sufficiently accurate, complete, and up-to-date to minimize the possibility that inappropriate information may be used to make a decision about the individual.
An organization shall not routinely update personal information, unless such a process is necessary to fulfil the purposes for which the information was collected.
Personal information that is used on an ongoing basis, including information that is disclosed to third parties, should generally be accurate and up-to-date, unless limits to the requirement for accuracy are clearly set out.
Principle 4.6 dovetails in part with Principle 4.9, which requires organizations to “amend” personal information if it is demonstrably “inaccurate or incomplete.” (Principle 4.9 does not mention currency.)
The OPC’s reasoning is simple. Search engines use and disclose personal information to “provide people with access to relevant information from the most reliable sources available.” This purpose is not served by presenting search results that are not accurate, complete or up-to-date. Though accuracy, completeness and currency are the key concepts, the OPC says that search engines should interpret and apply them in light of how materially the impugned content affects individuals’ interests and the countervailing (public) interest in continued accessibility.
Section 5(3) of PIPEDA restricts organizations to handling personal information for purposes that a “reasonable person would consider are appropriate under the circumstances.” The OPC says that section 5(3) could also be the basis of a valid de-indexing request, giving the following two examples:
- Where content is unlawful, or unlawfully published (e.g. where it contravenes a publication ban, is defamatory, or violates copyright; etc.)
- Where the accessibility of the information may cause significant harm to the individual, and there is either no public interest associated with the display of the search result, or the harm, considering its magnitude and likelihood of occurrence, outweighs any public interest
This newly-recognized right invites de-indexing requests to search engines as the primary means of obtaining relief from online reputational harm, though the OPC has also recognized a right to take down content. The right to take down content is a more limited right, in part because the OPC only has jurisdiction over those who publish personal information “in the course of commercial activity.”
The significance of the new position cannot be overstated; there are many Canadians who feel plagued by internet posts that are unflattering if not disparaging. Search engines will not embrace this development – leaving a possibility of an enforcement dispute (and Federal Court input) and vigorous lobbying for a legislative amendment. It may take some time, but watch for a Charter challenge.
You can read the draft report here.
When an employer confronts an employee with an allegation of improper access to personal information, it is important to give the employee the event log data that proves the allegation. It may often be voluminous and difficult to interpret, but presenting a general allegation or summarizing events without particulars will give the employee a good reason to deny the allegation.
This is what happened in this very illustrative British Columbia case in which an arbitrator held he could not infer dishonesty from the grievor’s initial failure to admit wrongdoing because the grievor had not been given log data. Also, if an employee continues to deny responsibility, log data can be difficult to rely upon; even if it can be established to be authentic, there are issues about presenting log data in a meaningful and privacy-protective way. An early admission can go a long way.
On November 28th the Nova Scotia Court of Appeal held that the Nova Scotia Workers’ Compensation Appeals Tribunal erred by ordering the disclosure of a worker’s entire file without redaction.
The matter was about a workplace safety insurance claim, and particularly whether a worker’s condition was caused by his work. The Tribunal made the order in response to an employer’s objection to various redactions made to a set of records in the possession of the Workers’ Compensation Board. Although the employer argued the redacted information was relevant, the Tribunal ordered the unredacted file to be produced because it lacked the resources to vet for relevance, because fairness and the “ebb and flow” of a hearing supported full disclosure, and because of the difficulty in making relevance determinations.
Despite the obvious appearance of laziness, the Tribunal framed its decision as rooted in procedural fairness. In response, the Court said: “…there is no principle of procedural fairness… that a litigant who requests disclosure is entitled to see every document it requests, regardless of relevance and without a relevance ruling by an impartial arbiter.”
Implicit in this statement is a concern for the worker’s privacy interest. The Tribunal had recognized this interest in a policy manual that it disregarded in making its order, though there are aspects of the Court’s reasoning that suggest a more broadly based right to redaction.
The Court gave this guidance on how to vet for relevance:
The person who vets for relevance must keep in mind that material should be disclosed for its connection to the “proposition[s] being advanced” by the parties, to borrow Justice Rothstein’s phrase, and not merely to justify an anticipated conclusion on the merits of those propositions. The vetting official may not be able to foretell precisely how the evidence will be martialed. So the ambit of disclosure should allow the parties some elbow room to strategize for the engagement.
Who is the “health information custodian” when an institution with an educational mandate provides health care? PHIPA gives institutions a choice. Here’s a presentation I gave yesterday in which I argue that the institution (and not its employed practitioners) should assume the role of the HIC. It also includes some simple content on the new PHIPA breach notification amendment.