Recent cyber presentations

Teaching is the best way of learning for some, including me. Here are two recent cyber security presentations that may be of interest:

  • A presentation from last month on “the law of information” that I delivered to participants in the Osgoode PDP program on cyber security
  • Last week’s presentation for school boards – Critical Issues in School Board Cyber Security

If you have questions please get in touch!

Where’s that workplace surveillance bill? More thoughts pending its release

It’s Friday at 4:20pm and I don’t see an Ontario workplace surveillance bill yet, so here are a few more thoughts – one positive, one negative and one neutral.

Positive – Organizations ought to employ “information technology asset management” – a process for governing their network hardware and software. Those organizations with strong asset management practices will have little difficulty identifying how employees are “monitored.” For those who are weak asset managers, the new bill is an invitation to improve and to root out unmanaged applications.
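
The asset management idea above can be illustrated with a minimal sketch. All inventory entries, application names and attributes here are hypothetical – the point is only that a managed inventory lets you mechanically flag unmanaged (shadow) software and surface the tools that would need disclosure under a monitoring-notification rule.

```python
# Minimal sketch of an IT asset management check: compare software observed
# on the network against a managed inventory, flagging unmanaged applications
# and surfacing managed tools recorded as monitoring employees.
# All names and data below are hypothetical illustrations.

managed_inventory = {
    "endpoint-av": {"owner": "IT Security", "monitors_employees": False},
    "mdm-agent":   {"owner": "IT Ops",      "monitors_employees": True},
    "vpn-client":  {"owner": "IT Ops",      "monitors_employees": False},
}

observed_software = ["endpoint-av", "mdm-agent", "freeware-screen-recorder"]

def audit(observed, inventory):
    # Anything observed but not inventoried is unmanaged ("shadow IT").
    unmanaged = [name for name in observed if name not in inventory]
    # Anything inventoried as monitoring employees is a disclosure candidate.
    monitoring = [name for name in observed
                  if inventory.get(name, {}).get("monitors_employees")]
    return unmanaged, monitoring

unmanaged, monitoring = audit(observed_software, managed_inventory)
print("Unmanaged applications:", unmanaged)
print("Monitoring tools to disclose:", monitoring)
```

A real program would draw its observed-software list from endpoint management tooling rather than a hard-coded list, but the governance logic is the same.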

Negative – As I said yesterday, the devil will be in the detail, and the scope of the “monitoring” that is regulated will be key. Monitoring must be defined in a way that does not affect non-routine processes – i.e., audits and investigations. Those raise a different kind of privacy concern, and a notification requirement shouldn’t frustrate an organization’s ability to investigate.

Neutral – Organizations typically keep security controls confidential to protect against behavior we call “threat shifting” – the shifting of tactics to circumvent existing, known controls. I’m doubtful the type of disclosure the bill will require will create a security risk, but it’s an issue to consider when we see the text.

Bring on the bill!

Cyber class action claims at an inflection point

Yesterday, I happily gave a good news presentation on cyber claims legal developments to an audience of insurance defence lawyers and professionals at the Canadian Insurance Claims Managers Association – Canadian Independent Adjusters’ Association – Canadian Defence Lawyers joint session.

It was good news because recent case law developments have created legal constraints on pursuing various common claims scenarios, namely:

  • The lost computer, bag or other physical receptacle scenario – always the most benign, with notification alone unlikely to give rise to compensable harm, a trial judgment looking favourably on a one-year credit monitoring offer, and proof of causation of actual fraud a long shot at best
  • The malicious outsider scenario – for the time being looking like it will not give rise to moral damages that flow from an intentional wrong (though this will be the subject of a Court of Appeal for Ontario hearing soon in Owsianik)
  • The malicious insider scenario – partly addressed by a rather assertive Justice Perell finding in Thompson

We’re far from done yet, but as I say in the slides below, we’re at the early stages of an inflection point. I also give my cynical and protective practical advice – given the provable harms in the above scenarios flow mainly from the act of notification itself, notify based on a very strong analysis of the facts and evidence; never notify because there’s a speculative risk of unauthorized access or theft. Never a bad point to stress.

Cyber security for the regulator and regulated

On Monday I addressed an audience at the Ontario Regulatory Authorities continuing professional development conference on the topic of cybersecurity. It was a good chance to record an updated and concise view of the Canadian threat environment along with the cyber defence and incident response issues facing Canadian organizations. Here are the slides for your reading pleasure.

Cybersecurity governance and the empowerment of corporate leadership

I had the honour of presenting on cybersecurity oversight today at the Association of Workers’ Compensation Boards of Canada annual Governance Summit. The theme ended up being about leadership and empowerment. I’d like board members to believe that the information security knowledge they require to meet their duties is well within their grasp and to feel a little excited about the learning process. Slides below FYI.

Manitoba Ombudsman blesses response to e-mail incident

Manitoba Ombudsman Jill Perron has issued her report into Manitoba Families’ 2020 e-mail incident. The incident involved the inadvertent e-mailing of personal health information belonging to 8,900 children in receipt of disability services to approximately 100 external agencies and community advocates. It is such a common incident that it is worth outlining the Ombudsman’s incident response findings.

Manitoba Families meant to transfer the information to the Manitoba Advocate for Children and Youth to support a program review. It included information about services received. Some records included diagnoses.

Manitoba Families mistakenly blind copied the external agencies and advocates on an e-mail that included the information in an encrypted file and a follow-up e-mail that included the password to the file. It had made the same mistake about a week earlier. Several agencies alerted Manitoba Families to its error, and it began containment within a half hour.

The Ombudsman held that Manitoba Families’ containment effort was reasonable. She described it as follows.

Attempts at recalling the email began minutes later at 8:29 a.m. and continued at various intervals. Also, at 8:35 a.m., CDS sent an email to all unintended recipients noting in bold that they were incorrectly included on a confidential email from Children’s disAbility Services and requested immediate deletion of the email and any attachments. Follow up calls to the unintended recipients by CDS program staff began to occur that morning to request deletion of the emails and a list was created to track these calls and the outcomes. A communication outline was created for these calls which included a request to delete emails, a further request that emails be deleted from the deleted folder and that any emails that went to a junk email folder also be deleted…

In January 2021, we received additional written communication from the program stating that all agency service providers and advocates were contacted and verified deletion of the personal health information received in error. The log form created to track and monitor the name of the organization, the date and details of the contact was provided to our office.

The Ombudsman reached a similar finding regarding Manitoba Families’ notification effort, though she needed to recommend that Manitoba Families identify the agencies and advocates to affected individuals, which Manitoba Families agreed to do upon request.

What’s most significant – especially given class action proceedings have been commenced – is a point the Ombudsman made about evidence that Manitoba Families appears not to have gathered.

In addition to assuring families about the deletion of the email, additional information such as who viewed the email, if the attachment was opened and read, whether it was forwarded to anyone else or printed, whether it was stored in any other network drive or paper file or, conversely, that no records exist – can be helpful information to provide those affected by a privacy breach. It is best practice, therefore, to provide families with as much assurance as possible about the security of their child’s health information.

The question is, what is one to make of an arguable shortcoming in an incident response investigation? I say “arguable” because the probability of any of these actions occurring is very low in the unique circumstances of this incident, which involved trusted individuals receiving a password-protected and encrypted file. Manitoba Families ought to have collected this evidence: it called the e-mail recipients anyway, the evidence is helpful, and it was probably available for collection. If it did not do so, however, I believe it is perfectly acceptable for Manitoba Families to stand by the scope of a narrower investigation and put the plaintiff to proof.

PHIA Case 2020-1304

The Five Whys, the discomfort of root cause analysis and the discipline of incident response

Here is a non-law post to pass on some ideas about root cause analysis, The Five Whys, and incident response.

This is inspired by having finished reading The Lean Startup by Eric Ries. It’s a good book end-to-end, but Ries’ chapter on adaptive organizations and The Five Whys was most interesting to me – inspiring even!

The Five Whys is a well-known analytical tool that supports root cause analysis. Taiichi Ohno, the father of the Toyota Production System, described it as “the basis of Toyota’s scientific approach.” By asking why a problem has occurred five times – therefore probing five causes deep – Ohno says, “the nature of the problem as well as its solution becomes clear.” Pushing to deeper causes of a failure is plainly important; if only the surface causes of a failure are addressed, the failure is near certain to recur.

Ries, in a book geared to startups, explains how to use The Five Whys as an “automatic speed regulator” in businesses that face failures in driving rapidly to market. The outcome of The Five Whys process, according to Ries, is to make a “proportional” investment in corrections at each of the five layers of the causal analysis – proportional in relation to the significance of the problem.
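
The Five Whys process described above can be sketched in a few lines of code. The incident, causes and corrections below are entirely hypothetical illustrations (not Ries’ or Toyota’s examples); the sketch just shows the shape of the exercise – a chain of five why/because pairs, each paired with a correction.

```python
# Sketch of a Five Whys record: five why/because pairs probing from the
# surface symptom toward a systemic root cause, each layer paired with a
# corrective investment. The incident and causes are invented illustrations.

five_whys = [
    ("Why did the outage occur?",            "A server ran out of disk space"),
    ("Why did it run out of disk space?",    "Log rotation was disabled"),
    ("Why was log rotation disabled?",       "A deploy script overwrote the config"),
    ("Why did the script overwrite it?",     "The script was never code-reviewed"),
    ("Why was it never reviewed?",           "No review policy exists for ops scripts"),
]

def plan_fixes(chain):
    """Pair each cause with a correction. As a rough illustration, shallow
    causes get quick patches and deeper causes get process changes."""
    fixes = []
    for depth, (question, cause) in enumerate(chain, start=1):
        fixes.append({
            "depth": depth,
            "cause": cause,
            "investment": "surface patch" if depth <= 2 else "process change",
        })
    return fixes

for fix in plan_fixes(five_whys):
    print(f"{fix['depth']}. {fix['cause']} -> {fix['investment']}")
```

Note the sizing rule here is a placeholder – Ries’ actual guidance is to size each investment to the significance of the problem, a judgment call no script can make.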

Of course, root cause analysis is part of security incident response. The National Institute of Standards and Technology suggests that taking steps to prevent recurrences is part of both the eradication and recovery phase and the post-incident phase. My own experience is that root cause analysis in incident response is often done poorly – with remedial measures almost always targeted at surface-level causes. What I did not understand until reading Ries is that conducting the kind of good root cause analysis associated with The Five Whys is HARD.

Ries explains that conducting root cause analysis without a strong culture of mutual trust can devolve into The Five Blames. He gives some good tips on how to implement The Five Whys despite this challenge: establishing norms around accepting the first mistake, starting with less than the full analytical process and using a “master” from the executive ranks to sponsor root cause analysis.

From my perspective, I’ll now expect a little less insight out of clients who are in the heat of crises. It may be okay to go a couple levels deep while an incident is still live and while some process owners are not even apprised of the incident – just deep enough to find some meaningful resolutions to communicate to regulators and other stakeholders. It may be okay to tell these stakeholders “we will [also] look into our processes and make appropriate improvements to prevent a recurrence” – text frequently proposed by clients for notification letters and reports.

What clients should do, however, is commit to conducting good root cause analysis as part of the post-incident phase:

  • Write The Five Whys into your incident response policy.
  • Stipulate that a meeting will be held.
  • Stipulate that everyone with a share of the problem will be invited.
  • Commit to making a proportional investment to address each identified cause.

Ries would lead us to believe that this will be both unenjoyable and invaluable – good reason to use your incident response policy to help it become part of your organization’s discipline.

Cyber defence basics – Maritime Connections

I was pleased to do a cyber defence basics presentation to privacy professionals attending the Public Service Information Community Connection “Maritime Connections” event yesterday. The presentation (below) is based on recent publications by the New York Department of Financial Services and the Information Commissioner’s Office (UK), as well as the (significant) Coveware Q3 ransomware report.

As I said to the attendees, I am not a technical expert and no substitute for one, but those of us outside of IT and IT security who work in this space (along with the predominantly non-technical management teams we serve) must engage with the key technical concepts underpinning IT security if we are to succeed at cyber defence.

I’ll do an updated version at Saskatchewan Connections next week. Join us!

The twelve security failures underscoring the ICO’s recent £500,000 fine

On February 10th the Information Commissioner’s Office fined Cathay Pacific £500,000 for breaching the security principle established under the UK Data Protection Act. Here are the twelve security failures that were the basis of the finding (with underlined text in the ICO’s words plus my annotation):

    • The database backups were not encrypted. The ICO said this was a departure from company policy undertaken due to a data migration project, but a company approval and risk mitigation requirement was apparently not followed.
    • The internet-facing server was accessible due to a known and publicized vulnerability. The Common Vulnerabilities and Exposure website listed the vulnerability approximately seven years before it was exploited, said the ICO.
    • The administrator console was publicly accessible via the internet. This was done to facilitate vendor access, without a risk assessment according to the ICO. The ICO said the company ought to have used a VPN to enable vendor access.
    • System A was hosted on an operating system that was (and is) no longer supported. The ICO noted that the company neither replaced the system nor purchased extended support.
    • Cathay Pacific could not provide evidence of adequate server hardening.
    • Network users were permitted to authenticate past the VPN without multi-factor authentication. The ICO noted that this allowed the attackers to mis-use stolen credentials (pertaining to a 41,000 user base).
    • The anti-virus protection was inadequate. This was apparently due to operating system compatibility problems (on an operating system other than the legacy system on System A).
    • Patch management was inadequate. Logs were missing on some systems, the ICO said. It also noted that one server was missing 16 updates that resolved publicly known vulnerabilities, 12 of which were described as “easily exploitable.”
    • Forensic evidence was no longer available during the Commissioner’s investigation. The ICO said that server images analyzed in the post-incident investigation were not retained and provided to the ICO.
    • Accounts were given inappropriate privileges. “Day-to-day” user accounts were given administrator privileges according to the ICO.
    • Penetration testing was inadequate. The ICO said three years without penetration testing was inadequate given the quantity and nature of the information at issue, which included passport numbers.
    • Retention periods were too long. It appears (though is not clear) that transaction data was preserved indefinitely and that user data was purged after seven years of inactivity.
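
The patch management failure above – one server missing 16 updates resolving publicly known vulnerabilities – is the kind of gap a routine automated check catches. Here is a minimal sketch; the host names, update identifiers and data are invented for illustration, and a real implementation would pull installed-update data from endpoint tooling rather than a hard-coded table.

```python
# Hypothetical sketch of a patch-management check of the kind whose absence
# the ICO criticized: flag hosts missing updates that resolve publicly known
# vulnerabilities. Host names and update IDs are invented illustrations.

# Updates that fix publicly disclosed vulnerabilities (e.g., listed CVEs).
known_exploitable_fixes = {"KB-101", "KB-105", "KB-112"}

hosts = {
    "web-01": {"installed": {"KB-101", "KB-105", "KB-112"}},
    "web-02": {"installed": {"KB-101"}},
}

def missing_critical(hosts, required):
    # Report, per host, any required updates not yet installed.
    return {name: sorted(required - data["installed"])
            for name, data in hosts.items()
            if required - data["installed"]}

for host, missing in missing_critical(hosts, known_exploitable_fixes).items():
    print(f"{host} is missing critical updates: {missing}")
```

The regulatory lesson is less about the code than the record it produces: logs of checks like this are exactly the evidence Cathay Pacific could not supply.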

£500,000 is the maximum fine. The ICO said it was warranted, in part, because the failures related to “fundamental principles.” The failure to retain evidence was another notable factor.

Good quotes on the impossibility of “ensuring” security and achieving zero risk

I blogged about Arbitrator Surdykowski’s decision in Providence Health when it was released in 2011 for its ratio – employers are entitled to more than a bare medical certification when an employee is absent from work.

I had occasion to use the case in a matter I argued yesterday, and was pleasantly surprised to re-read what Arbitrator Surdykowski said about data security and the impossibility of “ensuring” data security. The union had made an argument for minimizing the collection of health information that rested on data security risk, to which Mr. Surdykowski replied:

I agree with the Union’s assertion that there is always a possibility that private and confidential medical information may be inadvertently released or used inappropriately.  Try as they might, it is impossible for anyone to absolutely guarantee information security.  All that anyone can do in that respect is the best they can.  There is nothing before me that suggests the extent to which the inadvertent (or intentional) release or misuse of confidential information occurs, either generally or at the workplaces operated by this Employer.  More specifically, there is no indication of how often it happens, if at all, or that best efforts are demonstrably “not good enough”.

In a perfect world, the security and proper use of confidential private medical (or other) information could and would be guaranteed.  But to be perfect the world would have to be populated by perfect human beings.

This is a nice quote to bring forward in this blog, of course, because it’s always good to remind ourselves (and others) that the mere happening of a security incident doesn’t mean fault!

It’s a hard point to argue when hindsight bears heavily on a decision-maker, but it is indisputable. I once defended an employer against a charge that followed a rather serious industrial accident in which an employee at a truck dealership was run over by a tractor. The Court of Appeal held that the tractor wasn’t a “vehicle” for the purposes of the Occupational Health and Safety Act and entered an acquittal. In examining the context for this finding, Justice Cronk made the same point as Arbitrator Surdykowski:

That said, consideration of the protective purposes of the legislative scheme is not the only consideration when attempting to ascertain the scope of s. 56 of the Regulation. The Act seeks to achieve “a reasonable level of protection” (emphasis added) for workers in the workplace. For obvious reasons, neither the Act nor the Regulation mandate or seek to achieve the impossible — entirely risk-free work environments.

Every security incident is an opportunity to tell a story about pre-incident due diligence that highlights this basic truth. (If your defence rests on our horrendously vague privacy law, you’re in trouble, I say.) It’s also reason to hold our tongues and not judge organizations who are victimized, at least before learning ALL the facts. Security incidents are complex. Data security is hard.