I had the honour of presenting on cybersecurity oversight today at the Association of Workers’ Compensation Boards of Canada annual Governance Summit. The theme ended up being about leadership and empowerment. I’d like board members to believe that the information security knowledge they require to meet their duties is well within their grasp and to feel a little excited about the learning process. Slides below FYI.
Last autumn, the Ontario government struck an expert panel of cyber advisors. Among other things, it gave the panel a mandate to “assess and identify common and sector-specific cyber security themes and challenges encountered by Broader Public Sector (BPS) agencies and service delivery partners in Ontario.”
The panel got to work quickly, and in late 2020 gathered feedback from panel members and BPS stakeholders to produce an interim report under the name of its Chair, Robert Wong. The interim report is as unsurprising as it is alarming, speaking to wide-ranging maturity levels that derive from under-resourcing as well as failures of governance. It includes characterizations of well-understood governance challenges in the university, school board and health care sectors. On universities, for example, the Chair reports:
Even in institutions with relatively strong and mature corporate governance practices, there are still significant challenges to effectively manage cyber security risks that result from competing priorities and inconsistent application of oversight and policies. For example, funding in higher education comes from various sources and is allocated based on various criteria. Some university research groups that have successfully secured grants or private sponsorship dollars often have a sense of entitlement and feel that because it is their money, they get to call the shots and ignore cyber security concerns when they procure technology tools. Why don’t universities impose the same cyber security requirements on their researchers as they do on other faculty and staff?
Notably, the Chair says, “A regional-based shared-services model may be the only viable option for the smaller players to be able to afford and gain access to the limited availability of technical expertise in the marketplace.”
He also makes the following two interim recommendations, one to government and another to BPS entities themselves:
1. That the National Institute of Standards and Technology (NIST) Cybersecurity Framework be endorsed by the Government of Ontario for the Broader Public Sector’s cyber security practices. If an entity has already adopted a cyber security framework other than that of NIST, the expectation is that they map the framework they are using to the NIST framework to ensure alignment and consistency. Understanding that BPS entities vary in size and risk-profile, it is reasonable to expect that the breadth and depth to which the NIST Cybersecurity Framework is implemented will also vary accordingly, following a risk-based approach. To assist small- and medium-sized organizations in adopting and implementing the NIST framework, the Canadian Centre for Cyber Security’s “Baseline Cyber Security Controls for Small and Medium Organizations” is a useful guide that provides the fundamental requirements for an effective cyber security practice that aligns with the NIST framework.
2. That all BPS entities implement a Cyber Security Education and Awareness Training Program. The content of the training materials shall be maintained to ensure currency of information. New employees shall receive the training immediately after joining the company as part of their orientation program, and all existing employees shall receive refresher training on an annual basis, at a minimum. Information Technology and cyber security specialists shall receive regular cyber security technical training to ensure their skills are kept current. Specialized educational materials may be developed that would be appropriate for boards of directors, senior executives and any other key decision-makers. Effective management of cyber security risks requires the efforts and commitment of everyone and cannot simply be delegated to the cyber security professionals. A strong “tone-at-the-top” is a critical success factor to strengthen the cyber security resilience of BPS service delivery partners.
The panel is not a standard-setting entity, but the second recommendation does establish a standard to which BPS entities now ought to strive. Of course, this raises the question of resourcing. Minister Lisa Thompson’s response to the interim report suggests that the government’s assistance will be indirect, via the Cyber Security Centre of Excellence’s learning portal.
I like speaking about incident response because there are so many important practical points to convey. Every so often I re-consolidate my thinking on the topic and do up a new slide deck. Here is one such deck from this week’s presentation at the Canadian Society of Association Executives Winter Summit. It includes an adjusted four-step description of the response process that I’m content with.
We’ve been having some team discussions over here about how incident response plans can be horribly over-built and unusable. I made the point in presenting this that one could take the four-step model set out in this deck, add a modest amount of “meat” to the process (starting with assigning responsibilities) and append some points on how specific scenarios might be handled, based on simple discussion if not a bona fide tabletop exercise.
Preparing for a cyber incident isn’t and shouldn’t be hard, and simple guidance is often most useful for dealing with complex problems.
Here’s a copy of a presentation I gave yesterday at the High Technology Crime Investigation Association virtual conference. It addresses the cyber security pressures on public bodies that arise out of access-to-information legislation, with a segment on how public sector incident response differs from incident response in the private sector.
On February 10th the Information Commissioner’s Office fined Cathay Pacific £500,000 for breaching the security principle established under the UK Data Protection Act. Here are the twelve security failures that were the basis of the finding (with underlined text in the ICO’s words plus my annotation):
- The database backups were not encrypted. The ICO said this was a departure from company policy undertaken due to a data migration project, but a company approval and risk mitigation requirement was apparently not followed.
- The internet-facing server was accessible due to a known and publicized vulnerability. The Common Vulnerabilities and Exposures website listed the vulnerability approximately seven years before it was exploited, said the ICO.
- The administrator console was publicly accessible via the internet. This was done to facilitate vendor access, without a risk assessment according to the ICO. The ICO said the company ought to have used a VPN to enable vendor access.
- System A was hosted on an operating system that was (and is) no longer supported. The ICO noted that the company neither replaced the system nor purchased extended support.
- Cathay Pacific could not provide evidence of adequate server hardening.
- Network users were permitted to authenticate past the VPN without multi-factor authentication. The ICO noted that this allowed the attackers to misuse stolen credentials (pertaining to a 41,000 user base).
- The anti-virus protection was inadequate. This was apparently due to operating system compatibility problems (on an operating system other than the legacy system on System A).
- Patch management was inadequate. Logs were missing on some systems, the ICO said. It also noted that one server was missing 16 updates that resolved publicly known vulnerabilities, 12 of which were described as “easily exploitable.”
- Forensic evidence was no longer available during the Commissioner’s investigation. The ICO said that server images analyzed in the post-incident investigation were not retained and provided to the ICO.
- Accounts were given inappropriate privileges. “Day-to-day” user accounts were given administrator privileges according to the ICO.
- Penetration testing was inadequate. The ICO said three years without penetration testing was inadequate given the quantity and nature of the information at issue, which included passport numbers.
- Retention periods were too long. It appears (though it is not clear) that transaction data was preserved indefinitely and that user data was purged after seven years of inactivity.
£500,000 is the maximum fine. The ICO said it was warranted, in part, because the failures related to “fundamental principles.” The failure to retain evidence was another notable factor.
I blogged about Arbitrator Surdykowski’s decision in Providence Health when it was released in 2011 for its ratio – employers are entitled to more than a bare medical certification when an employee is absent from work.
I had occasion to use the case in a matter I argued yesterday, and was pleasantly surprised to re-read what Arbitrator Surdykowski said about data security and the impossibility of “ensuring” data security. The union had made an argument for minimizing the collection of health information that rested on data security risk, to which Mr. Surdykowski replied:
I agree with the Union’s assertion that there is always a possibility that private and confidential medical information may be inadvertently released or used inappropriately. Try as they might, it is impossible for anyone to absolutely guarantee information security. All that anyone can do in that respect is the best they can. There is nothing before me that suggests the extent to which the inadvertent (or intentional) release or misuse of confidential information occurs, either generally or at the workplaces operated by this Employer. More specifically, there is no indication of how often it happens, if at all, or that best efforts are demonstrably “not good enough”.
In a perfect world, the security and proper use of confidential private medical (or other) information could and would be guaranteed. But to be perfect the world would have to be populated by perfect human beings.
This is a nice quote to bring forward in this blog, of course, because it’s always good to remind ourselves (and others) that the mere happening of a security incident doesn’t mean fault!
It’s a hard point to argue when hindsight bears heavily on a decision-maker, but it is indisputable. I once defended an employer in a charge that followed a rather serious industrial accident in which an employee at a truck dealership was run over by a tractor. The Court of Appeal held that the tractor wasn’t a “vehicle” for the purposes of the Occupational Health and Safety Act and entered an acquittal. In examining the context for this finding, Justice Cronk made the same point as Arbitrator Surdykowski:
That said, consideration of the protective purposes of the legislative scheme is not the only consideration when attempting to ascertain the scope of s. 56 of the Regulation. The Act seeks to achieve “a reasonable level of protection” (emphasis added) for workers in the workplace. For obvious reasons, neither the Act nor the Regulation mandate or seek to achieve the impossible — entirely risk-free work environments.
Every security incident is an opportunity to tell a story about pre-incident due diligence that highlights this basic truth. (If your defence rests on our horrendously vague privacy law you’re in trouble, I say.) It’s also reason to hold our tongues and not judge organizations who are victimized, at least before learning ALL the facts. Security incidents are complex. Data security is hard.
Here’s the second paper that relates to the panel I will be sitting on later this week. It is a collection of FOI case digests about the hacking threat with a covering thesis about the need for greater protection from disclosure. This one particularly caught my interest and includes some ideas I will come back to. Enjoy, and again, please send comments by PM.
Here’s a 10 minute presentation I gave to the firm yesterday that puts some trends in context and addresses recent breach notification amendments.
CORRECTION. I made a point in this presentation that the Bill 119 amendments to PHIPA remove a requirement to notify of unauthorized “access” – a positive add given the statute does not include a harms-related threshold for notification. Section 1(2) of the Bill, I have now noticed, amends the definition of “use” as follows: “The definition of ‘use’ in section 2 of the Act is amended by striking out ‘means to handle or deal with the information” and substituting ‘means to view, handle or otherwise deal with the information.’ The removal of “access” from the breach notification provision will therefore not invite a change.