Organizations currently engaged in pandemic planning ought to consider the data and cybersecurity risks associated with the rapid adoption of telework. Planning should start now, with the following considerations in mind.
Remote access risks. Secure remote access should continue to be a requirement. In general, this means access through a virtual private network and multi-factor authentication. Though understandable, “band-aid” solutions that enable remote access by departing from this requirement represent a significant risk. Some departure may be necessary, but any associated risks should be measured. In general, any solution that rests on the use of remote desktop protocol over the internet should be considered very high risk.
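As a rough illustration of that last point, an IT team can get a first read on whether a host answers on the standard RDP port with a plain TCP connection attempt. This is a minimal sketch only (a real exposure assessment needs more than a port check), and the helper name below is my own:

```python
import socket

def rdp_port_open(host: str, port: int = 3389, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the given port succeeds.

    Port 3389/tcp answering on an internet-facing address is a strong
    hint that remote desktop protocol is directly exposed -- the "very
    high risk" configuration discussed above.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, or unresolvable: not reachable on this port.
        return False

# Example against the local machine; returns False unless RDP is listening.
rdp_port_open("127.0.0.1", timeout=0.5)
```

A `True` result from outside the perimeter is a prompt for follow-up, not proof of compromise; the fix is usually to put the service behind the VPN rather than to filter the port ad hoc.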
Data leakage risks. Efforts should be made to keep all data classified as non-public on the organization’s systems. This can be accomplished by issuing hardware to take home or through secure remote access technology. The use of personal hardware is an option that should be used together with a well-considered BYOD policy. Printing and other causes of data leakage should be addressed through administrative policy or direction. Consider providing direction on where and how to conduct telephone calls in a confidential manner.
Credential risks. New classes of workers may need to be issued new credentials. Although risks related to poor credential handling can be mitigated by the use of multi-factor authentication, clear and basic direction on password use may be warranted. Some have said that phishing attacks may increase as businesses become more vulnerable while deploying new systems and adjusting. While speculative, a well-timed reminder of phishing risks may help.
Incident response risks. Quite simply, will your incident response plan still function when the workforce is dispersed and when key decision-makers may be sick? Who from IT will be responsible for coming on-site? How long will that take? If decision-makers are sick, who will stand in? These questions are worth asking now.
Hat tip to my colleague Matin Fazelpour for his input on this post.
On February 10th the Information Commissioner’s Office fined Cathay Pacific £500,000 for breaching the security principle established under the UK Data Protection Act. Here are the twelve security failures that were the basis of the finding (with underlined text in the ICO’s words plus my annotation):
- The database backups were not encrypted. The ICO said this was a departure from company policy undertaken due to a data migration project, but a company approval and risk mitigation requirement was apparently not followed.
- The internet-facing server was accessible due to a known and publicized vulnerability. The Common Vulnerabilities and Exposures website listed the vulnerability approximately seven years before it was exploited, said the ICO.
- The administrator console was publicly accessible via the internet. This was done to facilitate vendor access, without a risk assessment according to the ICO. The ICO said the company ought to have used a VPN to enable vendor access.
- System A was hosted on an operating system that was (and is) no longer supported. The ICO noted that the company neither replaced the system nor purchased extended support.
- Cathay Pacific could not provide evidence of adequate server hardening.
- Network users were permitted to authenticate past the VPN without multi-factor authentication. The ICO noted that this allowed the attackers to misuse stolen credentials (drawn from a 41,000-user base).
- The anti-virus protection was inadequate. This was apparently due to operating system compatibility problems (on an operating system other than the legacy system on System A).
- Patch management was inadequate. Logs were missing on some systems, the ICO said. It also noted that one server was missing 16 updates that resolved publicly known vulnerabilities, 12 of which were described as “easily exploitable.”
- Forensic evidence was no longer available during the Commissioner’s investigation. The ICO said that server images analyzed in the post-incident investigation were not retained and therefore could not be provided to the ICO.
- Accounts were given inappropriate privileges. “Day-to-day” user accounts were given administrator privileges according to the ICO.
- Penetration testing was inadequate. The ICO said three years without penetration testing was inadequate given the quantity and nature of the information at issue, which included passport numbers.
- Retention periods were too long. It appears (though it is not clear) that transaction data was preserved indefinitely and that user data was purged after seven years of inactivity.
£500,000 is the maximum fine. The ICO said it was warranted, in part, because the failures related to “fundamental principles.” The failure to retain evidence was another notable factor.
On January 13th, the Court of Appeal for Ontario held that a convicted appellant did not have a reasonable expectation of privacy in “what could be seen and heard on [his] property from his neighbour’s [property].”
The police trespassed on a neighbour’s rural property to conduct surveillance, and they heard gunshots and saw two individuals with rifles outside of the appellant’s house. Based on these observations, the police obtained a warrant to search the appellant’s house. They ultimately secured one or more convictions on drug and weapons charges.
The Court held that, in the context, it did not matter that the police were trespassing. (The gunshots were loud, and the appellant’s property was abutted by a public road in any event.) It also held that the police did not obtain “personal information,” reasoning as follows:
What triggered the application for the first warrant was the sound of the discharge of a firearm – something that could scarcely be concealed – coupled with visual observations of persons outdoors either firing a rifle or holding a rifle. These were bare observations of physical acts. There was no personal information obtained.
This illustrates how the personal information concept is not as simple, and perhaps not as broad, as one might think. The facts observed clearly allowed the police to infer what was in the house and obtain, on the reasonable and probable grounds standard, a search warrant. Nonetheless, the Court held that the observations did not invite a collection of personal information.
R v Roy, 2020 ONCA 18 (CanLII).
On December 24th, the Court of Appeal for Ontario affirmed the dismissal of a breach of confidence claim because the plaintiff did not make out a “detriment.” Despite its affirmation, the Court held that the trial judge erroneously said that a breach of confidence plaintiff must prove “financial loss.” It explained, “The concept of detriment is not tied to only financial loss, but is afforded a broad definition, including emotional or psychological distress, loss of bargaining advantage, and loss of potential profits.”
CTT Pharmaceutical Holdings, Inc. v. Rapid Dose Therapeutics Inc., 2019 ONCA 1018 (CanLII).
Onus weighs heavily in resolving an access request for information in lawyer invoices. Given the presumption of privilege established by the Supreme Court of Canada in Maranda v Richer, an institution need only identify the information at issue as information in lawyer invoices and ride the Maranda v Richer presumption.
The University of Calgary did just that in claiming that “narrative information” in certain legal invoices was exempt from the right of public access. The OIPC rejected the University’s claim, relying on the basic statutory onus embedded in section 71(1) of the Alberta Freedom of Information and Protection of Privacy Act and noting that the University provided no affidavit evidence. Regarding Maranda v Richer, the adjudicator said:
A common law principle such as that articulated in Maranda and one which is not clearly applicable in a circumstance other than when the state is contemplating exercising a search warrant in a law office, cannot serve to supplant the clear statement of Legislative intent set out in section 71(1) – that a public body must prove its case.
The Court held that this reasoning invited a reversible error. Long live Maranda v Richer.
University of Calgary v Alberta (Information and Privacy Commissioner), 2019 ABQB 950 (CanLII).
Today’s presentation for your enjoyment and use!
The United States Secret Service has issued a follow-up to its landmark 2002 report that reinforces the need for sound institutional threat assessment procedures. The full new report is here. It is a noteworthy read for school administrators responsible for risk management and security.
Threat assessment is a process by which institutions aim to collect and process behavioral information that raises a potential concern so that, when appropriate, they can engage in “robust” intervention aimed at helping a student at risk and preventing violent acts. It has been a best practice at K-12 and post-secondary institutions in Canada for over a decade and should not be controversial, though it does invite tension with privacy and anti-discrimination laws. And though it’s very easy to understand that privacy interests and accommodation rights give way when an individual poses a risk of harm that is “serious and imminent,” good threat assessment rests on intervention at much lower risk levels. As the Secret Service’s new report states, “The threshold for intervention should be low, so that schools can identify students in distress before their behavior escalates to the level of eliciting concerns about safety.”
The Secret Service’s new report is based on an analysis of 41 incidents of targeted school violence that occurred at K-12 schools in the United States from 2008 to 2017.
The following statistic – about failures in reporting – is the first thing that caught my attention.
Canadian institutions encourage classmates to report concerning behaviors to a single point of contact and often mandate employees to make such reports. The new report tells us nothing about whether that is working in Canada, but it’s a good question to consider given the above.
The report also identifies that the attackers in four out of the 41 incidents were referred to threat assessment, which invited a response summarized in the following table:
In Cases 1 and 3 the attacker appears to have been mis-assessed. (See the full report on Case 3 here.) Cases 2 and 4 may relate to a prescription the Secret Service gives based on a statistic showing that 17 of the 41 attacks occurred after a break in school attendance: “These findings suggest that schools should make concerted efforts to facilitate positive student engagement following discipline, including suspensions and expulsions, and especially within the first week that the student returns to school.”