Cyber, secrecy and the public body

Here’s a copy of a presentation I gave yesterday at the High Technology Crime Investigation Association virtual conference. It addresses the cyber security pressures on public bodies that arise out of access-to-information legislation, with a segment on how public sector incident response differs from incident response in the private sector.

The twelve security failures underlying the ICO’s recent £500,000 fine

On February 10th the Information Commissioner’s Office fined Cathay Pacific £500,000 for breaching the security principle established under the UK Data Protection Act. Here are the twelve security failures that formed the basis of the finding (the lead sentence of each item is in the ICO’s words, followed by my annotation):

    • The database backups were not encrypted. The ICO said this was a departure from company policy, made for a data migration project, but a company approval and risk mitigation requirement was apparently not followed. (For the technically curious, a minimal sketch of what backup encryption can look like appears after this list.)
    • The internet-facing server was accessible due to a known and publicized vulnerability. The Common Vulnerabilities and Exposures website listed the vulnerability approximately seven years before it was exploited, said the ICO.
    • The administrator console was publicly accessible via the internet. This was done to facilitate vendor access without a risk assessment, according to the ICO, which said the company ought to have used a VPN to enable vendor access.
    • System A was hosted on an operating system that was (and is) no longer supported. The ICO noted that the company neither replaced the system nor purchased extended support.
    • Cathay Pacific could not provide evidence of adequate server hardening.
    • Network users were permitted to authenticate past the VPN without multi-factor authentication. The ICO noted that this allowed the attackers to misuse stolen credentials (drawn from a user base of roughly 41,000).
    • The anti-virus protection was inadequate. This was apparently due to operating system compatibility problems (on an operating system other than the legacy system hosting System A).
    • Patch management was inadequate. Logs were missing on some systems, the ICO said. It also noted that one server was missing 16 updates that resolved publicly known vulnerabilities, 12 of which were described as “easily exploitable.”
    • Forensic evidence was no longer available during the Commissioner’s investigation. The ICO said that server images analyzed in the post-incident investigation were not retained, so they could not be provided to the ICO.
    • Accounts were given inappropriate privileges. “Day-to-day” user accounts were given administrator privileges, according to the ICO.
    • Penetration testing was inadequate. The ICO said three years without penetration testing was inadequate given the quantity and nature of the information at issue, which included passport numbers.
    • Retention periods were too long. It appears (though it is not clear) that transaction data was retained indefinitely and that user data was purged only after seven years of inactivity.
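
To make the first failure concrete, here is a minimal Python sketch of encrypting a database backup before it is stored, assuming the widely used cryptography package. The file names and key handling are hypothetical placeholders, not a description of Cathay Pacific’s environment.

    # Illustrative only: encrypt a backup with an authenticated symmetric cipher.
    from cryptography.fernet import Fernet

    def encrypt_backup(plaintext_path, ciphertext_path, key):
        # Fernet pairs AES-CBC with an HMAC, so tampering is detectable on restore.
        with open(plaintext_path, "rb") as f:
            data = f.read()
        with open(ciphertext_path, "wb") as f:
            f.write(Fernet(key).encrypt(data))

    # In a real deployment the key would come from a key management service,
    # not be generated and then kept alongside the backup it protects.
    key = Fernet.generate_key()
    encrypt_backup("nightly.dump", "nightly.dump.enc", key)

The control itself is cheap and well understood; the hard part is key management, which is presumably what the company’s approval and risk mitigation requirement was meant to address.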

£500,000 is the maximum fine available under the Act. The ICO said it was warranted, in part, because the failures related to “fundamental principles.” The failure to retain evidence was another notable factor.

Good quotes on the impossibility of “ensuring” security and achieving zero risk

I blogged about Arbitrator Surdykowski’s decision in Providence Health when it was released in 2011 for its ratio – employers are entitled to more than a bare medical certification when an employee is absent from work.

I had occasion to use the case in a matter I argued yesterday, and was pleasantly surprised on re-reading to see what Arbitrator Surdykowski said about the impossibility of “ensuring” data security. The union had made an argument for minimizing the collection of health information that rested on data security risk, to which Mr. Surdykowski replied:

I agree with the Union’s assertion that there is always a possibility that private and confidential medical information may be inadvertently released or used inappropriately.  Try as they might, it is impossible for anyone to absolutely guarantee information security.  All that anyone can do in that respect is the best they can.  There is nothing before me that suggests the extent to which the inadvertent (or intentional) release or misuse of confidential information occurs, either generally or at the workplaces operated by this Employer.  More specifically, there is no indication of how often it happens, if at all, or that best efforts are demonstrably “not good enough”.

In a perfect world, the security and proper use of confidential private medical (or other) information could and would be guaranteed.  But to be perfect the world would have to be populated by perfect human beings.

This is a nice quote to bring forward in this blog, of course, because it’s always good to remind ourselves (and others) that the mere happening of a security incident doesn’t mean fault!

It’s a hard point to argue when hindsight bears heavily on a decision-maker, but it is indisputable. I once defended an employer on a charge that followed a rather serious industrial accident in which an employee at a truck dealership was run over by a tractor. The Court of Appeal held that the tractor wasn’t a “vehicle” for the purposes of the Occupational Health and Safety Act and entered an acquittal. In examining the context for this finding, Justice Cronk made the same point as Arbitrator Surdykowski:

That said, consideration of the protective purposes of the legislative scheme is not the only consideration when attempting to ascertain the scope of s. 56 of the Regulation. The Act seeks to achieve “a reasonable level of protection” (emphasis added) for workers in the workplace. For obvious reasons, neither the Act nor the Regulation mandate or seek to achieve the impossible — entirely risk-free work environments.

Every security incident is an opportunity to tell a story about pre-incident due diligence that highlights this basic truth. (If your defence rests on our horrendously vague privacy law, you’re in trouble, I say.) It’s also reason to hold our tongues and not judge organizations that are victimized, at least before learning ALL the facts. Security incidents are complex. Data security is hard.

Threat Exchanges and FOI Legislation

Here’s the second paper that relates to the panel I will be sitting on later this week. It is a collection of FOI case digests about the hacking threat with a covering thesis about the need for greater protection from disclosure. This one particularly caught my interest and includes some ideas I will come back to. Enjoy, and again, please send comments by PM.

Cybersecurity and data loss (short presentation)

Here’s a 10-minute presentation I gave to the firm yesterday that puts some trends in context and addresses recent breach notification amendments.

CORRECTION. I made a point in this presentation that the Bill 119 amendments to PHIPA remove a requirement to notify of unauthorized “access” – a positive change given the statute does not include a harms-related threshold for notification. Section 1(2) of the Bill, I have now noticed, amends the definition of “use” as follows: “The definition of ‘use’ in section 2 of the Act is amended by striking out ‘means to handle or deal with the information’ and substituting ‘means to view, handle or otherwise deal with the information.’” Because unauthorized viewing will now count as a “use,” the removal of “access” from the breach notification provision will therefore not effect a change.