Tinker-ing with Machine Learning: The Legality and Consequences of Online Surveillance of Students

I’ve had a long-standing interest in threat assessment and its application by educational institutions in managing the risk of catastrophic physical violence, though it has been a good ten years since the major advances in Canadian institutional policy. Here is a pointer to a journal article about an apparently new trend in the United States: automated monitoring of online and social media posts for threat assessment purposes.

Author Amy B. Cyphert starts with an illustrative scenario that’s worth quoting in full:

In 2011, a seventeen-year-old named Mishka, angry that his friends had recently been jumped in a fight, penned a Facebook post full of violence, including saying that his high school was “asking for a [expletive] shooting, or something.” Friends saw the post and alerted school officials, who contacted the police. By the time psychologist Dr. John Van Dreal, who ran the Safety and Risk Management Program for Mishka’s Oregon public school system, arrived, Mishka was in handcuffs. Mishka and his classmates were lucky: their school system employed a risk management program, and Dr. Van Dreal was able to help talk with Mishka about what caused him to write the post. Realizing that Mishka had no intention of harming anyone, Dr. Van Dreal helped Mishka avoid being charged with a criminal offense. Dr. Van Dreal also arranged for him to attend a smaller school, where he found mentors, graduated on time, and is today a twenty-five-year-old working for a security firm.

Had Mishka’s story happened today, just eight short years later, it might have looked very different. First, instead of his friends noticing his troubled Facebook post and alerting his school, it might have been flagged by a machine learning algorithm developed by a software company that Mishka’s school paid tens of thousands of dollars to per year. Although Mishka’s post was clearly alarming and made obvious mention of possible violence, a post flagged by the algorithm might be seemingly innocuous and yet still contain terms or features that the algorithm had determined are statistically correlated with a higher likelihood of violence. An alert would be sent to school officials, though the algorithm would not necessarily explain what features about the post triggered it. Dr. Van Dreal and the risk management program? They might have been cut in order to pay for the third-party monitoring conducted by the software company. A school official would be left to decide whether Mishka’s post warranted some form of school discipline, or even a referral to the authorities.
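To make the mechanics concrete, here is a minimal, purely hypothetical sketch of the kind of flagging system the quoted scenario imagines: a classifier trained on labelled posts that raises an alert when a score crosses a threshold, without explaining which features drove the flag. Everything in it (the training posts, the scikit-learn model choice, the `review_post` helper, the 0.5 cutoff) is my own invention for illustration, not Cyphert’s description of any actual vendor’s product.

```python
# Illustrative sketch only: all data, model choices, and thresholds here
# are invented; no real vendor's system is being described.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled training posts (1 = associated with violence).
train_posts = [
    "asking for a shooting or something",            # label 1
    "going to make them pay for this",               # label 1
    "great game last night, see you at practice",    # label 0
    "studying for the chemistry final all weekend",  # label 0
]
train_labels = [1, 1, 0, 0]

# Bag-of-words features: the model learns term weights that are merely
# statistically correlated with the violence label in the training data.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_posts)

model = LogisticRegression()
model.fit(X, train_labels)

FLAG_THRESHOLD = 0.5  # invented cutoff; a vendor would tune this

def review_post(post: str) -> None:
    """Alert on a post whose predicted probability exceeds the threshold.

    Note what the alert contains: a score, not a reason. The school
    official sees that the post was flagged, not which terms drove it.
    """
    score = model.predict_proba(vectorizer.transform([post]))[0, 1]
    if score >= FLAG_THRESHOLD:
        print(f"ALERT (score {score:.2f}): {post!r}")
    else:
        print(f"ok    (score {score:.2f}): {post!r}")

# A seemingly innocuous post can still score high if it happens to share
# vocabulary with the violent training examples.
review_post("they were asking for it")
review_post("see you at the game")
```

The point of the sketch is the output, not the model: what reaches the school official is an alert and a number, and any explanation of why the post was flagged has to be reconstructed after the fact, if it can be reconstructed at all.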

Cyphert raises good questions about the problem of bias associated with algorithmic identification and about the impact of monitoring and identification on student expression, privacy and equality rights.

My views are quite simple.

I set aside algorithmic bias as a fundamental concern because the baseline (traditional threat assessment) is not free of its own problems of bias; technology could, at least in theory, lead to fairer and more accurate assessments.

My main concern is instead efficacy. Nobody disputes that schools and higher education institutions should passively receive threat reports from community members. My questions: Has this accepted form of surveillance failed? What is the risk that passive surveillance will fail? How will it fail, and to what degree? Does that risk call for a more aggressive, active monitoring solution? Is there an active monitoring solution that is likely to be effective, accounting for concerns about bias?

If active internet monitoring cannot be shown to be reasonably necessary, however serious the problem of catastrophic physical violence, I question whether it can be either legally justifiable or required to meet the standard of care. Canadian schools and institutions that adopt new threat surveillance technology because it may be of benefit, without asking the critical questions above, may invite a new standard of care with tenuous underpinnings.

Cyphert, Amy B. (2020). “Tinker-ing with Machine Learning: The Legality and Consequences of Online Surveillance of Students.” Nevada Law Journal: Vol. 20, Iss. 2, Article 4.
Available at: https://scholars.law.unlv.edu/nlj/vol20/iss2/4