Could a computer predict violence? In Chicago, Illinois, an algorithm assigns everyone arrested by the police a score between 1 and 500. The process has been running for four years, and almost 400,000 Chicago residents now have an official police risk score.
This algorithm, questioned by Columbia University law professor Andrew Guthrie Ferguson (its methodology has still not been made public), is part of a broader police strategy and may change how street investigations are conducted. It may also shape the future of big data policing in the US, depending on whether it is perceived as an innovative approach to reducing violence or as an example of data-driven social control.
In effect, the personalised threat score is displayed automatically on police computer screens so that officers can gauge the relative risk involved in stopping a suspect. The predictive score also determines who proactive police interventions are aimed at. These interventions may range from a home visit by police officers to extra surveillance or a community meeting, and all of them convey the same clear message: the police are watching you.
And while Chicago is at the forefront of predictive policing, it is not the only city. Others, such as New York and Los Angeles, are exploring ways to use police big data to guide interventions targeting high-risk individuals.
Person-based predictive policing began in 2009 as an attempt to apply a public health approach to violence. The key is to identify predictive risk factors and try to address the underlying environmental causes. Chicago researchers developed an algorithm so that the police could prioritise the individuals with the highest risk scores by analysing: past arrests for violent crime; weapon- or narcotics-related offences; age at most recent arrest (the younger, the higher the score); incidents in which the individual was the victim of an assault; and the trend of criminal activity (whether it is increasing or decreasing). A computer then weighs these variables and produces a relative threat score reflecting the estimated probability of involvement in gun violence.
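The actual methodology has not been published, but the description above can be illustrated with a rough sketch. Everything in the snippet below, including the variable names, weights and scaling, is a hypothetical assumption made for illustration; it is not the Chicago Police Department's model.

```python
from dataclasses import dataclass

@dataclass
class ArrestHistory:
    violent_arrests: int            # past arrests for violent crime
    weapon_narcotics_arrests: int   # weapon- or narcotics-related offences
    age_at_last_arrest: int         # the younger, the higher the score
    times_assaulted: int            # incidents as a victim of assault
    activity_trend: float           # >0 increasing, <0 decreasing activity

# Hypothetical weights: the real model and its weights have never been disclosed.
WEIGHTS = {
    "violent_arrests": 60.0,
    "weapon_narcotics_arrests": 40.0,
    "youth_penalty": 4.0,   # points per year under 30 at the last arrest
    "victimisation": 50.0,
    "trend": 30.0,
}

def threat_score(h: ArrestHistory) -> int:
    """Combine the variables into a single relative threat score (1-500)."""
    raw = (
        WEIGHTS["violent_arrests"] * h.violent_arrests
        + WEIGHTS["weapon_narcotics_arrests"] * h.weapon_narcotics_arrests
        + WEIGHTS["youth_penalty"] * max(0, 30 - h.age_at_last_arrest)
        + WEIGHTS["victimisation"] * h.times_assaulted
        + WEIGHTS["trend"] * h.activity_trend
    )
    # Clamp to the 1-500 range reported for the Chicago scores.
    return int(min(500, max(1, raw)))

print(threat_score(ArrestHistory(2, 1, 19, 1, 1.0)))   # a relatively high score
print(threat_score(ArrestHistory(0, 0, 45, 0, -1.0)))  # a relatively low score
```

The point of the sketch is simply that the score is a weighted combination of arrest and victimisation history: whoever chooses the weights decides what "high risk" means.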
The police defend the targeting system by pointing to the high percentage of gunshot victims it had predicted. Critics counter that the targeting is overbroad and ineffective, since it includes tens of thousands of people with high scores but no prior arrest for violent crime.
What is worrying is that the threat score affects the fairness of interactions between the police and people on the street. High risk scores guide violence-reduction strategies, influencing who the police contact and who is placed under the closest surveillance. But threat scores also distort everyday police decisions about the use of force and reasonable suspicion. After all, once officers know that a person has a high threat score, that knowledge heightens suspicion and perceived danger, leading to more frequent and more aggressive interactions with the people the algorithm labels "high risk".
Bias can also undermine the system itself. As the 2017 investigation of the Chicago Police Department by the Civil Rights Division of the Department of Justice described, patterns of racial discrimination remain a real problem. While one might expect an algorithm to avoid human bias, the reality is that its inputs (especially arrests) are shaped by the discretionary decisions of individual officers as they patrol or investigate suspects. So although the mathematics of big data may be "objective", the inputs are not free of human bias, and that bias distorts the final results.
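One way to see the mechanism: if a neighbourhood is patrolled more heavily, the same underlying behaviour produces more recorded arrests there, and the score only ever sees the recorded arrests. The figures below are invented purely to illustrate this feedback, not drawn from any real data.

```python
# Hypothetical illustration: two people with identical behaviour, but person B
# lives in a neighbourhood patrolled twice as heavily, so more of B's offences
# are recorded as arrests -- and only recorded arrests feed the score.
underlying_incidents = 4            # same true behaviour for both people
arrest_rate_light_patrol = 0.25     # invented probabilities, for illustration
arrest_rate_heavy_patrol = 0.50

recorded_a = round(underlying_incidents * arrest_rate_light_patrol)  # 1 arrest
recorded_b = round(underlying_incidents * arrest_rate_heavy_patrol)  # 2 arrests

points_per_arrest = 60              # hypothetical weight, as in the sketch above
print("Score contribution A:", recorded_a * points_per_arrest)   # 60
print("Score contribution B:", recorded_b * points_per_arrest)   # 120
```

Identical behaviour, twice the score contribution: the "objective" arithmetic faithfully reproduces whatever unevenness exists in the arrest data it is fed.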