Artificial intelligence and policing: a matter of trust

The prospect of increased police use of artificial intelligence (AI), especially around predictive policing, has raised concerns about potential bias and the need for transparency and explainability.

In an article published in Policing Insight, Dr Nick Evans of the University of Tasmania (Australia) argues that, with the right safeguards, the use of AI could build objectivity into policing decisions and, potentially, greater confidence in those decisions.

Although predictive policing applications raise the thorniest ethical and legal issues and thus deserve serious consideration, it is also important to highlight other applications of AI for policing.

Teagan Westendorf’s report for ASPI (the Australian Strategic Policy Institute), ‘Artificial Intelligence and Policing in Australia’, is a recent example. Westendorf argues that Australian government policies and regulatory frameworks do not sufficiently capture the current limitations of AI technology, and that those limitations may compromise the principles of safe, ethical and explainable AI in the context of policing.

AI can help investigations by speeding up the transcription of interviews and analysis of CCTV footage. Image-recognition algorithms can also help detect and process child exploitation material and thus help limit human exposure.

Like all humans, police officers may hold conscious and unconscious biases that can influence policing decisions and outcomes. Predictive policing algorithms, however, often must be trained on historical data sets that record the results of those decisions, and so risk learning and reproducing the same biases.
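As a minimal sketch of how this can happen, the toy example below (entirely synthetic data and hypothetical numbers, not drawn from any real policing system) trains a simple classifier on recorded arrests generated under unequal patrol intensity; the model ends up scoring the more heavily patrolled area as ‘riskier’ even though the underlying offence rate is identical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical set-up: two areas with the SAME underlying offence rate,
# but area 1 was historically patrolled more heavily, so more of its
# offences were recorded as arrests.
rng = np.random.default_rng(0)
n = 20_000
area = rng.integers(0, 2, size=n)             # the only feature: 0 or 1
offence = rng.random(n) < 0.10                # true rate: 10% in both areas
detection = np.where(area == 1, 0.80, 0.40)   # unequal patrol intensity
arrested = offence & (rng.random(n) < detection)

# Train on recorded arrests -- typically the only data available.
model = LogisticRegression().fit(area.reshape(-1, 1), arrested)

for a in (0, 1):
    p = model.predict_proba([[a]])[0, 1]
    print(f"predicted 'risk' for area {a}: {p:.3f}")
# Prints roughly 0.04 vs 0.08: the model rates area 1 as about twice as
# 'risky', although it only differs in how intensively it was policed.
```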

A key advantage of AI lies in its ability to analyse large data sets and detect relationships too subtle for the human mind to identify. Making models more understandable by simplifying them can therefore require trade-offs in sensitivity, and thus in accuracy.
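To make that trade-off concrete, here is a generic scikit-learn sketch on synthetic data (not tied to any policing application): a deliberately shallow decision tree that can be printed and audited in a few lines, compared with an unconstrained tree that is typically more accurate but much harder to explain.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a large data set with subtle structure.
X, y = make_classification(n_samples=5_000, n_features=20,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A simplified, explainable model versus a more flexible one.
shallow = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)
deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

print(f"shallow tree accuracy: {shallow.score(X_te, y_te):.3f}")
print(f"deep tree accuracy:    {deep.score(X_te, y_te):.3f}")

# The whole shallow tree fits on screen and can be explained rule by rule;
# the deeper, typically more accurate tree has far too many branches for that.
print(export_text(shallow))
```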

Research suggests that when individuals trust a decision-making process in justice settings, they are more likely to accept its outcomes, even when those outcomes are unfavourable.

As Westendorf highlights, steps can be taken to mitigate bias, such as pre-emptively coding against predictable biases and keeping human analysts involved in building and deploying AI systems.
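Westendorf’s report does not prescribe a specific technique, but as one hypothetical illustration of both safeguards, the sketch below reweights training samples so that an over-represented group does not dominate the fit, and routes borderline scores to a human analyst instead of acting on them automatically. The function names, thresholds and data here are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_with_group_reweighting(X, y, group):
    """Fit a classifier with inverse-frequency sample weights per group --
    a simple counterweight to groups over-represented in historical data."""
    counts = np.bincount(group)
    weights = (len(group) / (len(counts) * counts))[group]
    return LogisticRegression().fit(X, y, sample_weight=weights)

def route_score(prob, low=0.3, high=0.7):
    """Keep a human analyst in the loop: act automatically only on
    confident scores and send the grey zone for manual review."""
    return "auto" if prob < low or prob > high else "human_review"

# Hypothetical synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 3))
group = rng.integers(0, 2, size=1_000)        # illustrative group labels
y = X[:, 0] + 0.5 * rng.normal(size=1_000) > 0

model = fit_with_group_reweighting(X, y, group)
for p in model.predict_proba(X[:5])[:, 1]:
    print(f"{p:.2f} -> {route_score(p)}")
```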

Recent research has also found a correlation between people’s level of trust in the police (which is relatively high in Australia) and their acceptance of changes in the tools and technologies that police use.

With these types of safeguards in place, together with reviews and evaluations of deployments, the use of AI could build objectivity into policing decisions and reduce reliance on heuristics and other subjective decision-making practices. Over time, this may help improve policing outcomes.

However, the need for explainability is only one consideration for improving accountability and public trust in police use of AI systems, especially when it comes to predictive policing.

In another study, participants shown reportedly successful police applications of AI technology were more likely to support broader police use of these technologies than participants shown unsuccessful applications, or no examples at all.

This suggests that cultivating broader public trust in the police will be essential to sustaining confidence in police use of AI, whatever the degree of algorithmic transparency and explainability. The pursuit of transparent and explainable AI should not lose sight of this wider context.
