United States activates dedicated mental health crisis hotline

Although the Suicide Prevention Lifeline has long been a valuable resource, remembering a 10-digit phone number is not easy, especially during a crisis. Consequently, many people dial the 911 emergency line with calls for help that should have been directed to mental health specialists.

As reported on a U.S. Department of Justice website, the rollout of the new telephone and emergency service may be a breath of fresh air for people experiencing mental health crises or living with mental illness.

There is now an easy-to-remember three-digit number, 988, that people can call, text or chat with for confidential access to mental health specialists 24 hours a day.

The 988 Suicide and Crisis Lifeline has become a much-needed resource, not only for at-risk individuals but also for police departments overwhelmed by a growing number of mental health-related calls. It is currently estimated that, in some departments, 911 calls with a mental health component account for more than 30% of total calls for police service.

This service has been put in place because many people calling 911 with mental health emergencies ended up under arrest, in jail, or stuck in hospital emergency departments waiting hours or even days for care, often only to end up back on the street, in jail or in the hospital.

In parallel with the launch of the 988 service, SAMHSA has published a guide, 988 Suicide and Crisis Lifeline, with suggestions for successfully implementing the new line:

  • Develop cross-system partnerships that connect mental health and other health, police and fire professionals with the agency that manages the call centre and the services that can be dispatched.
  • Engage key stakeholders, including government and community leaders.
  • Ensure that the community has the resources and infrastructure to help patients. SAMHSA’s National Guidelines for Behavioral Health Crisis Care can be used to identify what crisis services exist at the local, regional, or state level.
  • Review policies, procedures, and training materials to ensure that 988 is effectively incorporated into their crisis responses.
  • Take steps to ensure that calls can be seamlessly transferred between 988 and 911 and to responding officers, so that a person in crisis can be connected to 988 services when requested.
  • Promote 988. Educate the public, as well as local law enforcement officers, on how 988 operates as it is deployed.
  • Coordinate with federal stakeholders to ensure that the department and the community have the most up-to-date information on what is available in each state.


Aquest apunt en català / Esta entrada en español / Post en français

Identifying lies to improve security

A group of researchers at RAND Corporation published a report in which they explain that they discovered that machine learning (ML) models can identify signs of deception during national security background check interviews. The most accurate approach to detecting deception is an ML model that counts the number of times respondents use common words.

The researchers’ experiment worked as follows:

  • The 103 participants read a story about how, in 2013, Edward Snowden leaked classified information from the National Security Agency.
  • Participants were randomly assigned to read the same story, but it was presented either as a news report or as a memo with markings indicating that it contained confidential information.
  • Participants were assigned to one of two groups in order to be interviewed. One group was told to lie about what they had read and the other to tell the truth.
  • Former law enforcement officers interviewed participants via videoconference and text-based chat, in random order.

The RAND researchers used the interview and chat transcripts to train different ML models to see if these could distinguish liars from truth-tellers.
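Purely as an illustration of this kind of feature extraction (the word list, example text and any downstream classifier here are invented, not RAND’s actual model), a transcript can be reduced to common-word frequencies like this:

```python
from collections import Counter
import re

# Hypothetical list of common function words; the most accurate RAND model
# counted how often respondents used common words like these.
COMMON_WORDS = ["i", "the", "and", "a", "to", "it", "that", "was"]

def common_word_rates(transcript: str) -> dict:
    """Map each common word to its frequency per 100 tokens of the transcript."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    total = max(len(tokens), 1)
    counts = Counter(tokens)
    return {word: 100.0 * counts[word] / total for word in COMMON_WORDS}

# Each transcript becomes a fixed-length numeric vector that, together with
# the truth-teller/liar labels from the experiment, could train a classifier.
features = common_word_rates("I read the memo and I think it was about the leak.")
```

A model trained on such vectors could then, for instance, pick up the reported pattern that men used the word “I” less often when lying.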

These scholars reached three major conclusions:

  • It is not just what one says, but how one says it: word frequency, speech cadence, word choice and other linguistic signals can indicate potential lies.
  • ML models can detect signs of deception in the way people express themselves, even in text-based chats without the presence of a human interviewer.
  • The models are tools that can add to existing interviewing techniques, but they cannot completely replace these techniques.

In terms of the implications this may have for security, the researchers highlight the following:

  • Men conduct many of the background investigations for security clearances, while at least a quarter of security clearance applicants are women. It is important to understand how the gender of the interviewer might affect the modelling results.
  • Inappropriate use of ML tools could lead to inequities in the acceptance and rejection rates of security clearance applicants.
  • Due to potential biases in ML model results and in humans, it is important to maintain a system of checks and balances that includes both humans and machines.
  • The models found that men and women used different words to deceive. Men were less likely to use the word “I” when lying and more likely to use it when telling the truth.



Working to reduce deaths in police custody in the U.S.

rand.org, the RAND Corporation’s research website covering security-related fields, has published a study by a group of researchers whose goal is to decrease the number of deaths occurring in U.S. law enforcement custody.

The U.S. researchers – Duren Banks, Michael G. Planty, Madison Fann, Lynn Langton, Dulani Woods, Michael J. D. Vermeer, and Brian A. Jackson – set out to identify high-priority needs for the U.S. criminal justice system, starting with some important questions:

  • What are the different definitions and metrics of deaths occurring in law enforcement custody?
  • What barriers or facilitators affect the communication of this information at the state or national level?
  • What information about deaths taking place in police custody is crucial for supporting policies and practices that aim to reduce these deaths?

In 2013, the U.S. Congress enacted the Death in Custody Reporting Act (DCRA) to tackle the lack of reliable information on law enforcement-related deaths in correctional facilities.

The U.S. Department of Justice has undertaken several activities designed to respond to the provisions specified in the DCRA legislation, as well as its own federal mandates, for a comprehensive understanding of the prevalence and characteristics of deaths taking place in police custody. In spite of these efforts, at present no national data collection program represents all deaths occurring in law enforcement custody. These data are fundamental for supporting strategies to bring down the number of these deaths: promoting public safety through suitable responses to reported crimes, calls for service and police-community encounters, and building trust with communities.

In assessing the needs around developing and leveraging a national data collection on law enforcement-related deaths, the researchers concluded that limiting the scope of data collection to fatal incidents would be insufficient to understand and reduce deaths in law enforcement custody.

Among the recommendations of the study’s authors are:

  • Specify national standards for a more inclusive collection that encapsulates all critical incidents (fatal incidents and all those in which police use lethal force), regardless of whether the incident results in a death.
  • Support more trustworthy and comprehensive reporting in existing systems that depend on law enforcement participation by allocating resources to data providers, leveraging information previously collected by these agencies, and otherwise incentivizing participation.
  • Work with the research community, law enforcement and other relevant stakeholders to build appropriate indicators and toolkits and spread information on the appropriate and responsible use of these data.
  • Create a taxonomy of deaths or critical incidents taking place in the custody of law enforcement to provide the context necessary to understand the role of law enforcement.



Dispersal of homeless people criminalises them

A research study carried out by several criminologists in ten towns in England and Wales finds that orders banning people from public spaces only end up recycling the problem of homelessness. Several English newspapers, including The Guardian, echoed the news.

Councils using Public Space Protection Orders (PSPOs) to impose £100 fines aimed at controlling so-called ‘anti-social behaviour’ achieve nothing more than causing homeless people to return to the same spaces time and time again.

The study has found that the dispersal of homeless people from city centres fails to stop this antisocial behaviour and instead causes a wrongful criminalisation of these people.

The research, carried out by Sheffield Hallam University and concluding with recommendations for fairer treatment of people living on the streets, has been endorsed by Crisis, the homelessness charity. Councils in England and Wales that use PSPOs to impose £100 fines to control or prohibit behaviour such as drinking, pitching tents or sleeping in public spaces are finding that the fines do not resolve the situation of people living on the streets.

Orders are also misused to target behaviour that might not be considered antisocial, such as begging or sleeping rough, where an adverse effect is unlikely. What is more, in some cases homeless people have described the physical and verbal abuse they have received from police officers.

PSPOs have been used in England since 2014 with the aim of deterring behaviour deemed anti-social, but the focus on their impact on rough sleepers comes amid rising homelessness caused by an increase in evictions. Nearly 20,000 homes in England and Wales were left empty due to evictions during the 2021/22 period, almost 9,000 more than the previous period, according to annual figures released by the Department for Levelling Up, Housing and Communities.

In a seaside town in the east of England, for example, where begging, drug use, street drinking, urination and defecation, sleeping in public places and pitching tents are prohibited, locals and tourists can often be seen strolling around while eating or drinking, and the legislation is not applied to them.

A spokesman for the National Police Chiefs Council believes that recent joint work with Crisis has helped ensure that officers are able to understand why people end up sleeping rough, what support they need and, most importantly, what can be done to help them escape homelessness.

Cllr Nesil Caliskan from the Local Government Association’s safer and stronger communities board believes that PSPOs should be used as part of a broader set of measures that tie in with support services to help address the intrinsic causes of homelessness.

In other words, the dispersal powers associated with PSPOs have created vicious cycles of intimidation, dispersal and displacement that only recycle the problem of people living on the street rather than deterring, let alone preventing, the problems associated with homelessness. This would be one of the main conclusions of the study according to Peter Squires, Emeritus Professor of Criminology and Public Policy at the University of Brighton.



Artificial intelligence and policing: a matter of trust

The prospect of increased police use of artificial intelligence (AI), especially around predictive policing, has raised concerns about potential bias and the need for transparency and explainability.

Dr. Nick Evans of the University of Tasmania (Australia) has published an article in Policing Insight in which he explains that, with the right safeguards, the use of AI could establish built-in objectivity for policing decisions and, potentially, greater confidence in those decisions.

Although predictive policing applications raise the thorniest ethical and legal issues and thus deserve serious consideration, it is also important to highlight other applications of AI for policing.

Teagan Westendorf’s ASPI report, ‘Artificial Intelligence and Policing in Australia’, is a recent example. Westendorf claims that Australian government policies and regulatory frameworks do not sufficiently capture the current limitations of AI technology and that these limitations may compromise principles of safe and explainable AI and ethics in the context of policing.

AI can help investigations by speeding up the transcription of interviews and analysis of CCTV footage. Image-recognition algorithms can also help detect and process child exploitation material and thus help limit human exposure.

Like all humans, police officers may have conscious and unconscious biases that can influence policing decisions and outcomes. Predictive policing algorithms must often be trained on data sets that reflect these biased outcomes.

A key advantage of AI lies in its ability to analyse large data sets and detect relationships too subtle for the human mind to identify. However, making models more understandable by simplifying them may require trade-offs in sensitivity, and therefore also in accuracy.

In fact, research suggests that when individuals trust the decision-making process, there is a higher likelihood that they will trust the outcomes in justice settings, even if these outcomes are unfavourable.

As Westendorf highlights, steps can be taken to mitigate bias, such as pre-emptively coding against predictable biases and involving human analysts in the processes of building and leveraging AI systems.

Recent research has found that there is a correlation between people’s level of trust in the police (which is relatively high in Australia) and their level of acceptance of changes in the tools and technology that the police use.

With these types of safeguards in place (as well as deployment reviews and evaluations), the use of AI may lead to establishing built-in objectivity for policing decisions and reducing reliance on heuristics and other subjective decision-making practices. Over time, the use of AI may help improve police outcomes.

However, the need for explainability is only one consideration for improving accountability and public trust in police use of AI systems, especially when it comes to predictive policing.

In another study, participants exposed to allegedly successful police applications of AI technology were more likely to support broader police use of these technologies than those exposed to unsuccessful uses or not exposed to examples of AI application.

This suggests that focusing on broader public trust in the police will be essential in order to maintain public trust and confidence in the use of AI in policing, regardless of the degree of algorithmic transparency and explainability. The goal of transparent and explainable AI should not ignore this broader context.



Police Scotland urged to develop protocols for the use of body-worn video cameras

Research commissioned by Police Scotland finds that there is widespread public support for officers to wear body cameras when required for all types of incidents, but also warns of the pitfalls that should be avoided.

A team from the Centre for Research into Information, Surveillance and Privacy (CRISP) at the University of Stirling has produced a report on the use of body-worn video (BWV) cameras, based on a literature review and semi-structured interviews with experts on these types of cameras.

The researchers emphasised that, before introducing BWVs, Police Scotland must ensure that there are effective governance and control processes in place, especially with respect to data management.

Professor William Webster of the University of Stirling’s School of Management, who led the work on the report, believes that body-worn video cameras seem like a simple concept, a camera that officers carry everywhere, but that how they are used affects a complicated set of relationships, starting with the relationship between the citizen and the state. It is important to understand the consequences of their use and how the technology shapes behaviour, to ensure that these cameras serve the interests of society and not just those of the police.

Webster believes that the police like the use of BWVs because they offer protection during police interventions, even more so if these involve risk, as they can help, for example, to de-escalate violence, while also being able to collect evidence in the event of a trial. However, they also put the police under surveillance, as there have been cases in other police forces of law enforcement officers being recorded smoking while on duty or talking on their mobile phones while driving. This also has to be taken into account, given that trust in technology and the police can easily be lost. Clear protocols and training on the use of BWVs should therefore be established.

The research also considers that it should be clear who manages the recordings, given that they are sensitive data of citizens. Along these lines, it asks whether officers should download the recordings at the end of shifts, where they should be downloaded to and where they should be stored, who can access them and what circumstances justify keeping these recordings.

Therefore, it is concluded that there is a need for a monitoring mechanism where recordings are randomly checked, possibly with lay persons, to control how BWVs are used.

He also stresses the importance of organisations such as the police continuing to consult with citizens and academics on the introduction of new technologies.

In this regard, the Chief Superintendent of Police Scotland, Matt Richards, was in favour of introducing BWVs in Scotland’s police force.

Institutionally, it is considered that the introduction of BWVs requires significant financial investment, but has the potential to enhance the vital bond of trust between police and citizens, which underpins their legitimacy.

What researchers and police agree on is that the deployment of BWVs should be ethical and transparent and should be supported and guided by ethical, human rights and civil liberties considerations.



Evaluating Aerial Systems for Crime-Scene Reconstruction

Security professionals are seizing the opportunities offered by ICTs to modify existing strategies and create innovative ones for crime prevention and response, achieving greater effectiveness and efficiency. New remote sensing technology mounted on drones offers the possibility of improving crime scene reconstruction alongside the traditional model, i.e., the physical inspection performed by an agent at the crime scene.

In order to analyse the characteristics and differences between the aerial system (drone) and the terrestrial system (laser scanning), simulations of three different outdoor scenarios envisaged in the 2021 report “Evaluating Aerial Systems for Crime-Scene Reconstruction” [1] by the National Institute of Justice were conducted at the Crisis City Training Center near Salina, Kansas: (1) an urban scene recreating a carjacking and shooting including broken glass, bullet casings and pools of blood; (2) a forest where a suicide has taken place, with empty alcohol containers and narcotics; and (3) an open field with a clandestine grave, a shovel, a cell phone and clothing.

With the results of these simulations, the differences that became apparent were, on the one hand, in favour of the aerial system (and opposed to terrestrial laser scanning): (1) it does not require forensic personnel to walk through the crime scene, risking contamination and/or destruction of evidence and bodily harm from hazardous environments; (2) it allows faster data capturing of the entire crime scene; (3) it is cheaper, costing about $15,000 (the conventional terrestrial laser comes in at about $75,000); and (4) by capturing information from above, there are no blind spots, unlike in laser scanning, where blind spots occur if there are insufficient scanning positions or obstacles.

On the other hand, laser scanning delivers higher image accuracy, with an error level of about 1 mm, and it preserves image quality in the dark and regardless of environmental conditions. The drone’s error level was about 1 cm and, moreover, in open spaces such as the forest, the height at which the drone had to fly (to avoid hitting the treetops) reduced image quality. Atmospheric variables (cloud cover, temperature, wind, precipitation…) must also be taken into account, as they condition the efficiency of the aerial system.

Therefore, when reconstructing 3D images of the (simulated) crime scene, maximum efficiency is achieved by combining terrestrial and aerial scanning, as the combination of both systems allows faster data capture of the entire crime scene while maintaining a higher level of accuracy. The procedure is a non-intrusive technique that helps investigators prevent scene contamination, and the results can help officers, lawyers and judges “walk through” the scene at any time, even years later, as well as verify details such as distances and sight lines.

[1] Report: https://nij.ojp.gov/topics/articles/evaluating-aerial-systems-crime-scene-reconstruction



France installs noise radars in seven municipalities

For the first time, France is installing noise radars to monitor the noise levels emitted by vehicles in zones limited to 50 km/h. Fines for exceeding the permitted noise levels could reach 135 euros.

The new noise radars have been installed in seven municipalities. The project, which will last for two years, went into operation a few weeks ago. This initiative is the first of its kind in Europe, so it will take time to evaluate the results and draw conclusions.

When France introduced radar speed guns twenty years ago, it drastically reduced the numbers of traffic accidents, which helped save thousands of lives. The new sensors or noise radars will, for the time being, be a test. Sensors will be able to detect and record vehicles that emit excessive noise, a growing problem in recent years. The authorities’ hope is to set a noise pollution limit and fine motorists who exceed it.

The initiative follows the growing intolerance of the French to street noise, especially from motorcycles and modified scooters. According to a study by Bruitparif – a French state-supported centre that monitors noise in the Paris area – a modified scooter crossing Paris at night can wake up to 10,000 people.

According to a study recently published by the European Environment Agency, noise pollution causes around 16,000 premature deaths in Europe and 72,000 hospitalisations a year. Noise from road traffic is one of the main causes of these poor figures.

After air pollution, noise is the second environmental factor that causes the most health problems, according to the World Health Organization in a 2011 report, increasing the risk of cardiovascular disorders and high blood pressure.

According to the new decree published in the French official gazette, the radars, which are equipped with microphones and cameras to capture the license plate of the offending vehicle, must be located on the hard shoulder.

The first radars installed are located in Saint-Lambert, a town to the south-west of Paris that often forms part of the route of motorcyclists, ATV drivers and similar vehicles. According to measurements carried out by the Bruitparif agency last year, between 210 and 520 noisy two-wheelers per hour passed through the area at weekends.

More noise radars will gradually arrive in other municipalities, such as Nice or Toulouse. Although the decibel limit allowed before committing the offence has not yet been established, the first tests set a maximum of 90 dB.
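The enforcement logic being trialled can be sketched as follows. The 90 dB threshold and 135-euro fine come from the text above; the data fields and code itself are a hypothetical illustration, not the radars’ actual software:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

NOISE_LIMIT_DB = 90.0  # provisional maximum set in the first tests
FINE_EUR = 135         # estimated maximum fine

@dataclass
class NoiseEvent:
    plate: str       # licence plate captured by the radar's camera
    level_db: float  # peak level measured by the radar's microphones

def assess(event: NoiseEvent) -> Optional[Tuple[str, int]]:
    """Return a (plate, fine) record if the vehicle exceeds the limit, else None."""
    if event.level_db > NOISE_LIMIT_DB:
        return (event.plate, FINE_EUR)
    return None
```

Pairing the microphone reading with the camera’s plate capture is what lets a fine be issued automatically, as with conventional speed cameras.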

Other measures to be implemented in parallel to the noise radars will be the reduction of the speed limit and the planting of trees and various types of vegetation along the often-congested Paris ring road. Dan Lert, deputy mayor in charge of this plan, adds that emergency vehicles will be ordered to lower the volume of their sirens at night.



Comprehensive review of the College of Policing in the United Kingdom

The UK College of Policing has carried out a thorough review designed to improve leadership, standards and professionalism throughout policing, with the aim of helping the police themselves and improving public service. Three basic priorities have been established:

  • Promote professionalism, ensuring that officers and staff have access to the best continuous training development and that it is appropriately prioritised.
  • Improve leadership in officers and staff at all levels to develop their leadership skills.
  • Promote consistency, overcoming the weaknesses of the 43-force model, where it matters most to the public and to those who work in policing.

The review was launched in March 2021 by Lord Herbert of South Downs, chair of the College of Policing. Its objectives include:

  • Conduct an evaluation of the College’s role, its effectiveness and how it operates in conjunction with other policing organisations.
  • Ensure that the College is highly valued by all sectors of the police, from frontline officers and staff to chiefs and police commissioners.

Policing is becoming increasingly complex, and the culture and standards in the service are subject to increasing scrutiny.

To address these challenges, the review included extensive consultation with people of different ranks, grades and roles from across the police service to find out what they want from their College of Policing. This included individual interviews, written evidence, focus groups, visits to police forces and a survey of some 15,000 officers and other personnel.

Respondents acknowledged the College’s success in addressing problems in some critical areas of policing, such as the response to the covid-19 pandemic, and identified future challenges for policing.

Identified challenges included:

  • Lack of professional development.
  • Insufficient investment in leadership development in all ranks.
  • Lack of coordinated strategic thinking across the police force.
  • Diffusion of responsibilities at the national level.
  • Insufficient equipment to respond to the growing digital aspects of crime.

Suggested improvements to address these problems include having the College act as a national centre for police leadership, making guidance and knowledge of what works more accessible to those on the front line through a College app, and introducing a consistent new approach to personal development for everyone in policing.

With these changes, the police will be better able to respond to the challenges they face, improve community confidence, reduce crime and keep citizens safe.



5 things you should know about artificial intelligence

Although artificial intelligence has been the subject of academic research since the 1950s and has been used commercially in some industries for decades, its adoption across most sectors is still in its infancy.

The rapid adoption of this technology, coupled with the unique privacy, security and accountability issues associated with it, has created an urgent need for efforts to ensure that its use is ethical and legal.

On the specialised website Abajournal, authors Brenda Leong and Patrick Hall outline five things you should know about artificial intelligence:

1. Artificial intelligence is probabilistic, complex and dynamic. Machine learning algorithms are incredibly complex, learning billions of rules from datasets and applying those rules to arrive at an output recommendation.

2. Make transparency an actionable priority. The complexity of AI systems makes it difficult to ensure transparency, but organisations implementing AI can be held accountable if they are unable to provide certain information about their decision-making process.

3. Bias is a significant problem, but not the only one. AI systems learn by analysing billions of data points collected from the real world. This data can be numeric, categorical – such as gender and education level – or image-based, such as photos or videos. Because most systems are trained using the data generated by existing human systems, the biases that permeate our culture also permeate the data. Thus, there can be no such thing as an unbiased AI system.

Data privacy, information security, product liability and third-party sharing, as well as performance and transparency issues, are equally critical.
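The point about inherited bias can be shown with a toy sketch (the groups and decisions below are entirely invented): a “model” that simply learns historical approval rates reproduces whatever disparity the data contains.

```python
from collections import defaultdict

# Invented historical decisions in which group "B" was approved less often.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def learn_approval_rates(records):
    """'Train' by computing each group's historical approval rate."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {group: approvals[group] / totals[group] for group in totals}

# The learned rates mirror the bias in the data rather than correcting it.
rates = learn_approval_rates(history)
```

Real systems are far more complex, but the mechanism is the same: if the training data encodes a disparity, an unexamined model will carry it forward.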

4. AI system performance is not limited to accuracy. While the quality and value of an AI system is largely determined by its accuracy, accuracy alone is not sufficient to measure the full range of risks associated with the technology. Moreover, focusing too heavily on accuracy is likely to come at the expense of a system’s transparency, fairness, privacy and security.

Data scientists and lawyers, for example, should work together to create more robust ways of verifying AI performance that address the full spectrum of real-world behaviour and potential harms, whether from security threats or privacy shortfalls.

5. The hard work has just begun. Most organisations using AI technology still need to adopt policies governing its development and use, along with guidance to ensure their systems comply with regulations.

Some researchers, practitioners, journalists, activists and lawyers have begun this work to mitigate the risks and liabilities posed by current AI systems. Companies are beginning to define and implement AI principles and make serious attempts at diversity and inclusion for technology teams.

