Forensic audio research into firearm use gains ground

Last August, the U.S. National Institute of Justice (NIJ) published the findings of an eight-year investigation by Dr. Robert C. Maher on the use of new forensic audio techniques to document and interpret gunshot recordings.

The research was published through the Office of Justice Programs’ National Criminal Justice Reference Service. Maher traces its origins to a phone call asking whether a gun allegedly used in the commission of a crime could be matched to a crime-scene audio recording of a gunshot.

Dr. Maher’s initial work in this field began with understanding the acoustic characteristics of gunshots by obtaining repeated, high-quality recordings that were carried out under controlled conditions. This effort matched a strategic objective of NIJ’s Office of Investigative and Forensic Sciences to support foundational research in forensic sciences.

To do so, he created a device and methodology for collecting gunshot recordings. To measure consistency, reliability and shot-to-shot variability, he gathered data from a variety of firearms (five handguns, one revolver, one shotgun and two rifles).

Maher found that, although there are similarities when the same gun is fired 10 times, there are also notable differences from shot to shot. The duration of the blast varies from one firearm to another, but it also varies from shot to shot for a given firearm. Although the reason for this variability in duration is not yet known, Maher suggests it will affect the forensic analysis of recordings that include gunshots of unknown origin.

Once he had a repeatable method for accurately recording gunshot acoustics under ideal conditions, Dr. Maher was ready to study the limitations that everyday recording devices impose on forensic interpretation. These include mobile phones, land-mobile radios, personal audio recorders, and audio captured by emergency call centres and dispatch-centre recording systems.

He compared signals recorded at 11 different locations by microphones and personal recording devices, plus a body camera worn by the shooter and the internal recording system of a police vehicle. This allowed him to verify geometric predictions of the arrival time and sound level at each recording site. As a further check, he also compared the timings against a recording made by a mobile phone call to a corporate voicemail system.
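
The report’s formulas are not reproduced in this summary, but such geometric predictions rest on two textbook relationships: sound travels at roughly 343 m/s in air, and the level of a spherically spreading wave falls by about 6 dB per doubling of distance. A minimal sketch of that check in Python (the reference level and distances are invented for illustration):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def predict_arrival(distance_m: float,
                    ref_level_db: float = 150.0,
                    ref_distance_m: float = 1.0) -> tuple[float, float]:
    """Predict direct-path arrival time (s) and peak level (dB) at a mic.

    Assumes free-field propagation and spherical spreading from a
    reference level measured at ref_distance_m (values illustrative).
    """
    arrival_time_s = distance_m / SPEED_OF_SOUND
    level_db = ref_level_db - 20.0 * math.log10(distance_m / ref_distance_m)
    return arrival_time_s, level_db

# Example: microphones at 5 m and 120 m from the muzzle
for d in (5.0, 120.0):
    t, lvl = predict_arrival(d)
    print(f"{d:6.1f} m -> {t * 1000:7.2f} ms, {lvl:5.1f} dB")
```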

He then analysed several gunshot recordings together to see whether relevant forensic information could be extracted despite reflections, distortion, coding artefacts and other non-ideal characteristics. From these analyses, Maher developed a processing method to locate the source of the gunshots and suppress incoherent background noise, as well as a method to identify the most likely synchronisation point across multiple audio recordings.
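
The summary does not spell out the synchronisation method, but finding the most likely alignment between two unsynchronised recordings is classically done with cross-correlation. A minimal sketch of that idea, assuming both recordings share a sample rate (the function and test values are our own):

```python
import numpy as np

def best_sync_offset(ref: np.ndarray, other: np.ndarray, sample_rate: int) -> float:
    """Estimate the offset (s) that best aligns `other` with `ref`.

    Normalised cross-correlation: the lag with the highest correlation
    is taken as the most likely synchronisation point. A positive result
    means the event appears that much later in `ref` than in `other`.
    """
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    other = (other - other.mean()) / (other.std() + 1e-12)
    corr = np.correlate(ref, other, mode="full")
    lag = int(corr.argmax()) - (len(other) - 1)  # lag in samples
    return lag / sample_rate

# Synthetic check: the same impulse recorded with a 3,000-sample delay
fs = 48_000
a = np.zeros(fs); a[1_000] = 1.0
b = np.zeros(fs); b[4_000] = 1.0
print(best_sync_offset(b, a, fs))  # ~0.0625 s: the event is later in `b`
```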

For audio forensic analysis, it is increasingly likely that several user-generated recordings will be presented as evidence in a criminal investigation. Audio evidence can come from smartphones, private surveillance systems, body-worn cameras and other unsynchronised recording devices. When numerous user-generated recordings are available, analysing the audio can yield spatial and temporal data about the location and orientation of sound sources, including gunshots and other sounds.

Dr. Maher’s gunshot audio analysis has already been used in the trial of Cleveland police officer Michael Brelo, where Maher concluded that 15 of the 18 shots were fired from Brelo’s gun. An independent FBI investigation corroborated his findings.

_____


Robotaxis: safety implications of their expansion

Robotaxis are not experimental test vehicles, and this is no longer a drill. Many of San Francisco’s driverless cars are commercial robotaxis, competing directly with cabs, Uber and Lyft, as well as public transport. Although they make up a small share of traffic, they are an integral part of the city’s transportation system. Furthermore, Cruise and Waymo, the companies that operate them, seem prepared to extend their services further in San Francisco, Austin, Phoenix and potentially Los Angeles in the coming months.

As Benjamin Schneider argued in MIT Technology Review last July, there is a lack of urgency in the public discourse on robotaxis. He believes that most people, including many public decision-makers, are unaware of how fast this industry is advancing and how serious the short-term labour, transportation and safety impacts may be.

Designated agencies, such as the California Public Utilities Commission, make very important decisions about robotaxis in relative obscurity. Legal frameworks remain woefully inadequate: cities have no regulatory authority over the robotaxis that ply their streets, and the police cannot legally cite them for moving violations.

Unfortunately, there is no government-approved standard framework for assessing the safety of autonomous vehicles. Cruise’s driverless vehicles, in particular, have demonstrated a concerning habit of unexpectedly coming to a halt in the middle of the road, causing significant traffic disruptions that last for prolonged periods. San Francisco police officials have documented at least 92 such incidents in just six months, including three that disrupted emergency services.

While these critical stories matter, they overshadow the overall trend, which has consistently favoured the growth of the robotaxi industry. In recent years, Cruise and Waymo have overcome significant regulatory challenges, entered new markets and logged over a million uneventful, fully driverless miles in prominent American cities.

Robotaxis are operationally quite different from personally owned autonomous vehicles and are in a much better position for commercial deployment. These vehicles can be deployed in a tightly restricted region where they have undergone extensive training. The company responsible for their design can closely monitor their usage, and they can be promptly taken off the road in cases of adverse weather conditions or any other problems.

The very fact that these vehicles are programmed to adhere to traffic laws and speed limits inherently makes them seem like safer drivers compared to a significant portion of human drivers on the road.

Whether robotaxis are ready for large-scale deployment, and by what criteria that readiness should be judged, remains an open question. But barring a significant change in momentum, such as an economic shock or a horrific tragedy, robotaxis are positioned to continue their expansion. That alone warrants a broader discussion of how cities and society will change in the immediate future.

Cruise and Waymo are close to being authorized to offer all-day commercial robotaxi service in virtually all of San Francisco. This could immediately have a considerable economic impact on cab drivers in the city. The same is true for all the other cities where Cruise and Waymo are setting up shop. The prospect of automating professional drivers is no longer theoretical. This is a very real possibility for the near future.

As technology accelerates, public policy must accelerate along with it. But to keep up, citizens must have a clear vision of how fast the future might come.

_____


How to leverage artificial intelligence to strengthen cybersecurity

In June of last year, Jason Lau, chief information security officer of Crypto.com, published an article on the renowned security website ooda.com sharing valuable insights on leveraging artificial intelligence (AI) to enhance cybersecurity.

The author argues that cybersecurity requires establishing a strategic advantage over criminals by proactively identifying and neutralising threats before they cause damage. He also believes that continuous learning from past incidents can improve future responses, using AI-driven tools to identify, understand and neutralise threats. To that end, he proposes the following steps:

• Using an automated, AI-driven threat intelligence platform that recognises external signatures, tactics, techniques and procedures in real time. By evolving and learning from attack methods, such a platform becomes significantly faster at identifying and neutralising phishing, malware and other endpoint threats.

• Implementing continuous automated monitoring and alerting for sensitive assets, from inventorying data used across the enterprise to scanning for personally identifiable information and flagging specific instances of plain-text exposure (a minimal sketch of such a scan follows this list).

• Performing continuous AI-based code reviews that search for unhandled exceptions, cross-site scripting flaws, code injection, buffer overflows and more, and automatically replacing vulnerable code with safe code while maintaining its functional integrity.

• Engaging AI to detect malicious AI itself: for example, indirect prompt injection attacks, an emerging threat in which adversaries attempt to manipulate large language models, as well as AI-generated malware and more.
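
Lau’s article names capabilities rather than tooling. As an illustration of the plain-text exposure scan mentioned in the second point above, here is a minimal regex-based sketch; the patterns and file-walking logic are simplified assumptions, not any vendor’s actual scanner:

```python
import re
from pathlib import Path

# Simplified, illustrative patterns; a production scanner would add
# validation (e.g. Luhn checks for card numbers) and context analysis.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_plaintext_pii(root: Path) -> list[tuple[Path, int, str]]:
    """Walk `root` and report (file, line number, pattern) per hit."""
    findings = []
    for path in root.rglob("*.txt"):
        for lineno, line in enumerate(
                path.read_text(errors="ignore").splitlines(), start=1):
            for name, pattern in PII_PATTERNS.items():
                if pattern.search(line):
                    findings.append((path, lineno, name))
    return findings

if __name__ == "__main__":
    for path, lineno, name in scan_for_plaintext_pii(Path(".")):
        print(f"ALERT {name}: {path}:{lineno}")
```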

As we move into an increasingly interconnected future, AI is undoubtedly a powerful ally, a cutting-edge piece of the puzzle, helping to quickly predict, prepare for and prevent impending cyberthreats.

However, it is also becoming very clear that owning such a powerful tool is not enough. Today’s cybersecurity leaders are called upon to do more than react and respond. They have to take a proactive stance, constantly planning, predicting and positioning their defences, making the necessary moves to stay one step ahead of the relentless wave of cyber adversaries.

However, by moving too fast, without a thoughtful and reflective approach, we risk treading on the fragile ground of ethics. How AI is used in cybersecurity is as important as why and where we choose to deploy it. As leaders, it is imperative to sow the seed of ethical considerations in our AI strategies, establishing a strong moral compass to guide us through the maze of technological possibilities.

In this cybersecurity chess game, tomorrow’s challenges are already on our doorstep. Having a cybersecurity plan is no longer an option, but a requirement for survival. But a critical aspect to consider is how AI can help empower security teams to become more agile and to adapt to new and emerging cyberthreats.

_____

Aquest apunt en català / Esta entrada en español / Post en français

Is a pause in AI development the answer?

The undeniable challenges posed by chatbots and AI will not suddenly disappear. This is how the specialised security website OODA Loop put it. And while many well-informed and well-intentioned people have signed an open letter calling for a six-month pause in advanced AI research, doing so is unrealistic and unwise. The problems are complex and many-sided, involving so many stakeholders, intersecting domains and competing interests that they are difficult to address. A pause in technological research will not help solve these human conundrums.

What will help is systematic, methodical and massive public engagement that informs pilot projects on the business and civilian implications of artificial intelligence at the national and local levels. Everyone will be affected by the promises and potential dangers of the paradigm shift presented by advances in AI. Therefore, everyone should have a voice, and everyone should work to ensure that society is informed and prepared to thrive in a rapidly changing world that will soon look very different.

At first glance, stopping its development may seem compelling given the challenges posed by large language models (LLMs), but there are several reasons why this approach is flawed. To begin with, it is essential to take global competition into account. Even if all U.S. companies agreed to a pause, other countries would continue their AI research, making any national or international agreement less effective.

Secondly, AI diffusion is already underway. Stanford University’s Alpaca experiment demonstrated that a smaller open model could be fine-tuned to approach the capabilities of the model behind ChatGPT for less than $600. This advance accelerates the spread of AI by making it more accessible to various actors, including those with malicious intent.

Thirdly, history teaches us that a pause in AI could lead to secret development. Publicly halting AI research could prompt nations to conduct advanced AI research in secret, which could have dire consequences for open society. This scenario is similar to that of the Hague Convention of 1899, where the major powers publicly banned poison-filled shells, only to continue their research in secret, and eventually deploy noxious gases during World War I.

Going forward, to effectively address the challenges arising from AI, a proactive, results-oriented and cooperative approach with the public should be encouraged. Think tanks and universities can engage in dialogue with the public about how to work, live, govern and coexist with modern technology that affects society as a whole. By including diverse voices in the decision-making process, we can better address and solve complex AI challenges at the regional and national levels.

In addition, industry and political leaders should be encouraged to participate in the search for non-partisan, multi-sectoral solutions to keep civil society stable. Working together, the gap between technological advances and their social implications can be bridged.

Finally, it is essential to pilot AI schemes in various sectors, such as labour, education, health, law and civil society. We should learn how to create civic environments where AI can be responsibly developed and deployed. These initiatives will help us better understand and integrate artificial intelligence into our lives, reducing risk while ensuring that its potential is realised for the greater good.

_____


Canadian police agencies use innovative app to assess mental health calls

British Columbia’s police chiefs have successfully negotiated with the provincial government for the financial support to develop an application that can detect, document and assess what kind of mental health resources would best serve people at risk.

This application is designed to help officers responding to an incident avoid conflict and determine what type of assistance is best for a person in crisis. B.C. police estimate that between 30% and 50% of calls for police service may involve mental health problems. Until now, police often asked the same questions the hospital would later repeat. With the app, officers begin sharing information with the hospital and health professionals before they even leave the scene.

In an urgent, violent or high-risk situation, only minimal information is sent ahead to Surrey Memorial Hospital. For a non-life-threatening call, police officers complete a checklist (irritability, delusions, hallucinations, etc.) and the application generates a report that goes to a hospital physician, who may recommend intervention under the Mental Health Act or suggest alternative care.
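
HealthIM’s actual screening instrument and report format are not public. Purely as an illustration of the checklist-to-report flow described above, here is a hypothetical sketch; the field names and routing logic are invented:

```python
from dataclasses import dataclass, field

# Checklist items taken from the examples in the article; the real
# HealthIM screening instrument is more extensive and is not public.
CHECKLIST_ITEMS = ("irritability", "delusions", "hallucinations")

@dataclass
class CrisisScreening:
    officer_id: str
    observations: dict[str, bool] = field(default_factory=dict)

    def to_report(self) -> dict:
        """Summarise the on-scene checklist for the receiving physician."""
        flagged = [item for item in CHECKLIST_ITEMS
                   if self.observations.get(item)]
        return {
            "officer": self.officer_id,
            "flagged": flagged,
            "needs_physician_review": bool(flagged),
        }

report = CrisisScreening("D-1024", {"delusions": True}).to_report()
print(report)  # routed to the hospital before officers leave the scene
```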

When an individual has been previously assessed with the HealthIM application, there is a baseline of information available to officers, including specific advice to de-escalate the intervention and avoid any triggers. Information can help de-escalate a situation earlier and more safely, resulting in better care for the affected person.

As Penny Daflos of CTV News Vancouver explains in a report, the HealthIM app, promoted as a better connection between police and the healthcare system, has been the fruit of a long, comprehensive plan to address one of the most problematic public safety challenges.

According to Royal Canadian Mounted Police Superintendent Todd Preston and the president of the BC Association of Chiefs of Police, this app can help determine whether police service users actually need medical assistance. This can spare officers from spending hours waiting to hand patients over to health professionals, a procedure that can also stigmatise those involved.

The Delta Police Department is currently the only B.C. agency using the new system, which was first established among Ontario’s municipal police forces and has gradually been implemented across the Prairie provinces.

Since Delta implemented the app in 2019, time spent on paperwork has fallen, reporting has been standardised, information sharing with healthcare workers has improved, and 331 fewer people were taken for assessment during 2021 alone.

_____


United States activates dedicated mental health crisis hotline

Although the Suicide Prevention Lifeline has long been a valuable resource, remembering the 10-digit phone number is not easy, especially during a crisis. Consequently, many people dial the 911 police emergency line, with calls for help that should have been directed to mental health specialists.

As reported by a U.S. Department of Justice website, the rollout of the new telephone and emergency service may be a breath of fresh air for people with mental crises or illnesses.

There is now a three-digit number that is easy to remember to call, chat with or text to get confidential access to mental health specialists 24 hours a day: 988.

The 988 Suicide and Crisis Lifeline has become a much-needed resource of great benefit, not only for at-risk individuals, but also for police departments overwhelmed by a growing number of mental health-related calls. It is currently estimated that, in some departments, 911 calls with a mental health component account for more than 30% of total calls for service.

This service has been put in place because many people calling 911 with mental health emergencies ended up under arrest, in jail or stuck in hospital emergency departments waiting hours or even days for care. And they often ended up back on the street, in jail or in the hospital.

In parallel with the launch of the 988 service, SAMHSA has published a guide, 988 Suicide and Crisis Lifeline, with suggestions for successfully implementing the new line:

  • Develop cross-system partnerships that connect mental health and other health, police and fire professionals with the agency that manages the call centre and the services that can be dispatched.
  • Engage key stakeholders, including government and community leaders.
  • Ensure that the community has the resources and infrastructure to help patients. SAMHSA’s National Guidelines for Behavioral Health Crisis Care can be used to identify what crisis services exist at the local, regional, or state level.
  • Review policies, procedures, and training materials to ensure that 988 is effectively incorporated into their crisis responses.
  • Take steps to ensure that calls can be transferred seamlessly between 988, 911 and responding officers, so that a person in crisis can be connected to 988 services on request.
  • Promote 988, educating the public as well as local law enforcement officers on how 988 operates as it is deployed.
  • Coordinate with federal stakeholders to ensure that the department and the community have the most up-to-date information on what is available in each state.

_____


Identifying lies to improve security

Researchers at the RAND Corporation have published a report explaining that machine learning (ML) models can identify signs of deception during national security background-check interviews. Their most accurate approach to detecting deception was an ML model that counts how often respondents use common words.

The researchers’ experiment worked as follows:

  • The 103 participants read a story about how, in 2013, Edward Snowden leaked classified information from the National Security Agency.
  • Participants were randomly assigned to read the same story, but it was presented either as a news report or as a memo with markings indicating that it contained confidential information.
  • Participants were assigned to one of two groups in order to be interviewed. One group was told to lie about what they had read and the other to tell the truth.
  • Former law enforcement officers interviewed participants via videoconference and text-based chat, in random order.

The RAND researchers used the interview and chat transcripts to train different ML models to see whether they could distinguish liars from truth-tellers.
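
RAND does not publish its models or data, but a model that “counts the number of times respondents use common words” corresponds to a bag-of-words classifier. A minimal sketch of that setup, with placeholder transcripts standing in for the study’s private data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder transcripts; the real study used interview and chat
# transcripts from 103 participants, which are not public.
transcripts = [
    "I read the story and it described the leaked documents",
    "honestly I do not really remember what the memo said",
    "I am sure it was just an ordinary news report",
    "there was nothing confidential in what I was given",
]
labels = [0, 1, 0, 1]  # 0 = truthful group, 1 = instructed to lie

# Count common-word frequencies, then fit a linear classifier on them.
model = make_pipeline(
    CountVectorizer(max_features=500),
    LogisticRegression(max_iter=1000),
)
model.fit(transcripts, labels)
print(model.predict(["I definitely did not read anything confidential"]))
```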

These scholars reached three major conclusions:

  • It is not just what one says, but how one says it: word frequency, speech cadence, word choice and other linguistic signals can point to potential lies.
  • ML models can detect signs of deception in the way people express themselves, even in text-based chats without the presence of a human interviewer.
  • The models are tools that can add to existing interviewing techniques, but they cannot completely replace these techniques.

In terms of the implications this may have for security, the researchers highlight the following:

  • Men conduct many of the background investigations for security clearances, yet at least a quarter of security clearance applicants are women. It is important to understand how the gender of the interviewer might affect the modelling results.
  • Inappropriate use of ML tools could lead to inequities in the acceptance and rejection rates of security clearance applicants.
  • Due to potential biases in ML model results and in humans, it is important to maintain a system of checks and balances that includes both humans and machines.
  • The models found that men and women used different words to deceive. Men were less likely to use the word “I” when lying and more likely to use it when telling the truth.

_____


Working to reduce deaths in police custody in the U.S.

The security research website rand.org has published a study by a group of researchers whose goal is to decrease the number of deaths occurring in U.S. law enforcement custody.

The researchers – Duren Banks, Michael G. Planty, Madison Fann, Lynn Langton, Dulani Woods, Michael J. D. Vermeer, and Brian A. Jackson – set out to identify high-priority needs for the U.S. criminal justice system, starting from some important questions:

  • What are the different definitions and metrics of deaths occurring in law enforcement custody?
  • What barriers or facilitators affect the communication of this information at the state or national level?
  • What information about deaths taking place in police custody is crucial for supporting policies and practices that aim to reduce these deaths?

In 2013, the U.S. Congress enacted the Death in Custody Reporting Act (DCRA) to tackle the lack of reliable information on law enforcement-related deaths in correctional facilities.

The U.S. Department of Justice has undertaken several activities designed to respond to the provisions of the DCRA, as well as its own federal mandates, to build a comprehensive understanding of the prevalence and characteristics of deaths taking place in police custody. In spite of these efforts, no national data collection program currently captures all deaths occurring in law enforcement custody. These data are fundamental for supporting strategies to bring down the number of such deaths: promoting public safety through suitable responses to reported crimes, calls for service and police-community encounters, and building trust with communities.

In examining what it would take to develop and leverage a national data collection on law enforcement-related deaths, the researchers concluded that limiting the scope of data collection to fatal incidents would be insufficient to understand and reduce deaths in law enforcement custody.

Among the recommendations of the study’s authors are:

  • Specify national standards for a more inclusive collection that encapsulates all critical incidents (fatal incidents and all those in which police use lethal force), regardless of whether the incident results in a death.
  • Support more trustworthy and comprehensive reporting in existing systems that depend on law enforcement participation by allocating resources to data providers, leveraging information previously collected by these agencies, and otherwise incentivizing participation.
  • Work with the research community, law enforcement and other relevant stakeholders to build appropriate indicators and toolkits and spread information on the appropriate and responsible use of these data.
  • Create a taxonomy of deaths or critical incidents taking place in the custody of law enforcement to provide the context necessary to understand the role of law enforcement (a machine-readable sketch of such a taxonomy follows this list).
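
The study’s actual taxonomy is not reproduced here; the sketch below only illustrates what a machine-readable version might look like, with category names that are assumptions rather than the authors’ own:

```python
from enum import Enum

class CustodyIncident(Enum):
    """Hypothetical critical-incident categories for illustration only;
    the study's own taxonomy is not reproduced in this summary."""
    DEATH_IN_FACILITY = "death in a correctional or detention facility"
    DEATH_DURING_ARREST = "death during arrest or apprehension"
    LETHAL_FORCE_NO_DEATH = "lethal force used, no death resulted"
    MEDICAL_EMERGENCY_IN_CUSTODY = "medical emergency while in custody"

print(CustodyIncident.DEATH_DURING_ARREST.value)
```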

_____


Dispersal of homeless people criminalises them

A research study carried out by several criminologists in ten towns in England and Wales finds that orders restricting the use of public space only end up recycling the problem of homelessness. Several British newspapers covered the findings, including The Guardian.

Councils using Public Space Protection Orders (PSPOs) to impose £100 fines aimed at controlling so-called ‘anti-social behaviour’ do nothing more than cause homeless people to come back to the same space time and time again.

The study found that dispersing homeless people from city centres fails to stop antisocial behaviour and instead wrongly criminalises these people.

The research, carried out by Sheffield Hallam University and concluding with recommendations for fairer treatment of people living on the streets, has been endorsed by Crisis, the homelessness charity. Councils in England and Wales that use PSPOs to impose £100 fines to control or prohibit behaviour such as drinking, pitching tents or sleeping in public spaces are simply finding that, for people living on the streets, the problem is not solved.

The orders are also misused to target behaviour that might not be considered antisocial, such as begging or sleeping rough, which is unlikely to harm others. What is more, in some cases homeless people have described physical and verbal abuse at the hands of police officers.

PSPOs have been used in England since 2014 with the aim of deterring behaviour deemed antisocial, but the focus on their impact on rough sleepers comes amid rising homelessness caused by an increase in evictions. Nearly 20,000 homes in England and Wales were left empty by evictions during the 2021/22 period, almost 9,000 more than in the previous period, according to annual figures released by the Department for Levelling Up, Housing and Communities.

In a seaside town in the east of England, for example, where begging, drug use, street drinking, public urination and defecation, sleeping in public places and pitching tents are all prohibited, locals and tourists can often be seen strolling around while eating or drinking, and the legislation is not applied to them.

A spokesman for the National Police Chiefs’ Council believes that recent joint work with Crisis has helped ensure that officers understand why people end up sleeping rough, what support they need and, most importantly, what can be done to help them escape homelessness.

Cllr Nesil Caliskan of the Local Government Association’s Safer and Stronger Communities Board believes that PSPOs should be used as part of a broader set of measures tied to support services that help address the root causes of homelessness.

In other words, the dispersal powers associated with PSPOs have created vicious cycles of intimidation, dispersal and displacement that only recycle the problem of people living on the street rather than deterring, let alone preventing, the problems associated with homelessness. This would be one of the main conclusions of the study according to Peter Squires, Emeritus Professor of Criminology and Public Policy at the University of Brighton.

_____


Artificial intelligence and policing: a matter of trust

The prospect of increased police use of artificial intelligence (AI), especially around predictive policing, has raised concerns about potential bias and the need for transparency and explainability.

Dr. Nick Evans of the University of Tasmania (Australia) argues in an article in Policing Insight that, with the right safeguards, the use of AI could establish built-in objectivity for policing decisions and, potentially, greater confidence in those decisions.

Although predictive policing applications raise the thorniest ethical and legal issues and thus deserve serious consideration, it is also important to highlight other applications of AI for policing.

Teagan Westendorf’s ASPI report, ‘Artificial Intelligence and Policing in Australia’, is a recent example. Westendorf claims that Australian government policies and regulatory frameworks do not sufficiently capture the current limitations of AI technology and that these limitations may compromise principles of safe and explainable AI and ethics in the context of policing.

AI can help investigations by speeding up the transcription of interviews and analysis of CCTV footage. Image-recognition algorithms can also help detect and process child exploitation material and thus help limit human exposure.

Like all humans, police officers may hold conscious and unconscious biases that can influence policing decisions and outcomes. Predictive policing algorithms often must be trained on data sets that capture these biased outcomes.

All in all, a key advantage of AI lies in its ability to analyse large data sets and detect relationships too subtle for the human mind to identify. However, making models more understandable by simplifying them may require trade-offs in sensitivity, and therefore in accuracy.

In fact, research suggests that when individuals trust the decision-making process, there is a higher likelihood that they will trust the outcomes in justice settings, even if these outcomes are unfavourable.

As Westendorf highlights, steps can be taken to mitigate bias, such as pre-emptively coding against predictable biases and involving human analysts in the processes of building and leveraging AI systems.

Recent research has found that there is a correlation between people’s level of trust in the police (which is relatively high in Australia) and their level of acceptance of changes in the tools and technology that the police use.

With these types of safeguards in place (as well as deployment reviews and evaluations), the use of AI may lead to establishing built-in objectivity for policing decisions and reducing reliance on heuristics and other subjective decision-making practices. Over time, the use of AI may help improve police outcomes.

However, the need for explainability is only one consideration for improving accountability and public trust in police use of AI systems, especially when it comes to predictive policing.

In another study, participants exposed to allegedly successful police applications of AI technology were more likely to support broader police use of these technologies than those exposed to unsuccessful uses or not exposed to examples of AI application.

This suggests that focusing on broader public trust in the police will be essential in order to maintain public trust and confidence in the use of AI in policing, regardless of the degree of algorithmic transparency and explainability. The goal of transparent and explainable AI should not ignore this broader context.

_____
