Identifying lies to improve security

Researchers at the RAND Corporation have published a report showing that machine learning (ML) models can identify signs of deception during national security background check interviews. The most accurate approach they tested was an ML model that counts how often respondents use common words.

The researchers’ experiment worked as follows:

  • The 103 participants read a story about how, in 2013, Edward Snowden leaked classified information from the National Security Agency.
  • All participants read the same story, but they were randomly assigned one of two versions: it was presented either as a news report or as a memo with markings indicating that it contained confidential information.
  • Participants were then split into two groups for interviewing: one group was told to lie about what they had read, the other to tell the truth.
  • Former law enforcement officers interviewed each participant both by videoconference and by text-based chat, in random order.

The RAND researchers used the interview and chat transcripts to train different ML models and tested whether these could distinguish liars from truth-tellers.
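The report does not publish its code, but the word-count approach it describes resembles a standard function-word frequency pipeline. The following is a minimal illustrative sketch, not RAND's implementation: the word list, transcripts, labels and the nearest-centroid "classifier" are all invented placeholders standing in for the researchers' actual features and models.

```python
from collections import Counter

# Hypothetical set of common "function" words; the actual feature set
# used by the RAND researchers is not published in this summary.
FUNCTION_WORDS = ["i", "the", "a", "and", "to", "of", "that", "it"]

def features(transcript):
    """Relative frequency of each function word in a transcript."""
    tokens = transcript.lower().split()
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def train_centroids(samples):
    """Average feature vector per label -- a toy stand-in for a real classifier."""
    sums, ns = {}, {}
    for text, label in samples:
        vec = features(text)
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        ns[label] = ns.get(label, 0) + 1
    return {label: [v / ns[label] for v in acc] for label, acc in sums.items()}

def classify(text, centroids):
    """Assign the label whose centroid is closest in feature space."""
    vec = features(text)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(vec, c))
    return min(centroids, key=lambda label: dist(centroids[label]))
```

In a real pipeline the centroid step would be replaced by a trained classifier, and the feature set would be far richer than eight words, but the core idea is the same: the signal lies in how often ordinary words appear, not in the story's content.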

These scholars reached three major conclusions:

  • Deception shows not only in what one says but in how one says it: word frequency, speech cadence, word choice and other linguistic signals can flag potential lies.
  • ML models can detect signs of deception in the way people express themselves, even in text-based chats without the presence of a human interviewer.
  • The models are tools that can add to existing interviewing techniques, but they cannot completely replace these techniques.

In terms of the implications this may have for security, the researchers highlight the following:

  • Many background investigations for security clearances are conducted by men, while at least a quarter of security clearance applicants are women; it is therefore important to understand how the gender of the interviewer might affect the modelling results.
  • Inappropriate use of ML tools could lead to inequities in the acceptance and rejection rates of security clearance applicants.
  • Due to potential biases in ML model results and in humans, it is important to maintain a system of checks and balances that includes both humans and machines.
  • The models found that men and women used different words to deceive. Men were less likely to use the word “I” when lying and more likely to use it when telling the truth.

