Challenges of Predictive Analytics for Law Enforcement

The idea of predictive analytics in policing is certainly nothing new to law enforcement or its critics. Once a simple thought exercise (and the trope behind more than a few science fiction stories), the concept has reached a point in our technological evolution where certain tools utilized by agencies resemble, to some degree, those seen in stories like The Minority Report. Today, these powerful new capabilities draw a substantial amount of real-world controversy, which only grows as the technology fueling the debate becomes more sophisticated.

Fraught with racial, social, privacy and socioeconomic concerns, much of this controversy has centered on what happens when police use these tools to predict and prevent external crime. But there is a second, internal concern along the same lines, one presented to HR personnel and other hiring stakeholders within law enforcement agencies. Can the same types of tools legitimately help predict malicious or excessive use of force? Can they be used to root out other types of misbehavior within the ranks?

So far, the answer to both questions appears to be yes, but it’s a highly qualified yes. As with so many other technological tools “revolutionizing” the law enforcement world, treating these sensitive tools as the final authority can breed negative perceptions among rank-and-file personnel that damage the solution’s actual value.

Defining Predictive Analytics in the Law Enforcement Context

Let’s say a law enforcement supervisor receives an email containing insights from an automated platform that a high-level HR manager just implemented. This document, full of data on a third-year recruit, suggests behavior patterns that have historically preceded bigger misbehavior in other officers’ careers: information that could expose the department to liability or, worse, result in someone being harmed or killed. For example, the data points in the analysis may include a history of substance abuse on the recruit’s part or a disproportionately high number of complaints that, while all dismissed, combine to tell a worrying story.

This, at a high level, is what a predictive solution may look like in the average law enforcement workplace. There is little doubt a tool with such capacity has value for departments and the public they serve. As one FiveThirtyEight piece on the matter notes, and as readers undoubtedly know, an algorithm can analyze data far faster than humans can, and its lack of human biases allows it to make connections in data that whole teams of humans might overlook: connections that are not intuitive but still seem to predict the outcome the solution is engineered to search for.

The Chicago Police Department’s Early Intervention Program, or EIP, is perhaps the most notable culmination of these ideas so far. The predictive platform beneath the policy works largely by searching, at large scale, for connections defined by the University of Chicago data scientists who designed the solution. The EIP was set up in the aftermath of the murder of Laquan McDonald at the hands of a Chicago police officer, with the aim of preemptively identifying officers who, by the data’s prediction, may be more likely to be involved in incidents of misbehavior in the future. The factors it weighs, explained in detail on a Chicago Police Department webpage, include tardiness and complaint thresholds (both sustained and non-sustained) during a defined time period.
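
To make the mechanics concrete, here is a minimal sketch in Python of what a threshold-based early intervention flag might look like. The record structure, field names, and threshold values are assumptions for illustration only, not the Chicago EIP’s actual criteria:

    from dataclasses import dataclass

    @dataclass
    class OfficerRecord:
        # Hypothetical fields; a real early-intervention system draws on far more.
        name: str
        late_arrivals_12mo: int              # tardiness events in the review window
        sustained_complaints_12mo: int
        non_sustained_complaints_12mo: int

    # Illustrative thresholds only -- not the Chicago EIP's actual values.
    LATENESS_THRESHOLD = 5
    SUSTAINED_THRESHOLD = 1
    TOTAL_COMPLAINT_THRESHOLD = 3

    def flag_for_review(rec: OfficerRecord) -> bool:
        """Return True if any hypothetical threshold is crossed.

        A flag is a prompt for human review, never an automatic
        disciplinary action.
        """
        total = rec.sustained_complaints_12mo + rec.non_sustained_complaints_12mo
        return (rec.late_arrivals_12mo >= LATENESS_THRESHOLD
                or rec.sustained_complaints_12mo >= SUSTAINED_THRESHOLD
                or total >= TOTAL_COMPLAINT_THRESHOLD)

    recruit = OfficerRecord("Recruit A", late_arrivals_12mo=2,
                            sustained_complaints_12mo=0,
                            non_sustained_complaints_12mo=4)
    print(flag_for_review(recruit))  # True: total complaints cross the threshold

Note that the output is a flag for human follow-up, not a verdict; that distinction becomes important below.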

The Problems of Predictive Analytics

The concerns surrounding predictive tools can be distilled into two categories: 

  • How accurate the forecast is.
  • What an agency should ultimately do with what it believes is accurate, actionable data. 

In other words, a prediction is a prediction because it hasn’t happened yet, and in a government field dominated by unions, it would be extremely difficult for most supervisors to fire or even discipline an officer because the software says they are going to misbehave soon, no matter how much faith the agency puts in the prediction.

Agencies dabbling in any form of predictive analytics must always be conscious of the fact (and the mathematician’s mantra) that data can fail in unexpected ways. While a sufficiently narrowed, highly customized set of parameters may be able to flag a future incident with some accuracy, as one Machine Learning Times piece notes, failure to ask the right questions in the design phase can create inaccurate forecasts that appear completely valid from ground level.

“[I]n reality there are many choices to be made along the way, and many pitfalls to catch the unwary. The ‘art’ of data science is about choosing ‘interesting questions’ to ask of the data.”

Effective prediction and forecasting require intense human intervention at every step of the process, from building a predictive pattern to applying it in the field.
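
One concrete form that human intervention can take is routinely checking whether past forecasts actually held up. The sketch below, using entirely hypothetical labeled outcomes, computes precision (of those flagged, how many had an incident?) and recall (of those with an incident, how many were flagged?):

    def precision_recall(history: list[tuple[bool, bool]]) -> tuple[float, float]:
        """Each tuple is (was_flagged, incident_actually_occurred)."""
        tp = sum(1 for flagged, occurred in history if flagged and occurred)
        fp = sum(1 for flagged, occurred in history if flagged and not occurred)
        fn = sum(1 for flagged, occurred in history if not flagged and occurred)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        return precision, recall

    # Hypothetical review batch: most flags here never led to an incident,
    # a pattern an unexamined rollout could easily miss.
    history = [(True, True), (True, False), (True, False),
               (False, False), (False, True)]
    print(precision_recall(history))  # (0.333..., 0.5)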

External crime prediction has already encountered the questions that arise from relying on advanced analytics technology. Say an agency is utilizing a Geographic Information System (GIS) as part of a predictive crime reduction strategy. The agency, concerned about fairness and accuracy, will likely examine both the assumptions being made by the data and the outcomes it supposes will arise. Early on, the questions sound something like:

“The system says this is a crime hotspot—what data does it use to make that qualification? Are those points what we want it looking for?”

Once the system has been in place for some time, the questions shift to:

“Is this place still a crime hotspot? Why, if we’re putting additional prevention resources there? Is crime on the increase, or are we examining this the wrong way?”
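
Both sets of questions trace back to how the system decided a place was a “hotspot” in the first place. One common approach is simply binning incident coordinates into grid cells and counting, sketched below; the coordinates, cell size, and threshold are illustrative assumptions, not any particular vendor’s method:

    from collections import Counter

    # Hypothetical incident coordinates (longitude, latitude).
    incidents = [(-87.650, 41.850), (-87.652, 41.851), (-87.648, 41.849),
                 (-87.700, 41.900), (-87.400, 41.750)]

    CELL = 0.01         # illustrative cell size, in degrees
    HOTSPOT_MIN = 3     # illustrative count threshold

    def to_cell(lon: float, lat: float) -> tuple[int, int]:
        """Snap a coordinate to a grid cell index."""
        return (round(lon / CELL), round(lat / CELL))

    counts = Counter(to_cell(lon, lat) for lon, lat in incidents)
    hotspots = [cell for cell, n in counts.items() if n >= HOTSPOT_MIN]
    print(hotspots)  # one cell crosses the threshold

Note what the sketch does not decide: which incidents were recorded in the first place, and whether those are the right points to count. Those remain human questions.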

While concerns of fairness are often not as important in the employer–employee relationship as they are in the police–public relationship, agencies must nevertheless take similar pains in collecting data and acting upon it. To do this, human interaction must be as big a part of the analytics program as the software powering it.

Robust, Historical Data Collection—and Why It Matters

Likewise, while computers may excel at making “unfindable” connections in points of data, humans are still superior when it comes to using nuance and context to navigate situations. 

People (like computers) do best when they have large stores of knowledge from which to inform their conclusions—and especially when they’re able to access specific data that has evaded their immediate recall. This is where recordkeeping of sufficient depth, kept over a sufficient period, becomes ever more important. 

Consider the following two retellings: 

  1. Recruits Lewis and Davis were both subject to disciplinary measures for engaging in a fistfight in an academy class. 
  2. Recruit Davis was noted to have bullied another recruit on several occasions, and witnesses in the class say Recruit Lewis was simply standing up for the victim before Davis shoved her down; Lewis was disciplined only because she did not stop fighting when the supervisor repeatedly ordered the pair to stop.

Here, a checkmark in an Excel spreadsheet indicating discipline for both recruits would not be a fair retelling. Were a supervisor to compare the two employees as their employment progressed, however, this nuanced information would provide important context. The supervisor may note that the aggressive recruit has far more complaints on file than the other, for instance, and decide to personally intervene the next time a complaint occurs, as one likely will.

In this situation, the human actor was best able to take action because the systems powering the prediction gave them the information they needed at a glance. In this hypothetical situation, and in countless real-world agencies, the system can provide this critical support because it collects data with significant texture (instructor notes on the classroom incident, for example) alongside line-item points of collection over a long period of time.
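
In data-structure terms, the difference between the two retellings is the difference between a bare flag and a record that preserves context. A minimal sketch, with hypothetical field names:

    from dataclasses import dataclass

    # Retelling 1: the same bare checkmark for both recruits.
    disciplined = {"Lewis": True, "Davis": True}

    # Retelling 2: a record that keeps the texture a reviewer needs.
    @dataclass
    class DisciplineRecord:
        recruit: str
        action: str
        instructor_notes: str       # free-text context from the incident
        prior_complaints: int = 0   # line-item history over time

    records = [
        DisciplineRecord("Davis", "reprimand",
                         "Bullied another recruit repeatedly; initiated the fight.",
                         prior_complaints=4),
        DisciplineRecord("Lewis", "reprimand",
                         "Defended the victim; failed only to disengage on command.",
                         prior_complaints=0),
    ]

    # The same checkmark now carries very different stories.
    for rec in records:
        print(rec.recruit, rec.prior_complaints, "-", rec.instructor_notes)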

In Utah, the state’s Peace Officer Standards and Training Academy (POST, the agency tasked with providing initial and ongoing education to officers) knew that failure to keep accurate, in-depth training records could expose it to internal strife, lapses in education, legal liability, and more. To avoid these undesirable outcomes, the academy implemented a training and record-keeping system, gaining both the capacity to store more data per employee and the ability to check what it collected against various self-defined employment and performance criteria; for example, quickly checking which officers are qualified for an unexpected promotion opportunity.
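
A self-defined criteria check of the kind described above might look something like the following sketch. The records, fields, and promotion criteria are entirely hypothetical; they stand in for whatever an agency’s actual system stores:

    # Hypothetical officer records, as a real system might pull them
    # from a training and record-keeping database.
    officers = [
        {"name": "Ames",  "years": 6, "certs": {"firearms", "supervision"}, "open_complaints": 0},
        {"name": "Brook", "years": 3, "certs": {"firearms"},                "open_complaints": 1},
        {"name": "Cruz",  "years": 8, "certs": {"firearms", "supervision"}, "open_complaints": 2},
    ]

    # Illustrative promotion criteria, defined by the agency itself.
    def qualifies(officer: dict) -> bool:
        return (officer["years"] >= 5
                and {"firearms", "supervision"} <= officer["certs"]
                and officer["open_complaints"] == 0)

    print([o["name"] for o in officers if qualifies(o)])  # ['Ames']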

Naturally, not every “unexpected” event will fall into such a neatly defined box. Rather than trying to collect data for every conceivable situation, the goal of effective data collection and prediction is to have data of such quality that it applies to almost any situation. The distinction is small on paper, but massive in practice.

In all, predictive analytics is ultimately likely, like most other technologies, to create waves in law enforcement: capable in the right context, but only with a healthy dose of human intervention throughout the process. Wherever their own data collection efforts currently stand, law enforcement agencies would be wise to handle the medium with due care.

About the Author

Ari Vidali is Founder & CEO of Envisage Technologies, creators of the Acadis Readiness Suite, a comprehensive, modular training management framework that modernizes and streamlines the complex operations of nearly 11,000 public safety agencies, serving over 2 million first responders via their FirstForward online training network. In his 20-year career in high technology, Mr. Vidali has been the lead founder of five high-tech enterprises. Throughout his career, he has been instrumental in developing innovative readiness strategies for military, public safety and law enforcement commands. As an industry expert, inventor, speaker and author on the subject of technology in support of readiness, he has been featured in numerous national and international publications, including the NATO Science for Peace and Security Series (IOS Press). His industry awards include the Sloan-C Best Practices Award and the EISTA Best Whitepaper Award.
