Reigniting Trust

Can We Trust Predictive Policing?

Duncan Purves, Ph.D., is exploring the impact of algorithmic policing on community trust at the very time the nation is rethinking law enforcement

Predictive technology is used in many fields, from forecasting farm yields and weather patterns to analyzing how stocks will perform or how disease may spread. Now police are using it, too.

An increasing number of police departments are using algorithm-based systems not only to analyze where crime takes place, but to predict where it will happen next. Informed by vast quantities of data, these systems promise to help departments allocate resources, set watch schedules and take other measures to lower crime.

Duncan Purves

This is not some futuristic vision. It’s in use now and has been for years. Over the past decade, departments in Los Angeles, Chicago, Miami, New York and other cities—roughly 50 nationwide—have used commercial and home-brewed systems to pilot and test the utility of crime prediction.

Yet predictive technologies have been condemned in many quarters as tools that promote prejudice and racial injustice. Activists and civil rights groups have denounced them. A group of mathematicians launched a boycott. A federal appeals court issued harsh criticism. And the city of Santa Cruz in California, one of the first to adopt predictive policing, became the first to ban its use earlier this year.

As the entire country focuses on policing in the wake of George Floyd’s death, which by some counts moved more than 15 million Americans to join street protests nationwide, major questions about predictive policing and its impact on people and neighborhoods have largely gone unaddressed.

In the field, the focus has been on efficacy: does it work? Questions about ethics, trust and the technology’s impact on police behavior and tactics are only now being explored, and many remain unanswered.

“I think there’s space for a philosopher,” said Duncan Purves, an assistant professor of philosophy at the University of Florida, “somebody trained in ethics to make some headway on these issues.”


The Ethics of Prediction

Purves specializes in the ethical issues that surround artificial intelligence and has built a body of work studying autonomous weapons systems, which target enemies without human oversight. Predictive policing caught his eye a few years ago when he realized the field was attracting very little scholarship beyond questions of utility.

Last year, Purves won a National Science Foundation grant to study the ethics of predictive policing and the algorithms used to identify places and people at high risk of crime. He is also working with the University of Florida’s Consortium on Trust in Media and Technology to investigate the impact of predictive policing on community trust.

The goal of predictive policing is to ground law enforcement decisions in data, removing human error and bias. In practice, it is more complicated. The algorithms that drive these systems are complex and often proprietary. Watchdog organizations have repeatedly warned that predictive policing can lead to problems if not designed and used correctly.

As Purves explained it, these systems can have a major impact on how police departments operate. “Which form of the technology we adopt can really shape the form of policing that we end up adopting,” Purves said.

For example, a system that focuses only on geographic analysis could direct police activity disproportionately at low-income neighborhoods where property crimes are most likely to happen. That can aggravate relations between police and the community.

“This is exactly the kind of police work that a lot of protesters, and a lot of community activists, oppose right now, the kind of police work where they’re out in the community—in black communities—looking for people committing crimes,” Purves said. “Yet one could argue that this is exactly the kind of police work that this technology encourages. Once you’ve committed to using this technology, you have committed to this type of police work.”
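Critics often describe this dynamic as a feedback loop: patrols are sent where past incidents were recorded, and patrols themselves generate new records. The toy simulation below is a deliberately simplified sketch of that concern, not any vendor’s actual algorithm; the grid size, crime rates and detection probabilities are all invented for illustration. Even with an identical underlying crime rate in every neighborhood, ranking neighborhoods by recorded incidents steadily concentrates patrols in whichever cells happened to be patrolled first.

```python
import random

# Toy illustration of the feedback-loop concern (hypothetical parameters,
# not any real predictive policing system): every cell has the same true
# crime rate, but incidents are more likely to be *recorded* where a
# patrol is present, and patrols go to the cells with the most records.

random.seed(0)
CELLS = 10                # neighborhoods in a toy city
TRUE_RATE = 0.3           # identical underlying daily crime rate everywhere
DETECT_PATROLLED = 0.9    # chance an incident is recorded with a patrol present
DETECT_UNPATROLLED = 0.2  # chance it is recorded without one
PATROLS = 3               # patrols dispatched each day

recorded = [0] * CELLS    # recorded incidents per cell, the system's only input

for day in range(500):
    # "Prediction": patrol the cells with the most recorded incidents so far.
    hotspots = sorted(range(CELLS), key=lambda c: recorded[c], reverse=True)[:PATROLS]
    for cell in range(CELLS):
        if random.random() < TRUE_RATE:          # an incident actually occurs
            detect = DETECT_PATROLLED if cell in hotspots else DETECT_UNPATROLLED
            if random.random() < detect:
                recorded[cell] += 1

print("Recorded incidents per cell:", recorded)
# The first cells to be patrolled accumulate far more recorded incidents,
# so the system keeps sending patrols back to them, despite identical
# true crime rates in every cell.
```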

Trust and Policing

Predictive policing is still a young field, used in only a small percentage of America’s 17,000 law enforcement agencies. There is room for guidance, best practices and a deeper understanding of the relationship between police and community.

“We wanted to do a deep dive to understand the foundation—the moral foundation—for these criticisms,” Purves said. “This is what philosophers do. We want to get to the very, very root, the deepest explanation of the phenomenon in question. We want to get a better understanding of the relationship of community trust and policing.”

Purves has joined with Dr. Ryan Jenkins, an assistant philosophy professor at Cal Poly San Luis Obispo, and Dr. Juan Gilbert, chair of the Computer & Information Science & Engineering Department at the University of Florida, to convene a group of scholars across disciplines who can offer perspectives on predictive policing. The team includes criminologists, legal scholars, technologists and other experts.

The group will create an anthology of essays aimed at filling the academic gap in the field. They will also produce a syllabus for an undergraduate-level course. Perhaps most important, they will create a framework that police departments and others in government can use to evaluate the use of predictive technology.

“We promised to deliver a white paper report for use by policy makers and police departments in developing ethically informed best practices for the deployment of these technologies,” Purves said. “I think this is something that police departments and policy makers need.”