Reigniting Trust

The Role of Technology in Police Reform

When departments reach for algorithmic technology, what questions should they be asking?

Duncan Purves

Duncan Purves is an Assistant Professor of Philosophy at the University of Florida, specializing in ethical issues concerning artificial intelligence. The interview has been edited for length and style.  

 Tell us a little bit about yourself and your work.

I have a Ph.D. in philosophy, trained in what we call the Western analytic tradition. My training was in theoretical issues that philosophers like to spend time on. But I turned my attention to technology as I was finishing my Ph.D. One of my peers at the University of Colorado got an email from a previous collaborator, basically just asking him if he had heard of or thought of any interesting new angles on ethical issues pertaining to autonomous weapon systems. These are like drones, but they do the targeting and the killing for you. They don’t have a human operator, or they don’t have to have one. He and I sat down, and we started thinking about this stuff. I realized that a bunch of the lessons I had learned, a bunch of tools from my abstract theoretical work on moral philosophy, could be brought to bear on these questions. From there, I published a number of articles on those issues.


You’re exploring ethical issues around these systems?

Yes. We’re interested specifically in the ethical issues. There are all sorts of interesting legal and constitutional questions that intersect with and sometimes overlap or are grounded in the ethical questions. But as a philosopher, I’m not an expert in law. I’m more interested in questions of ethics. One of the big questions that I’ve talked about in a couple of papers is: Can we trust autonomous weapon systems to make the morally correct decision? Can we ever expect them to have the capacity to reason morally in ways that we would rely on?

That got me into what we call the ethics of technology. That’s an emerging field of philosophy. I had been thinking about the ethics of racial profiling and I came across a book by Andrew Ferguson. He has a great book called Big Data Policing. It’s one of the first books to address some of the constitutional issues that arise for the use of big data analytics in various policing technologies. I saw a lot of overlap between the debates I had been working on and the issues arising for big data technology in policing. So, that got me started down this road. I started thinking about whether there was a bigger project here.

There is this emerging cluster of what I call artificial intelligence technology in policing, and almost all of the stuff that I’m seeing on these issues is in newspapers, magazines, and law journals. I think there’s space for a philosopher, somebody trained in ethics, to make some headway on these issues.

Tell us about your research.

The main subject of our project is predictive policing, the use of algorithms to forecast where and when crime is likely to occur, and sometimes by whom.

One of the major concerns that people have is that citizens can’t know why these algorithms make the determinations that they do, because either the processes used are too complex … or they’re protected by intellectual property. So, people—activists, journalists—have raised concerns. There’s also a concern that these systems can reinforce bias in policing practices. They can take biased policing data and use that data to generate biased forecasts. Then they send more police [to certain communities], reinforcing a cycle of bias.

We realized there’s quite a bit of work to be done in articulating exactly why these people are concerned, and whether these are sound criticisms of these systems. There’s just not a whole lot of academic literature on this stuff. A lot of it is coming out of community organizations, right? They are already suspicious of the police.

We wanted to do a deep dive to understand the foundation—the moral foundation—for these criticisms. This is what philosophers do. We want to get to the very, very root, the deepest explanation of the phenomenon in question. We want to get a better understanding of the relationship of community trust and policing. We wanted to understand in a deeper way what trust of police is, what it means for the community to trust police.

Does trust matter just because it helps them do their job, or is there something more fundamental? Is community trust in some way fundamentally important for establishing the legitimacy, or the authority, of a police department? That’s the deeper question about legitimacy. Getting back around to algorithms, I think it’s important to ask how the use of algorithmic technology—non-transparent algorithmic technology—could undermine community trust in law enforcement.

These are legitimate questions about trust. I think the clearest intersection between my work and the consortium on trust is that we both want to better understand the ways in which people’s trust matters: the way trust matters fundamentally to our public institutions, but also the way technology can affect the trust between citizens and criminal justice institutions.

Tell us about your project with the National Science Foundation.

We’re developing a kind of framework for comprehensive ethical assessment of the development and deployment of algorithmic criminal justice technology. But we also want this framework to be put to use.

In developing this framework, we’re speaking with a bunch of experts from outside of philosophy, because philosophers really only know so much. We’re talking to legal experts who have written on predictive policing. We’re talking to people working on issues of bias and fairness from the computer science perspective. We’re talking to people working in criminology who have studied these technologies through the lens of their own discipline. We’re bringing these experts together as consultants.

We promised to deliver a white paper report for use by policy makers and police departments in developing ethically informed best practices for the deployment of these technologies. I think this is something that police departments and policymakers need. For example, the Los Angeles Police Department received an outrageous amount of criticism for their use of predictive policing technology. They even stopped using it just this April.

We also plan to produce an edited volume, more geared toward academic and teaching purposes. It’s a collection of essays from different disciplinary perspectives on the ethics of predictive policing but also algorithmic technology more generally in criminal justice. I think that would be the first of its kind.

Tell us how your work will illuminate the issue of trust.

As a philosopher, I’m much more interested in understanding not just what sorts of things influence community trust, but what community trust consists of. What is it to trust law enforcement? What is community trust? How are criminologists using the term? And also, why does it matter? This is something that philosophers like to ask. Why does trust matter? There’s the instrumental sense in which you might think it matters. It matters because it helps the police do their job better. Citizens are more cooperative with police if they trust them. But also, I think there’s a question about … whether trust matters in legitimizing the police. That is, whether an untrustworthy police department could even be a legitimate authority.

The nation is going through a major discussion about the role of police right now. To what extent does predictive policing belong in this conversation?

Obviously, concern about police brutality toward minorities is a conversation that has been unfolding for decades. Now, the momentum is picking up in a way never seen before. The cultural outlook, or the larger social conversation about policing, might be in the middle of taking a radical shift right now. Here’s what I would say about where policing technology fits into this: I do think that it is an important part of the conversation about police reform and about community police relationships. Technology can have a deterministic quality, where the kind of technology you adopt actually ends up shaping your priorities.

For instance, certain forms of predictive policing are much more fraught than others. Take LAPD’s system. All it does is forecast when and where crime is going to occur. It doesn’t do anything to look at the underlying causes of crime or the underlying features of an area that would make it prone to crime. Because it doesn’t do this kind of in-depth diagnosis, it really does lend itself to a kind of pre-emptive patrolling approach to policing. What you need is police on the ground, in their cars driving around in these high-risk areas. You need them looking for crime.

Now, this is exactly the kind of police work that a lot of protesters, and a lot of community activists, oppose right now, the kind of police work where they’re out in the community—in black communities—looking for people committing crimes. Yet one could argue that this is exactly the kind of police work that this technology encourages. Once you’ve committed to using this technology, you have committed to this type of police work.

I think the discussion about the technology itself does matter, insofar as it actually shapes the way that police work is done.

Is that inevitable? Does it have to be that way?

There are alternative types of predictive policing technologies. There’s something called Risk Terrain Modeling. Like other predictive policing tools, it forecasts high-risk places. But it’s a much more in-depth analysis of the features of a place that make it vulnerable to crime. It does a kind of big data analysis of the very specific features of a neighborhood where lots of crimes are committed. And then, it will provide a report to police diagnosing the features of that place that make it risky. For example, places with dim lighting or abandoned buildings.

By adopting this type of technology, police departments might be more inclined to do what’s called a problem-oriented approach, where they actually look at the underlying causes of crime in an area and try to tackle those causes, say by demolishing abandoned buildings or replacing dim street lighting and so on.

Which form of the technology we adopt can really shape the form of policing that we end up with.

And that’s why it matters in the current conversation? 

Yes. I definitely see the nexus there. If departments are going to be adopting this type of technology, they’re essentially buying into a system of policing, right? If you buy into what LAPD bought into, you’re going to be sort of allocating resources based on where the crime has been and you’re not necessarily looking at root causes.

How does the technology relate to trust?

By understanding what trust is, what trust in public institutions is, we can begin to ask, “What are the features of technology that might undermine trust?” If being trustworthy requires that you act on behalf of certain individuals, it requires signaling to the people who are relying on you that you are reliable. If that’s a component of being trustworthy, then I think we do arrive at certain concerns about technology and the way technology can undermine trustworthiness. Take transparency, for example. If those algorithms are not transparent to the citizen, and not even transparent to the police officers using them, that really does undermine the ability to send that signal to citizens.

What I’m trying to do is explain in fundamental terms why it is we might have a concern about trust in relation to the use of algorithms. I want to identify clearly and in fundamental terms what the concern about trust is. I think that’s a valuable contribution to the academic and public discussion.