Reigniting Trust

Human-Centered Computing

From accessible voting machines to unbiased AI for college admissions, Juan Gilbert helps create technology that could increase trust. 

Juan Gilbert

Juan E. Gilbert, Ph.D., is Chair of the Computer & Information Science & Engineering Department at the University of Florida, where he leads the Human-Experience Research Lab. His research is focused on human-centered computing, artificial intelligence, and machine learning. The interview has been edited for length and style.

Please tell us a little bit about yourself and your work.

I started at Auburn University and was there for nine years, going up the ranks to full professor, and then left and went to Clemson University. I was at Clemson for five years, then I left. I have been at the University of Florida for six years.

My area of research is human-centered computing. It deals with people, technology, sometimes policy, culture and things like that. We take a very applied approach to research. We deal with real-world problems in real-world contexts, and we solve them by integrating people with technology. I do work in artificial intelligence, in particular dealing with bias or what I call “equitable AI,” looking at things that make AI better.


Tell us how you got connected with the Consortium on Trust in Media and Technology and what you do there.

The Consortium is directly related to work that I do in human-centered computing, mostly in the area of election technology. I do work in elections where we build technology to make voting more secure, accessible, and usable. As you can imagine, in elections, trust is front and center, and the work we do in that area is directly relevant to how we build trust in elections. I created an open-source system that has actually been used; it is the only one in the country that has ever been used in state, federal and local elections.

In 2000, there was a presidential election and we had no clear winner. Congress said, “we have got to do something about this.” They enacted the Help America Vote Act, and within it was a provision saying that every voting precinct had to have at least one accessible voting machine. That made me think they got it wrong: they were creating a separate-but-equal system for voting, where people with disabilities would have to do things differently. So we created Prime III.

Prime III is an open-source voting system. We created the first version in 2003. The design followed what we call universal design, meaning it was designed with the intent of broader use. People with disabilities and everybody else can use it: people who cannot see, cannot hear, cannot read, people without arms are all able to vote on the same machine as anyone else. It was one machine for everyone. So we created it, we got funding, and then we did a pilot test and all kinds of studies to show that it works. The nation’s largest voting machine manufacturer created a machine modeled on our technology. So we changed voting in the United States.

You helped create a machine that literally any voter can use. Talk a little bit about the impact on trust.

Yes, that is a perfect question. So let me tell you about trust. When I created the technology, people would say, “Oh, this is a good idea. Who owns it?” I would say, “I do.” Then they said, “Oh, you do? How do you vote?” I started saying, “it is open-source, it is in the public domain,” and no one asks me how I vote anymore. That is one reason we never patented the technology: ownership creates a trust problem. If you own it, people feel they need to know where you stand politically. We avoided the whole trust issue simply by making it open-source and saying, “Look, you do not trust me? Here, I can give you the code.” That made a world of difference. That was the biggest trust hurdle we overcame in that project.

You said the minute that you made your system open-source, it eliminated the trust issue. Why do you think that is? What is it about this that makes people trust it if they can see under the hood?

Well, that is a good question. I think it has more to do with the fact that, number one, voting is high stakes. When the stakes are that high, there is a lot of motivation to influence the outcome. So that is where it starts, and when you say, “Look, I want to be a hundred percent transparent, and nothing is more transparent than giving you the code,” that takes the issue away because I am not hiding anything. Part of building trust is transparency: having nothing hidden or suspect. We eliminated that concern and it helped.

Where do you think America stands in terms of trusting electoral outcomes?

Oh, there is a lot of distrust here. You see it all over the place. There is this level of distrust now where, if my person wins, the election is fine, and if my person loses, the election is bad.

Let’s talk about AI. Tell us about the kind of work you have done in AI and how that relates to trust.

The biggest project I have right now is another long-term project, also dating back to 2003. The United States Supreme Court ruled on a case involving the University of Michigan, saying the university discriminated against applicants based on race. The Court said, “You can use race, gender, and national origin in admissions decisions. However, you cannot give applicants preferential treatment by awarding points for those attributes.” They said it had to be done in a holistic way. Well, there was no way to really do that holistically and still have evidence that you did not favor race or some other attribute.

What I did was write software, an AI, that can actually do that. It can do a holistic comparison between applications and, at the same time, increase diversity while maintaining quality and standards. So it is an AI that is used in admissions to increase diversity within a set of standards. We use it here at the University of Florida for our scholarships.

Explain how that beats the standard admissions process. What is it about the AI that makes for a better process?

We actually did studies where we would go to a university and say, “Give me your applications from last year, put them in a spreadsheet, and at the end of that spreadsheet add a column telling me whether you made an offer to that person.” Then I would take that data, run it through Applications Quest (that is the tool) and compare the results. What I found in every single instance, and I ran over two dozen of these comparisons, is that the tool beats the admissions committee with respect to producing a more diverse recommendation, and does it in a fraction of the time, while matching the committee’s academic achievement level. The reason it can do that is that humans cannot see as many variables at once as the software can. We can zero in on one variable and say, “Okay, sort the spreadsheet by race, and let me see what it looks like.” But the software can see everything at once. That is the difference.
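To make that concrete, here is a toy sketch of software weighing every attribute of every application at once, where a committee sorts one column at a time. This is not Applications Quest’s actual algorithm; the attribute names, the GPA floor, and the pairwise difference score are illustrative assumptions only.

```python
# Toy sketch only: NOT Applications Quest's actual algorithm. The attribute
# names, GPA floor, and scoring rule are illustrative assumptions.
from itertools import combinations

applicants = [
    # (id, GPA, attributes to be weighed holistically)
    ("A", 3.9, {"region": "urban", "major": "CS",   "first_gen": True}),
    ("B", 3.8, {"region": "rural", "major": "EE",   "first_gen": False}),
    ("C", 3.9, {"region": "urban", "major": "CS",   "first_gen": False}),
    ("D", 3.7, {"region": "rural", "major": "Math", "first_gen": True}),
]

def difference(a, b):
    """Count the attributes on which two applicants differ."""
    return sum(a[2][key] != b[2][key] for key in a[2])

def cohort_score(cohort, min_gpa=3.7):
    """Sum pairwise differences across the cohort, gated by a GPA floor."""
    if any(gpa < min_gpa for _, gpa, _ in cohort):
        return -1  # fails the academic standard outright
    return sum(difference(a, b) for a, b in combinations(cohort, 2))

# Consider every possible pair of admits and keep the most
# attribute-diverse cohort that still meets the academic threshold.
best = max(combinations(applicants, 2), key=cohort_score)
print([applicant_id for applicant_id, _, _ in best])  # ['A', 'B']
```

Even this naive version considers every attribute of every candidate pair simultaneously, which is exactly what a person scanning one spreadsheet column at a time cannot do.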

Some automated systems have come under scrutiny. Predictive policing is an example: if the data that is input has a bias, then the output is going to have a bias. Is that a concern in this arena?

It is absolutely not a concern. We actually have a project in predictive policing as well, but here is why it does not matter in this case. You have to understand how the different types of AI work. Predictive policing uses what is called a “supervised learning” approach. That means the system takes examples from the past and learns from them to build a model; then you put something new in, and it makes a decision based on what it has seen in the past. That is how predictive policing works.

Applications Quest uses what is called an “unsupervised learning” approach. That means it does not have a history. It takes the applicant pool at that time and makes decisions based on that, without any historical reference. So there is no bias based on historical context; it does not have that issue. It uses only what it is given to make decisions, meaning the applications.
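To illustrate the distinction Gilbert draws, here is a minimal sketch using scikit-learn as a stand-in; the features, labels, and models are made up for illustration and are not the actual predictive-policing or Applications Quest systems.

```python
# Sketch of the supervised/unsupervised distinction, with scikit-learn as a
# stand-in. Features, labels, and models are made up for illustration; this
# is neither the predictive-policing system nor Applications Quest.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Supervised: learns from labeled history, so any bias in the past
# decisions (y_hist) is baked into the model it builds.
X_hist = np.array([[3.9, 1400], [3.2, 1100], [3.8, 1350], [2.9, 1000]])
y_hist = np.array([1, 0, 1, 0])           # past admit/deny decisions
model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)

X_now = np.array([[3.7, 1300], [3.1, 1050], [3.6, 1250]])
print(model.predict(X_now))               # echoes historical patterns

# Unsupervised: no labels and no history. It only structures the pool in
# front of it, so there is no past decision to inherit bias from.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_now)
print(clusters)                           # groupings within today's pool
```

The supervised model can only echo whatever pattern, fair or biased, is encoded in the historical labels; the clustering step has no labels at all, which is the point.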

Tell us a little bit more about how this system impacts trust. Is it your sense that AI can bring a higher level of trust to the process?

There are multiple trust factors here. The first one is, when I started the project and walked into an admissions office, people thought I was the devil. They said, “Oh, look, he has come to take your job. He has the technology that is going to replace you.” The truth of the matter is that it is nothing of the sort. It is no different from people using a spreadsheet; it is a tool that enables you to do your job more accurately. So I had to get over that hurdle and create that trust.

Then you get to trust in the decision. Applications Quest can build trust there too, but in a slightly different way. Schools are being sued because people say they gave preferential treatment to minority candidates in admissions. People say, “How can you prove to me that the process you used did not favor this person’s race?” Trust breaks down because you cannot provide evidence that you did not do something wrong.

With this tool, I actually can. I can prove to you that the tool did not give preference to any applicant by race or any other attribute in question. I am hoping that this is going to create trust, because I can provide evidence that it is not biased. But here is a hurdle I am dealing with: I have technology that can give you evidence that it is not biased, but sometimes people do not want that. They like the possibility of bias, because they can argue their own case better that way. So it is a double-edged sword. To answer your question, this can help alleviate issues of trust because it can provide evidence that bias was not used. However, that does not always make the plaintiff feel any better.
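One way such evidence could be produced, offered here purely as an illustration and not as Gilbert’s actual method, is an invariance audit: shuffle the attribute in question across applicants and confirm the selection never changes.

```python
# Illustrative only: one way evidence of non-preference *could* be produced.
# This invariance audit is an assumption for illustration, not the evidence
# Applications Quest actually generates.
import random

def audit_invariance(select, applicants, attr, trials=100, seed=0):
    """True if shuffling `attr` across applicants never changes the result."""
    rng = random.Random(seed)
    baseline = select(applicants)
    for _ in range(trials):
        shuffled = [dict(applicant) for applicant in applicants]
        values = [applicant[attr] for applicant in shuffled]
        rng.shuffle(values)
        for applicant, value in zip(shuffled, values):
            applicant[attr] = value
        if select(shuffled) != baseline:
            return False  # some permutation changed the outcome
    return True

# A selector that ranks purely on GPA passes the audit for "race",
# because that field never enters its decision.
def top_two_by_gpa(apps):
    top = sorted(apps, key=lambda a: -a["gpa"])[:2]
    return sorted(a["id"] for a in top)

pool = [
    {"id": "A", "gpa": 3.9, "race": "x"},
    {"id": "B", "gpa": 3.5, "race": "y"},
    {"id": "C", "gpa": 3.8, "race": "z"},
]
print(audit_invariance(top_two_by_gpa, pool, "race"))  # True
```

A selector that actually used the attribute would change its output under some permutation, which is what makes a passing audit count as evidence.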

Tell us a bit about the general relationship between technological solutions and trust. Can technology increase trust universally?

Not really. I would not say universally, no. It is too easy to scrutinize technology, particularly if you are ignorant of how it works. Mail-in voting is accused of being fraudulent because people do not understand how it works. It is easy to make those claims and create fear and distrust. The same applies to any technology people do not understand.

There are scenarios where it can increase trust. When technology can be used and understood, then you have an opportunity to create trust. There has got to be a level of understandability, usability and comprehension. If you could create a technology that creates world peace, but no one could use it, it would be useless. The easiest way to create distrust is to make something hard to use.

So technology can increase trust but only in very specific circumstances where usability and understanding are high.

Right, that is what I would say. You have got to have usability and understanding across all the relevant stakeholders. That is exactly why we do human-centered computing. It has got to be usable. Even if it can solve a problem, if it is not usable, it is going to be useless.

Do you think that the technology sector as a whole needs to put more emphasis on usability?

Yes, and you will see more of that. We definitely have to explain the technology and make it usable. If you can make it user-friendly in the sense that it tells you how to use it, so it is self-explanatory, those are the greatest solutions.

Do you think that people still inherently distrust technology, or do you think that that is changing?

I think there is some change, and I give a lot of credit to the mobile phone and apps. Inherently, people do get a little cautious about new technology; that is just natural. It also depends on the context. I am working with politics and admissions decisions. These are high-stakes things, and people really, really care about the outcomes. With those kinds of things, you are going to have some distrust. How do you get rid of it? That is where the technology piece can help eliminate some of that distrust.