TRUST SCHOLAR Q&A
Jasmine McNealy: It’s time to take a hard look at how we regulate data
Jasmine McNealy, Ph.D., is an associate professor in the Department of Telecommunication at the University of Florida and a lawyer. She studies law and policy around the impact of technology on information and communication. The interview took place in July 2020. It has been edited for length and style.
Tell us a little bit about yourself.
So I have a law degree and a Ph.D. in mass communication, with an emphasis in media law. My dissertation was what you might call an old-school topic. I was looking at what happens to journalists when they receive unlawfully acquired information, and what the courts say.
Stolen work papers, wiretap recordings—what happens when journalists are involved in the actual acquisition, or when somebody mails the material to them? Most of the cases dealt with privacy issues, moving away from actual paper into technical privacy, data privacy—using the latest devices for video recording, using computers, hacking into people’s accounts, stuff like that.
From there, I moved into the privacy of emerging technologies, technologies that collect a massive amount of information, and how that information is used in systems like machine learning and artificial intelligence. I was interested in what those things mean, particularly in marginalized and vulnerable communities, whether it is poor people, disabled people, people of color, whatever the case may be. How do we need to regulate data collection, artificial intelligence and machine learning systems in order to provide adequate protection?
Tell us about your work with the Consortium on Trust in Media and Technology.
I’ve been thinking about the need to rethink data, data collection and usage to mitigate the harm that can come about. That’s a trust issue. It is about how policymakers—that includes legislators but also policymakers within organizations—conceptualize data and what they do with data.
All of these organizations, whether it is corporations, government agencies, organizations like hospitals and educational institutions, are involved in collecting data and they don’t always have a rationale. They don’t have holistic ways of dealing with that data. They don’t have solid ideas about why they need all that data, or any idea about the possible implications.
What happens if somebody hacks into the system? What happens if they do not secure it correctly? What happens when it is an internal threat, something they decide to do? The U.N. High Commissioner for Refugees was collecting biometric data from refugees and storing it. The question is, “Why do you need biometric data?” For what? Do you need that to resettle poor people? What happens when this data is used against them? What happens when someone uses your system for a purpose other than the good purpose you intend? This is very dangerous—and invasive. It has the possibility to harm people.
We need to question how we think about data, change our relationship to it, and rethink what organizations need to function.
You conduct research and study these issues. Tell us about your research.
Yes. I presented a draft of a paper related to data at the Privacy Law Scholars Conference earlier this year. People would like to think of data as property. I would say that is a really bad idea. One of the reasons is that property can be taken away from you. You can lose your property. Do you want people to lose themselves, or lose the rights to themselves? Most would say it is a bad idea.
There’s a documentary called Mucho Mucho Amor. It is about Walter Mercado, a famous television psychic in Latin America. He disappeared for a long time. People thought he was dead, but he had lost the right to his own name. This is the kind of thing I am talking about.
You can lose the right to yourself, or to how you are represented, even if that representation is being used in aggregate. Organizations can use our likes and dislikes to make predictions about us. “She is going to pay back her loan,” or “he is not a good hire because he does not fit,” those kinds of things. So I say data should not be seen as property; we should look at it as a network representation or observation. Why? Because “network” means it is connected to other people, and “representation” because it is a snapshot of a person at a certain point in time.
That changes how we have to deal with data and with law or policies, and how organizations have to deal with it. We are talking about humans. Humans deserve a higher level of care. They are not some abstract idea.
That is true. Tell us more about the concept of a network representation or observation.
Sure. Are you familiar with DNA mapping? There are data analysis tools like 23andMe and GEDmatch. The GEDmatch database was actually used by law enforcement to catch the Golden State Killer. The reason I bring that up is that the Golden State Killer himself was smart enough not to submit his own DNA. A relative submitted her DNA.
But her DNA is not just her. Her data is also the data of her relatives. Why? Because they share DNA. That is a network of observation. There are a whole lot of other people implicated.
Do you have recommendations from a policy standpoint?
We need to think about what network governance requires. It is both policy recommendations and a research agenda. We need to study more about how institutions affect representation.
I am looking at what I am calling information distribution organizations: all the organizations that use the technologies connecting us, that we use all the time, that are deployed on us, that are involved in collecting or distributing or holding data. Think about the Googles and the Facebooks, Uber and Lyft and the other organizations. They are all collecting data. They are distributing it. They are spying as well. We need to look at how these organizations have shaped the landscape.
I think that there is a need to have data privacy, and to settle data privacy law ASAP. That needs to happen. What is in the law should change business models. It would also change how people think about innovation.
How far down this road is the federal government?
There have been proposals in both chambers of Congress. What they need is the political will to just say this is necessary. That is the case in Europe. Data transfers between the EU and the U.S. cannot just happen anymore; there has to be case-by-case permission, which slows them down. We rely on data transfers, so that can harm U.S. business. But the U.S. has not stepped up to demonstrate, either to the EU or to other countries, that there is a data policy at a nationwide, federal level.
Let’s come back to the issue of trust. How does the current climate impact trust?
Yes. I think most people have very low trust in tech organizations. I think they barely trust the government—maybe local. Trust in local government is quite a bit higher than it is at the federal level. But all of this is connected. I want to give policymakers the tools to shift the trust issue. The way to do that is by creating policy and rethinking how they think about data collection and its use with respect to technology.
I am talking about legislators but also policymakers within an organization, the folks who create the privacy policies or the product managers who decide how much data is collected and how it is going to be used. They need to change how they are thinking about data and the possible impact on people and communities. That is a trust issue. You are not going to get certain communities to trust you if you are deploying what are basically surveillance technologies that are not inclusive or that actively cause harm.
Rethinking the rules and the frameworks for those things, rethinking even how the government thinks about how people actually interact with technology, is important because the law is very normative.
Are you specifically thinking about low-income communities or minority communities?
I am concerned about marginalized and vulnerable communities. It is people of color—I am Black—it is women, it is poor people, it is older people. It is people who are neglected, who are excluded or who are not thought about. People are designing technologies that sort people in pathological or discriminatory ways.
Let’s use the criminal justice system as an example. We know that our criminal justice system in the United States has a racism problem. If we have this huge data set, what is it going to say? It is biased data. It is data that is corrupted. If you train a system on that corrupt data, what kind of outcome do you think it is going to have? Usually, it is going to be a biased system. The idea of machine learning systems and AI is supposedly to take away human bias, right? But if you use discriminatory data, that discrimination is still there.
You mentioned that trust in technology is low and yet so many people use Facebook, trading the rights to their data for the right to engage on the platform. Is that trade going to erode trust over time?
I do not think people willingly trade it. But it is hard to get away from. If you do not participate, you get news later. People are just trying to participate in the way that they can. It is not necessarily that people want to trade their data for connection, for community, for socializing. It is just that this is a system that has not been regulated like it should be. I think people do not trust Facebook, or they trust Facebook for the purposes they need but not for other purposes. They do not necessarily trust the organization, so they use work-arounds to still be able to connect and socialize.
Can companies like Facebook or Google regain trust or do you think that ship has sailed?
For huge companies like Facebook and Google, maybe one of the ways that they could possibly gain trust from people is to be broken up into smaller organizations.
To what extent do we have an obligation to protect our own data, as opposed to having the government step in and do it for us?
I do not believe in saying that people should just protect themselves. You cannot protect yourself when you have no power. These corporations, these organizations and the government have way more power than you and I could ever have.
People have a tendency to protect themselves. But even if you are not on Facebook—and you have never been on Facebook—they may have a file on you because you may have shown up in a photo that your friend posted. There is a huge power imbalance between an individual and a mega-corporation. The onus could never be on the individual. It does not work like that.