The Consortium conducts original research across all areas of trust in media and technology, including the role of social media, the relationship between cognitive and emotional influences, and the relationship between humans and technology. It also serves as an aggregator of research in this arena.
We intend to encourage and support a new generation of scholars and leaders through infrastructure and pilot funding for research. These strategic initiatives lay the foundation for an ecosystem of collaboration and mutual discovery that benefits researchers whose work already touches on issues and circumstances related to trust. Research funds will reward collaboration.
We also intend to build a curriculum that will develop the next generation of scholars and professionals who are focused on trust, and will encourage undergraduate research as well as research at the master’s, doctoral and post-doctoral levels. New graduates might continue their work in either academia or industry, engendering a sensibility to matters of trust and trustworthiness that is essential to both business success and a healthy democracy.
We will organize mini-symposia to focus on specific aspects of trust scholarship and its translation to societal application. Once a year, we will organize the Summit on Trust, a high-visibility event that will serve as a forum for the exchange of ideas and solutions among the Consortium members and broader academic, industry and community stakeholders.
Here are just a few examples of the more than 100 research projects related to trust that have been conducted at the University of Florida over the last six years. Each of these researchers is an enthusiastic participant in the Consortium concept, and their work is representative of other investigations under way at UF:
Kevin Butler, Ph.D., Computer & Information Science & Engineering 
As news sources become increasingly decentralized and trust in existing organizations erodes, we are left with difficult questions about how to re-establish trustworthiness. Dr. Butler’s research focuses on how to establish the trustworthiness of data and how to assure that it remains trustworthy from the time it is generated to the time it is used as a fact during reportage. This research could be helpful for the news media: demonstrating the trustworthiness of data from the time it is generated to when it is processed and finally reported on, and establishing a verifiable proof of that trustworthiness, could provide a formal means of demonstrating trust.
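Dr. Butler’s actual systems are far more sophisticated, but a minimal sketch of the underlying idea is a hash chain: each processing step of a data record stores the hash of the step before it, so any later tampering with the record’s history is detectable. All names and values below are illustrative, not taken from his work.

```python
import hashlib
import json

def chain_record(prev_hash: str, payload: dict) -> dict:
    """Append a data record to a tamper-evident hash chain.

    Each record stores the hash of its predecessor, so altering any
    earlier record invalidates every hash that follows it.
    """
    body = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    return {"prev": prev_hash, "data": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(records: list) -> bool:
    """Recompute every hash and check that the prev-links line up."""
    prev = "genesis"
    for rec in records:
        body = json.dumps({"prev": rec["prev"], "data": rec["data"]},
                          sort_keys=True)
        if rec["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

# A record's history: sensor reading -> processing step -> published figure.
chain = []
prev = "genesis"
for step in [{"stage": "collected", "value": 41.7},
             {"stage": "cleaned",   "value": 41.7},
             {"stage": "reported",  "value": 41.7}]:
    rec = chain_record(prev, step)
    chain.append(rec)
    prev = rec["hash"]

print(verify_chain(chain))          # True: chain intact
chain[1]["data"]["value"] = 99.9    # tamper with an intermediate step
print(verify_chain(chain))          # False: tampering detected
```

A newsroom checking such a chain could verify, step by step, that the figure it is about to report is the same one that was originally collected.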
Juan E. Gilbert, Ph.D., Computer & Information Science & Engineering 
Dr. Gilbert is conducting research on the design, implementation and evaluation of human-centered computing (HCC) systems, in contexts including voting, automobiles/transportation and others. Within the voting and elections context, his team is interested in designing, building and evaluating technologies for their accuracy, security, usability, accessibility and privacy. In the automotive context, their work aims to address driver distraction, privacy and security. Additionally, they work on AI and machine learning algorithms in specific contexts such as hiring and admissions decisions, with an interest in privacy and bias in those algorithms. Finally, they design persuasive interfaces that influence use; because such interfaces may violate privacy, they are also interested in privacy-preserving interfaces.
Myiah Hutchens, Ph.D., Department of Public Relations 
Part of understanding what individuals judge as trustworthy, especially in an emerging media context, requires that we understand how individuals process information. Dr. Hutchens’s research focuses explicitly on understanding how people respond to political information they disagree with in a variety of contexts. This raises questions such as what variety of information people encounter in social media contexts, and how that information shapes their reactions to different online news outlets. Her research has examined the consequences of this information seeking and processing on our levels of political polarization and knowledge, both of which are inextricably linked to perceptions of trustworthiness.
Ben Lok, Ph.D., Computer & Information Science & Engineering 
Dr. Lok’s research group studies how people react to virtual environments and virtual humans. With computing technologies capable of presenting realistic, authorable virtual places and virtual people, it will be critical to study, educate about, and shape the impact that virtual technologies have on trust in media and technology. In the near future, it will be possible to generate media that present a life-like version of any person saying and doing anything. How this capability will shape society, and how the public will develop trust in the media presented to it, are profound questions.
Frank Waddell, Ph.D., Department of Journalism
Dr. Waddell’s program of research explores how the use of automated news affects perceptions of news bias, particularly for news that is politically polarized. Results from this program of work have begun to reveal that skeptical audiences are more trusting of news when journalists use automation to supplement their work. Automated journalism may thus hold promise for rebuilding trust in journalism, especially among audiences who are prone to distrust news that is inconsistent with their personal or political beliefs.
Daisy Wang, Ph.D., Computer & Information Science & Engineering 
Dr. Wang’s current interest related to Trust in Media and Technology centers on developing technology that provides transparency to counter bias in data and algorithms. She currently has two related projects:
- Query Answering with Alternatives: Generating multiple hypotheses to answer queries over a knowledge graph automatically constructed from news and social media, which contains alternative interpretations of events, situations, and trends from sometimes noisy, biased, subjective, conflicting, and potentially deceptive information environments. (DARPA AIDA)
- Explainable Prediction Analytics: Performing prediction and analytics, such as link prediction, using explainable machine learning models over knowledge graphs.