Models, Metrics, and a Mathematics of Interactional Trust for Humans and Automation

At the core of technological acceptance is human-machine trust and its fragility. In this project we have proposed, and are testing, models of human-machine trust that include its antecedents, such as faith in technology, familiarity, and situation awareness. Our research also incorporates expectations that the automation will be cooperative and will act ethically, legally, and in accordance with norms, drawing on fields ranging from human-robot interaction and game theory to social psychology, management, and information science. We are running experiments to demonstrate how this view of human-robot interaction (HRI) trust may account for trust differences between humans and robots, including trust's fragility. We have also shown how trust expectations translate into actual decisions through a mathematics of interdependence and commitment. To continue this line of inquiry and validate our trust models, we will test new trust survey instruments in human-subject studies in the lab and in the field.
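To make the ideas above concrete, the sketch below shows one simple way a trust model of this kind could be operationalized in code: trust is seeded from dispositional and contextual antecedents (faith in technology, familiarity, situation awareness), updated asymmetrically so that violations erode trust faster than successes rebuild it (capturing trust's fragility), and then thresholded into a reliance decision. This is only an illustrative sketch under those assumptions, not the project's actual model; all class names, weights, and parameters are hypothetical.

```python
"""Illustrative sketch of a trust-update model for human-automation
interaction. Not the CEC's actual model: the equal-weight blend of
antecedents, the asymmetric gain/loss update, and the reliance
threshold are all illustrative assumptions."""

from dataclasses import dataclass


@dataclass
class TrustState:
    """Trust in [0, 1], seeded from trust antecedents (assumed 0-1 scales)."""
    faith_in_technology: float   # dispositional antecedent
    familiarity: float           # experiential antecedent
    situation_awareness: float   # contextual antecedent
    trust: float = 0.0

    def __post_init__(self) -> None:
        # Assumed: initial trust is an equal-weight blend of the antecedents.
        self.trust = (self.faith_in_technology
                      + self.familiarity
                      + self.situation_awareness) / 3.0

    def update(self, automation_performed_well: bool,
               gain: float = 0.05, loss: float = 0.20) -> float:
        """Asymmetric update: a violation erodes trust faster than a
        success rebuilds it, one common way to model trust's fragility."""
        if automation_performed_well:
            self.trust = min(1.0, self.trust + gain * (1.0 - self.trust))
        else:
            self.trust = max(0.0, self.trust - loss * self.trust)
        return self.trust

    def rely_on_automation(self, threshold: float = 0.5) -> bool:
        """Toy decision rule: rely on the automation only when current
        trust exceeds a reliance threshold."""
        return self.trust > threshold


if __name__ == "__main__":
    operator = TrustState(faith_in_technology=0.7,
                          familiarity=0.5,
                          situation_awareness=0.6)
    # A single automation failure pulls trust down more than a success lifts it.
    for outcome in [True, True, False, True]:
        t = operator.update(outcome)
        print(f"performed_well={outcome!s:5}  trust={t:.2f}  "
              f"rely={operator.rely_on_automation()}")
```

Running the example shows trust climbing slowly over successful interactions and dropping sharply after a single failure, which is the qualitative fragility pattern the paragraph above describes; a validated model would instead estimate the antecedent weights and update parameters from the survey instruments and human-subject data mentioned in the project.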
