Sarah Walsh successfully defends her dissertation

November 2, 2023. Sarah Walsh successfully defends her Ph.D. dissertation, titled ‘Shared Mental Models in Human-Agent Team Decision Making’.
Abstract:
To effectively utilize AI decision-support tools, a human-AI team (HAT) must maintain a shared understanding of the environment, team goals, and problem constraints. Otherwise, human-AI team performance will not exceed individual performance. In this dissertation, we investigate an approach to the interaction between humans and agents via a Shared Mental Model (SMM). SMMs provide meaningful information to and about both humans and AI systems. Our goal is to demonstrate that joint human-AI systems that include an SMM increase accuracy and efficiency and reduce dissonance between human and AI systems in decision-making tasks. This work comprises two primary research thrusts: 1) developing methods to build accurate and useful SMMs in human-agent teams, and 2) developing metrics to quantify the impact of partial and complete SMMs in HATs. We assess the performance of these reduced-order SMMs by varying the correctness of task mental models and the completeness of team mental models to determine whether the onus of collaboration should fall primarily on the user, on the AI agent, or be shared in HATs. This dissertation contributes three primary findings: 1) SMMs in HATs improve decision-making accuracy and speed; 2) shifting the burden of team strategy between the user and the AI agent through unidirectional team mental models leads to both positive and negative impacts on team performance, while bidirectional team models improve team performance; and 3) AI decision support using novel methodology can infer user decision-making tendencies, abilities, and preferences. These and associated findings are then summarized as design recommendations for implementing SMMs in HAT dyads.

Congrats Dr. Walsh!