Shared Human-AI Mental Models in Flight Planning

Shared mental models play a crucial role in many aspects of teamwork. They are more than an understanding of the state of the external world; they also capture who within a team has the ability to perform certain tasks and the responsibility to see that they are performed correctly. Significant research has examined shared mental models in human-centric teams, where computational agents are relegated to low-level algorithms or simple automated systems. Less is understood about the importance of, and the mechanisms needed to create and maintain, a shared mental model between humans and more sophisticated automation, i.e., autonomy, particularly the autonomy found in learning agents powered by AI or machine learning. In such cases the shared mental model must exist in both the human mind and the agent's memory structures; it must be updatable, with changes communicated in both directions; and it must be used by the autonomous agent to reason and make decisions.
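For illustration only, the requirements above (a model held in the agent's memory that records who can perform and who is responsible for each task, accepts updates from either teammate, queues changes for communication, and supports the agent's reasoning) could be sketched as a minimal data structure. All names here are hypothetical, not part of the project:

```python
from dataclasses import dataclass, field

@dataclass
class TaskBelief:
    """Who the team believes can perform a task, and who is responsible for it."""
    able: set
    responsible: str

@dataclass
class SharedMentalModel:
    """Minimal shared mental model: world state plus beliefs about team tasks."""
    world_state: dict = field(default_factory=dict)
    tasks: dict = field(default_factory=dict)        # task name -> TaskBelief
    change_log: list = field(default_factory=list)   # updates queued for communication

    def update(self, source, key, value):
        """Record a change and queue it so the other teammate can be informed."""
        self.world_state[key] = value
        self.change_log.append((source, key, value))

    def responsible_for(self, task):
        """Used by the agent when reasoning about who should perform a task."""
        return self.tasks[task].responsible

# Both the human and the agent would hold a copy; updates flow in both directions.
smm = SharedMentalModel()
smm.tasks["plan_route"] = TaskBelief(able={"human", "agent"}, responsible="agent")
smm.update(source="human", key="weather", value="storm ahead")
```

The change log stands in for whatever communication channel actually carries updates between teammates; the open research questions concern how such updates should be generated, presented, and incorporated, not the container itself.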

This research will focus primarily on understanding the mechanisms by which a shared mental model could be created and changes passed from an autonomous agent to a human. While we will make suggestions for how to incorporate information back into the autonomous agent, we will not prototype or demonstrate that capability. The significance of this research is twofold: 1) developing an understanding of how shared mental models can improve human-autonomy teaming by facilitating collaborative judgment and shared situational awareness, and 2) informing human-autonomy interaction more broadly by building our knowledge of the role that shared mental models play. This knowledge is especially needed as many domains shift the human from being "in-the-loop," i.e., a critical pathway in the decision making and control of a system, to being "on-the-loop," i.e., removed from the critical control pathway, as more real-time decisions are transferred to AI and similar algorithms in process control, command and control, aviation, and other fields.

Cognitive Engineering Center (CEC)
Georgia Institute of Technology
270 Ferst Drive
Atlanta GA 30332-0150