NASA Authority & Autonomy 2010-2013


NextGen systems are envisioned to be composed of human and automated agents interacting with dynamic flexibility in the allocation of authority and autonomy. The analysis of such concepts of operation requires methods for verifying and validating that the range of roles and responsibilities potentially assignable to the human and automated agents does not lead to unsafe situations. Such analyses must consider the conditions that could impact system safety, including the environment, human behavior and operational procedures, methods of collaboration and organizational structures, and policies and regulations.

Agent-based simulation has shown promise for modeling such complexity, but it requires a tradeoff between fidelity and the number of simulation runs that can be explored in a reasonable amount of time. Model checking techniques can verify that the modeled system meets safety properties, but they require that the component models be of sufficiently limited scope to run to completion. By analyzing simulation traces, model checking can also help to ensure that the simulation’s design meets the intended analysis goals. Thus, leveraging both types of analysis methods can help to verify operational concepts addressing the allocation of authority and autonomy. To make the analyses using both techniques more efficient, common representations for model components, methods for identifying the appropriate safety properties, and techniques for determining the set of analyses to run are required.
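As a rough illustration of the trace-analysis idea described above, the following minimal sketch checks a simple safety property over recorded simulation traces. The property ("authority for each task is held by exactly one agent at a time"), the data layout, and all names are hypothetical, chosen only for illustration; they are not the project's actual models or properties.

```python
"""Minimal sketch (hypothetical names and property): checking a safety
property over agent-based simulation traces.

Each trace is a sequence of states; each state records which agents
(human or automation) currently hold authority for each task."""

from typing import Dict, List, Set

# A state maps each task to the set of agents currently holding authority for it.
State = Dict[str, Set[str]]
Trace = List[State]


def authority_always_unique(trace: Trace) -> bool:
    """Return True if, in every state, each task has exactly one authorized agent."""
    return all(len(agents) == 1 for state in trace for agents in state.values())


def find_violations(traces: List[Trace]) -> List[int]:
    """Return the indices of traces that violate the safety property."""
    return [i for i, trace in enumerate(traces) if not authority_always_unique(trace)]


if __name__ == "__main__":
    # Two toy traces: in the second, both agents briefly hold authority
    # for the "separation" task, violating the property.
    ok_trace = [{"separation": {"automation"}}, {"separation": {"human"}}]
    bad_trace = [{"separation": {"automation"}}, {"separation": {"automation", "human"}}]
    print(find_violations([ok_trace, bad_trace]))  # -> [1]
```

In practice, such trace checks would be driven by the same safety properties supplied to the model checker, so that simulation runs and exhaustive verification exercise a common set of requirements.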

This project is performed in association with Dr. Ellen Bass of Drexel University, Dr. Elsa Gunter of the University of Illinois, and Dr. John Rushby of SRI International.