Human-robot interaction and game theory have developed distinct theories of trust in relative isolation from one another for over three decades. Human-robot interaction has focused on the underlying dimensions, layers, correlates, and antecedents of trust models, while game theory has concentrated on the psychology and strategies behind singular trust decisions. Both fields have grappled with understanding over-trust and trust calibration, as well as with how to measure trust expectations, risk, and vulnerability. This paper presents initial steps toward closing the gap between these fields. Using insights and experimental findings from interdependence theory and social psychology, this work analyzes a large game theory data set and demonstrates that the strongest predictors for a wide variety of trust interactions are our newly proposed and validated metrics of commitment and trust. These metrics capture social ‘over-trust’ better than either the rational or normative psychological reasoning often proposed in game theory, and they are also better situated than normative reasoning alone to explain ‘over-trust’ in human-robot interaction. This work further explores how interdependence theory, with its focus on commitment, power, vulnerability, and calibration, addresses many of the proposed underlying constructs and antecedents within human-robot trust, shedding new light on key differences and similarities that arise when robots replace humans in trust interactions.