This MP studies how to alert a human user to a potentially dangerous situation, for example during handovers in automated vehicles. The goal is a trustworthy alerting technique with high accuracy and a minimal rate of false alerts. The central challenge is deciding when to interrupt, because both false positives and false negatives lower trust. Deciding when to interrupt is hard: it requires taking into account both the driving situation and the driver's ability to react to the alert, and this inference must be made from impoverished sensor data. The key idea of this MP is to model the problem as a partially observable stochastic game (POSG), which admits approximate solutions for settings with two adaptive agents (human and AI). The main outcome will be an open Python library, CoopIHC, which allows modeling different variants of this problem.
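For reference, the POSG formalism invoked above is standard in the literature; the following tuple is the usual definition and is not specific to this MP's implementation:

\[
\mathcal{G} = \langle \mathcal{I}, \mathcal{S}, \{\mathcal{A}_i\}_{i\in\mathcal{I}}, \{\Omega_i\}_{i\in\mathcal{I}}, T, O, \{R_i\}_{i\in\mathcal{I}} \rangle,
\]

where \( \mathcal{I} = \{\text{user}, \text{assistant}\} \) is the set of agents, \( \mathcal{S} \) the set of environment states (here, the driving situation and the driver's state), \( \mathcal{A}_i \) and \( \Omega_i \) the action and observation sets of agent \( i \), \( T(s' \mid s, a_{\text{user}}, a_{\text{assistant}}) \) the state transition function, \( O \) the (noisy) observation function, and \( R_i \) the reward functions, which in this setting would encode interruption costs and accident penalties.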

Output

CoopIHC library (Python)

Paper (e.g., IUI’23 or CHI’23)

Project Partners:

  • Aalto University, Antti Oulasvirta
  • Centre national de la recherche scientifique (CNRS), Julien Gori

Primary Contact: Antti Oulasvirta, Aalto University

Results Description

When is an opportune moment to alert a human partner? The question is hard because the beliefs and cognitive state of the human should be taken into account when choosing whether and when to alert. Every alert is interruptive and carries a cost for the human; yet, especially in safety-critical domains, the consequences of not alerting can be catastrophic. In this work, we formulated the optimal alerting problem using the theory of partially observable stochastic games (POSGs): the assistant's problem and the user's problem are formulated and solved jointly as a single POSG. We presented first results in a gridworld environment, comparing different types of alerting agents, and outlined a roadmap for future work using realistic driving simulators. These models can inform handover/takeover decisions in semi-automated vehicles. The results were integrated into CoopIHC, a multi-agent solver for interactive AI.
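The cost trade-off that makes the alerting decision non-trivial can be sketched with a simplified, self-contained toy example. The sketch below does not use the CoopIHC API; the sensor model, cost values, and policy names are illustrative assumptions only.

```python
# Toy comparison of alerting policies under partial observability.
# Generic illustration only: costs, noise levels, and policies are assumed,
# and this is NOT the CoopIHC API or the MP's actual model.
import random

ALERT_COST = -1.0      # every alert interrupts the driver (assumed cost)
CRASH_COST = -100.0    # unalerted hazard with a distracted driver (assumed cost)
SENSOR_NOISE = 0.1     # probability the hazard sensor is wrong (assumed)
GAZE_NOISE = 0.2       # probability the driver-state sensor is wrong (assumed)

def sample_state():
    """Hidden environment state: is there a hazard, and is the driver attentive?"""
    hazard = random.random() < 0.2
    attentive = random.random() < 0.6
    return hazard, attentive

def observe(hazard, attentive):
    """Impoverished sensor data: noisy readings of hazard and driver attentiveness."""
    obs_hazard = hazard if random.random() > SENSOR_NOISE else not hazard
    obs_attentive = attentive if random.random() > GAZE_NOISE else not attentive
    return obs_hazard, obs_attentive

def reward(alert, hazard, attentive):
    """Cost model: alerts are interruptive; unalerted hazards crash a distracted driver."""
    r = ALERT_COST if alert else 0.0
    if hazard and not attentive and not alert:
        r += CRASH_COST
    return r

# Simple assistant policies mapping observations to an alert decision.
POLICIES = {
    "never": lambda oh, oa: False,
    "always-on-hazard": lambda oh, oa: oh,
    "hazard-and-distracted": lambda oh, oa: oh and not oa,
}

def evaluate(policy, episodes=20_000):
    """Mean per-episode reward of an alerting policy under the assumed cost model."""
    total = 0.0
    for _ in range(episodes):
        hazard, attentive = sample_state()
        obs = observe(hazard, attentive)
        total += reward(policy(*obs), hazard, attentive)
    return total / episodes

if __name__ == "__main__":
    random.seed(0)
    for name, policy in POLICIES.items():
        print(f"{name:>22s}: mean reward {evaluate(policy):7.2f}")
```

In the MP itself, the corresponding decision problem is formulated and solved jointly for user and assistant within the POSG framework; the sketch above only illustrates why the trade-off between interruption cost and accident risk, inferred from noisy observations, makes the alerting decision hard.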

Publications

Hossein, Firooz (2022). AI-Assisted for Modeling Multitasking Driver. Master's thesis (Master of Science in Technology), Aalto University.

Links to Tangible results

The MP contributed to CoopIHC, a POSG-based multi-agent framework for modeling human-AI interaction, developed jointly by CNRS and Aalto University:
https://jgori-ouistiti.github.io/CoopIHC/