This MP studies how to alert a human user to a potentially dangerous situation, for example at handovers in automated vehicles. The goal is to develop a trustworthy alerting technique with high accuracy and a minimal false-alert rate. The core challenge is deciding when to interrupt, because both false positives and false negatives lower trust. Deciding when to interrupt is hard: it requires accounting for both the driving situation and the driver's ability to react to the alert, and this inference must be made from impoverished sensor data. The key idea of this MP is to model the problem as a partially observable stochastic game (POSG), which admits approximate solutions for a setting with two adaptive agents (human and AI). The main outcome will be an open Python library, COOPIHC, which allows modeling different variants of this problem.
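To make the POSG framing concrete, the interaction loop can be sketched as a toy model: an AI observes the driver's attention state through a noisy sensor, decides whether to alert, and is penalized for false alerts and for misses. This is an illustrative sketch only, not the COOPIHC API; the state encoding, noise model, and reward function are assumptions made for exposition.

```python
import random

# Toy two-agent, partially observable alerting loop (illustrative only).
# Driver attention state: 1 = attentive, 0 = distracted.

def noisy_observation(state, noise=0.2, rng=random):
    """The AI sees the driver's state through an impoverished sensor
    that flips the reading with probability `noise`."""
    return state if rng.random() > noise else 1 - state

def ai_policy(obs):
    """Alert (1) only when the driver appears distracted (obs == 0)."""
    return 1 if obs == 0 else 0

def driver_response(state, alert):
    """Assumed human model: a distracted driver regains attention
    when alerted; otherwise the state is unchanged."""
    return 1 if alert else state

def step(state, noise=0.2, rng=random):
    """One round of the game: observe, decide, respond, score.
    Reward penalizes false alerts (alerting an attentive driver)
    and misses (leaving a distracted driver unalerted)."""
    obs = noisy_observation(state, noise=noise, rng=rng)
    alert = ai_policy(obs)
    next_state = driver_response(state, alert)
    false_alert = alert and state == 1
    miss = not alert and state == 0
    reward = -1 if (false_alert or miss) else 0
    return next_state, alert, reward
```

With a perfect sensor (noise=0.0) the policy never incurs a penalty; as noise grows, the trade-off between false alerts and misses that the MP targets appears immediately, and richer state spaces or adaptive human policies can be substituted into the same loop.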

Output

  • COOPIHC library (Python)
  • Paper (e.g., IUI’23 or CHI’23)

Project Partners:

  • Aalto University, Antti Oulasvirta
  • Centre national de la recherche scientifique (CNRS), Julien Gori

Primary Contact: Antti Oulasvirta, Aalto University