Extending Inverse Reinforcement Learning to elicit and exploit richer expert feedback by leveraging the learner’s beliefs.

Interactive Machine Learning (IML) has gained significant attention in recent years as a means for intelligent agents to learn from human feedback, demonstration, or instruction. However, many existing IML solutions rely primarily on sparse feedback, placing an unreasonable burden on the expert involved. This project aims to address this limitation by enabling the learner to leverage richer feedback from the expert, thereby accelerating the learning process. In addition, we seek to incorporate a model of the expert so that the learner can select more informative queries, further reducing the burden placed on the expert.

Objectives:
(1) Explore and develop methods for incorporating causal and contrastive feedback, forms of feedback supported by evidence from the psychology literature, into the IML learning process.
(2) Design and implement a belief-based system in which the learner explicitly maintains beliefs about the expert's possible objectives and uses them to guide query selection.
(3) Use the received feedback to compute a posterior that informs subsequent queries and improves learning within the framework of Inverse Reinforcement Learning (IRL); a minimal sketch of this query-and-update loop is given after this list.
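To make the intended interaction concrete, the sketch below shows one possible instantiation of objectives (2) and (3): the learner keeps a discrete belief over candidate expert objectives, selects the query with the highest expected information gain, and updates the belief with a Bayesian step under a Boltzmann-rational model of the expert's answers. All names, the candidate objective set, the query/answer structure, and the likelihood model are illustrative assumptions, not the project's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 4 candidate expert objectives (reward hypotheses),
# 6 possible queries, 2 possible expert answers per query.
n_objectives, n_queries, n_answers = 4, 6, 2

# value[o, q, a]: how well answer `a` to query `q` fits objective `o`.
# In a real IRL setting this would come from planning under each reward
# hypothesis; here it is random, purely for illustration.
value = rng.normal(size=(n_objectives, n_queries, n_answers))

belief = np.full(n_objectives, 1.0 / n_objectives)  # uniform prior over objectives
beta = 2.0  # assumed Boltzmann rationality of the expert


def answer_likelihood(obj, query):
    """P(answer | objective, query) under a Boltzmann-rational expert model."""
    logits = beta * value[obj, query]
    e = np.exp(logits - logits.max())
    return e / e.sum()


def expected_information_gain(query, belief):
    """Expected reduction in belief entropy from asking `query`."""
    prior_entropy = -np.sum(belief * np.log(belief + 1e-12))
    eig = 0.0
    for a in range(n_answers):
        # Joint P(objective, answer=a), its marginal, and the induced posterior.
        joint = belief * np.array([answer_likelihood(o, query)[a] for o in range(n_objectives)])
        p_a = joint.sum()
        if p_a > 1e-12:
            post = joint / p_a
            eig += p_a * (prior_entropy + np.sum(post * np.log(post + 1e-12)))
    return eig


true_objective = 2  # hidden objective of the simulated expert
for step in range(5):
    # Ask the query the current belief expects to be most informative.
    q = max(range(n_queries), key=lambda qq: expected_information_gain(qq, belief))
    # The simulated expert answers according to the same Boltzmann model.
    a = rng.choice(n_answers, p=answer_likelihood(true_objective, q))
    # Bayesian update of the belief over objectives given the answer.
    belief = belief * np.array([answer_likelihood(o, q)[a] for o in range(n_objectives)])
    belief /= belief.sum()
    print(f"step {step}: query {q}, answer {a}, belief {np.round(belief, 3)}")
```

In an actual IRL setting, the value table would be replaced by planning under each candidate reward function, and the queries could take the form of demonstrations, preference comparisons, or the causal and contrastive questions studied in this project.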

The project addresses several key aspects highlighted in the work package on Collaboration with AI Systems (W1-2). First, it focuses on AI systems that can communicate and understand descriptions of situations, goals, intentions, or operational plans in order to establish a shared understanding for collaboration. By explicitly maintaining beliefs about the expert's objectives and integrating causal and contrastive feedback, the system aims to establish common ground and improve collaboration.
Furthermore, the project aligns with the objective of building systems that can explain their internal models by providing additional information to justify statements and answer questions. By using the received feedback to generate a posterior and enhance learning, the system aims to provide explanations, verify facts, and answer questions, contributing to a deeper understanding and a shared representation between the AI system and the human expert.
The project also pursues the ambition of enabling two-way interaction between AI systems and humans, constructing shared representations, and allowing those representations to adapt in response to information exchange. By delivering tangible results, such as user-study evaluations and methods that exploit prior knowledge about the expert, the project aims to make measurable progress toward collaborative AI.

Output

(1) Identification and development of informative, user-friendly feedback mechanisms, with a focus on determining the appropriate form of queries.
(2) User-study evaluation results that measure the correctness of the information provided by the human and assess the cognitive overhead involved.
(3) Methods to exploit prior knowledge about the expert to improve learning and reduce the burden placed on the expert, specifically in terms of how to query.
(4) Integration of richer feedback from the expert, including causal knowledge and contrastive information, into the learning process.
(5) Publication of a peer-reviewed paper in a competitive venue, presenting the research findings and contributions to the field.
(6) Creation of a GitHub repository containing all necessary materials to replicate the results and support further research endeavors.

Project Partners

  • ISIR, Sorbonne University, Silvia Tulli
  • Colorado State University, Sarath Sreedharan
  • ISIR, Sorbonne University, Mohamed Chetouani

Primary Contact

Silvia Tulli, ISIR, Sorbonne University