Develop AI interactive grounding capabilities for collaborative tasks that require physical actions, using a game-based mixed reality scenario.

The project addresses research on interactive grounding. It consists of the development of an Augmented Reality (AR) game, using HoloLens, that supports the interaction of a human player with an AI character in a mixed reality setting, with gestures as the main communicative act. The game will integrate technology to perceive human gestures and poses. It will pose collaborative tasks that require coordination based on a mutual understanding of the several elements of each task. Players (human and AI) will hold different information needed to advance in the game and must communicate that information to their partners through gestures.
The main grounding challenge will be learning the mapping between gestures and the meanings of the actions to perform in the game. There will be two levels of gestures to ground: some are task-independent, while others are task-dependent. In other words, besides the gestures that communicate explicit information about the game task, the players need to agree on the gestures used to coordinate the communication itself, for example, to signal agreement or doubt, to ask for more information, or to close the communication. These latter gesture types can be transferred from task to task within the game, and probably to other contexts as well.
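To make the distinction between the two gesture levels concrete, the sketch below shows one possible way the gesture dictionary could be represented. It is a purely illustrative sketch; all gesture names, meanings, and class names are hypothetical and not part of the proposal.

```python
# Illustrative sketch only: one possible representation of the two gesture levels
# described above. All gesture names and meanings are hypothetical.
from dataclasses import dataclass, field
from enum import Enum, auto


class GestureLevel(Enum):
    TASK_DEPENDENT = auto()    # communicates explicit information about the game task
    TASK_INDEPENDENT = auto()  # coordinates the communication itself (agree, doubt, ...)


@dataclass
class GestureEntry:
    name: str            # e.g. "point_at_door" (hypothetical label)
    level: GestureLevel
    meaning: str         # the game action or communicative act it maps to


@dataclass
class GestureDictionary:
    entries: dict = field(default_factory=dict)

    def add(self, entry: GestureEntry) -> None:
        self.entries[entry.name] = entry

    def transferable(self) -> list:
        # Task-independent gestures can carry over between tasks (and contexts).
        return [e for e in self.entries.values()
                if e.level is GestureLevel.TASK_INDEPENDENT]
```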
It will be possible to play the game with two humans and study their gesture communication in order to capture the gestures that emerge: this human-inspired gesture set will be collected and will serve as the basis for a gesture dictionary in the AI's repertoire.
The game will provide different tasks of increasing difficulty. The first tasks will ask the players to perform gestures or poses as mechanisms to open a door and progress to the next level. Later, in a more advanced version of the game, specific and constrained body poses, interaction with objects, and the need to communicate more abstract concepts (e.g., next to, under, to the right, the biggest one, …) will be introduced.
The game will be built as a platform to perform studies. It will support studying diverse questions about the interactive grounding of gestures. For example, we can study how people adapt to and ascribe meaning to the gestures performed by the AI agent; how different gesture profiles influence people's interpretation, facilitate grounding, and affect task performance; or different mechanisms for the AI to learn its gesture repertoire from humans (e.g., by imitation grounded in the context).
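As one illustration of what such a learning mechanism might look like, the sketch below implements a simple per-user, cross-situational, count-based grounding scheme. It is a hypothetical example under assumed interfaces, not the algorithm the project commits to, and all names are made up.

```python
# A minimal sketch of one possible grounding mechanism (not the project's actual
# algorithm): per-user, cross-situational, count-based learning of the mapping
# from observed gestures to candidate meanings, reinforced on task success.
from collections import defaultdict
from typing import Optional


class GestureGrounder:
    def __init__(self) -> None:
        # counts[user][gesture][meaning] -> association strength
        self.counts = defaultdict(lambda: defaultdict(lambda: defaultdict(float)))

    def observe(self, user: str, gesture: str, candidate_meanings: list) -> None:
        # Split credit across every meaning consistent with the current game context.
        for meaning in candidate_meanings:
            self.counts[user][gesture][meaning] += 1.0 / len(candidate_meanings)

    def reinforce(self, user: str, gesture: str, meaning: str, reward: float = 1.0) -> None:
        # Extra credit when acting on an interpretation led to task success.
        self.counts[user][gesture][meaning] += reward

    def interpret(self, user: str, gesture: str) -> Optional[str]:
        # Most strongly associated meaning for this user's gesture, if any.
        meanings = self.counts[user][gesture]
        return max(meanings, key=meanings.get) if meanings else None
```

In this sketch, repeatedly observing the same (hypothetical) gesture in contexts where only one meaning is consistent would make interpret() converge on that meaning for that particular user.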
We see this project as a relevant contribution to the upcoming Macro Project (MP) on Interactive Grounding, and we would like the opportunity to join the MP later. Our focus is on gesture-based grounding, which is critical in certain scenarios. The setting can include language if vocalization is allowed and can be heard. Our game scenarios are simple and abstract and can serve as the basis for more realistic ones.

Output

  • A game that serves as a platform for studying grounding in the context of collaborative tasks using gestures.
  • A repertoire of gestures to be used in the communication between humans and AI in a collaborative task that relies on the execution of physical actions, with an emphasis on gestures that are task-independent.
  • The basis for an AI algorithm that grounds gestures to meanings, adapted to a particular user.
  • One or two papers describing the platform and a study with people.

Project Partners

  • Instituto Superior Técnico (IST), Rui Prada
  • Eötvös Loránd University, András Lőrincz
  • DFKI Lower Saxony, Daniel Sonntag
  • CMU, László Jeni

Primary Contact

Rui Prada, Instituto Superior Técnico (IST)