Using Dynamic Epistemic Logic (DEL) so that an AI system can proactively make announcements to avoid undesirable future states arising from the human's false beliefs

Previously we have investigated how an AI system can be proactive, that is, act in an anticipatory way and on its own initiative, by reasoning about current and future states, mentally simulating actions and their effects, and assessing what is desirable. In this micro-project we want to extend our earlier work with epistemic reasoning. That is, we want to reason about the knowledge and beliefs of the human and thereby inform the AI system about what kind of proactive announcement to make. As in our previous work, we will consider which states are desirable and which are not, and we will also take into account how the state will evolve in the future if the AI system does not act.

Now we additionally want to consider the human's false beliefs. It is neither necessary nor desirable to make announcements correcting every false belief the human may have. For example, while the human is watching TV, she need not be informed that the salt is in the red container and the sugar in the blue container, even if she believes it is the other way around. On the other hand, when the human starts cooking and is about to use the contents of the blue container believing it is salt, then it is relevant for the AI system to announce what is actually the case, to avoid an undesirable outcome. The example shows that we need to investigate not only what to announce but also when to announce it.
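The timing condition in the example above can be sketched as a simple predicate: an announcement is relevant only when the human's belief is false and her next action actually relies on the proposition in question. All names and the encoding below are our own illustrative assumptions, not part of the project's formal model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Belief:
    proposition: str   # what the belief is about
    value: str         # the believed (or actual) value

@dataclass
class Action:
    name: str
    preconditions: dict  # proposition -> value the action relies on

def announcement_relevant(belief: Belief, fact: Belief, action: Action) -> bool:
    """Correcting `belief` is relevant iff it is false and the human's
    next action depends on that proposition (so the false belief would
    lead to an undesirable outcome)."""
    belief_false = belief.value != fact.value
    return belief_false and belief.proposition in action.preconditions

# Salt/sugar scenario: the blue container actually holds sugar,
# but the human believes it holds salt.
fact   = Belief('content_of_blue', 'sugar')
belief = Belief('content_of_blue', 'salt')

watch_tv = Action('watch_tv', {})
cook     = Action('season_dish', {'content_of_blue': 'salt'})

print(announcement_relevant(belief, fact, watch_tv))  # False: no need to interrupt
print(announcement_relevant(belief, fact, cook))      # True: announce now
```

This only captures the "when" dimension propositionally; the project's DEL model would additionally track nested beliefs and how announcements change them.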

The methods we will use in this micro-project are knowledge-based; to be precise, we will employ Dynamic Epistemic Logic (DEL). DEL is a modal logic. It extends Epistemic Logic and allows modeling change in the knowledge and beliefs of an agent, both about herself and about other agents.
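To illustrate the kind of update DEL formalizes, the following is a minimal toy sketch of belief change via a public announcement over a plausibility-style model (belief is evaluated at the most plausible surviving world). The encoding and all names are our assumptions for illustration, not the formal model this MP will develop.

```python
# Worlds are frozensets of true atoms. A model is (worlds, plausibility),
# where plausibility maps each agent to a list of worlds ordered from
# most to least plausible (a total order, for simplicity).

def holds(model, w, f):
    """Evaluate formula f at world w. Formulas are nested tuples:
    ('atom', p), ('not', f), ('believes', agent, f)."""
    worlds, plausibility = model
    op = f[0]
    if op == 'atom':
        return f[1] in w
    if op == 'not':
        return not holds(model, w, f[1])
    if op == 'believes':
        agent, sub = f[1], f[2]
        # Belief: the formula holds at the agent's most plausible
        # world among those not yet eliminated.
        best = next(v for v in plausibility[agent] if v in worlds)
        return holds(model, best, sub)
    raise ValueError(f'unknown operator {op!r}')

def announce(model, f):
    """Public (hard) announcement of f: eliminate worlds where f fails."""
    worlds, plausibility = model
    kept = {w for w in worlds if holds(model, w, f)}
    return kept, plausibility

# Salt/sugar scenario: the blue container actually holds sugar, but the
# human considers the 'salt in blue' world more plausible (false belief).
w_actual   = frozenset({'sugar_in_blue'})
w_believed = frozenset({'salt_in_blue'})
model = ({w_actual, w_believed}, {'human': [w_believed, w_actual]})

believes_salt = ('believes', 'human', ('atom', 'salt_in_blue'))
print(holds(model, w_actual, believes_salt))   # True: the false belief

# The AI system announces what is actually the case.
model = announce(model, ('atom', 'sugar_in_blue'))
print(holds(model, w_actual, believes_salt))   # False: belief corrected
```

Full DEL is considerably richer (nested multi-agent modalities, general event models, soft upgrades); this sketch only shows the world-elimination core of announcement semantics.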

One week of on-site visit is planned. In total, 7.5 person-months (PMs) are planned for the MP; that is, for one week we work physically in the same place, and for the rest of the PMs we work together online.

Output

– Formal model
We expect to develop a formal model based on DEL and on the
findings of J. Grosinger's previous work on proactivity. The model
enables an artificial agent to proactively make announcements that
correct the human's false beliefs, including false beliefs about the
desirability of future states. Since the model is formal, we can state
general definitions and propositions and prove properties of the
model, for example, about which proactive announcements are
relevant and/or well-timed.

– Conference
We aim to publish our work at a high-quality international
peer-reviewed conference. Candidate conferences are AAMAS
(International Conference on Autonomous Agents and Multiagent
Systems) or, if its timing is infeasible, IJCAI (International
Joint Conference on Artificial Intelligence).

– Further collaboration
The MP can lead to further fruitful collaborations between the
applicants (and possibly some of their colleagues), as the MP's
topic is new and under-explored and cannot be fully investigated
within one MP.

Project Partners

  • Örebro University, ORU, Jasmin Grosinger
  • Technical University of Denmark, DTU, Thomas Bolander

Primary Contact

Jasmin Grosinger, Örebro University, ORU