The project aims to develop a framework for multimodal and multilingual conversational agents. The framework is based on hierarchical levels of abilities:

– Reactive (sensori-motor) Interaction: Interaction is a tightly coupled perception-action loop in which the actions of one agent are immediately sensed and interpreted as actions by the other. Examples include greetings, polite conversation, and emotional mirroring.

– Situated (Spatio-temporal) Interaction: Interactions are mediated by a shared model of objects and relations (states) and shared models of roles and interaction protocols.

– Operational Interaction: Collective performance of tasks.

– Praxical Interaction: Sharing of knowledge about entities, relations, actions, and tasks.

– Creative Interaction: Collective construction of theories and models that predict and explain phenomena.

In this microproject we focus on the first two levels (Reactive and Situated) and design the global framework architecture. The work performed in this project will be demonstrated in a proof of concept (PoC).

Output

OSS framework (Levels 1 and 2)

Project Partners:

  • Università di Bologna (UNIBO), Paolo Torroni

 

Primary Contact: Eric Blaudez, THALES

Results Description

Our work considers the applicability of the MR-CKR framework to the task of generating challenging inputs for a machine learning model.

Here, MR-CKR is a symbolic reasoning framework for Multi-Relational Contextual Knowledge Repositories that we developed previously. "Contextual" means that we can (defeasibly) derive different conclusions in different contexts from the same data: a conclusion drawn in one context can be invalidated in a more specific context. "Multi-Relational" means that a context can be "more specific" with respect to several independent aspects, such as regionality or time.
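The override behavior of contextual reasoning can be illustrated with a small sketch (this is an invented toy example, not the MR-CKR implementation; the context names and rules are assumptions for illustration only):

```python
# Toy illustration of defeasible contextual reasoning: a conclusion drawn in
# a general context can be overridden in a more specific one. Contexts form
# a specificity hierarchy via "parent" links; all names here are invented.
contexts = {
    "global":    {"parent": None,     "rules": {"vehicles_drive_on": "right"}},
    "europe":    {"parent": "global", "rules": {}},
    "europe_uk": {"parent": "europe", "rules": {"vehicles_drive_on": "left"}},
}

def conclude(context, query):
    """Walk from the given context up to the root; the most specific
    context that states a conclusion wins (defeasible override)."""
    while context is not None:
        rules = contexts[context]["rules"]
        if query in rules:
            return rules[query]
        context = contexts[context]["parent"]
    return None

print(conclude("europe", "vehicles_drive_on"))     # inherited from "global": right
print(conclude("europe_uk", "vehicles_drive_on"))  # overridden locally: left
```

In MR-CKR, unlike this single-parent toy, a context can be more specific along several independent relations at once.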

The general idea of generating challenging inputs for a machine learning model is the following: we can train our model only on limited data, so it is likely that the model does not cover all eventualities or lacks enough data in specific contexts to produce the correct result. However, obtaining more data is often very difficult or even infeasible.

We introduce a new approach to this problem: given a set of diagnoses describing contexts in which the model performs poorly, we generate new inputs that are (i) in the described contexts and (ii) as similar as possible to a given starting input. Property (i) allows us to train the network in a targeted manner by feeding it exactly those cases it struggles with. Property (ii) ensures that the new input differs from the old one only in the aspects that make it problematic for the model, allowing us to teach the model to recognize the aspects relevant to the answer.
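The "in the diagnosed context, but as similar as possible" idea can be sketched as a search over attribute modifications (a minimal illustration, not the actual MR-CKR-based procedure; scene attributes and the diagnosis predicate below are invented):

```python
from itertools import combinations

def assignments(scene, subset, alternatives):
    """Enumerate all scenes that differ from `scene` exactly on the
    attributes in `subset` (each changed to some alternative value)."""
    if not subset:
        yield dict(scene)
        return
    key, rest = subset[0], subset[1:]
    for value in alternatives[key]:
        if value != scene[key]:
            for candidate in assignments(scene, rest, alternatives):
                candidate[key] = value
                yield candidate

def most_similar_in_context(scene, diagnosis, alternatives):
    """Try candidate scenes in order of increasing number of changed
    attributes; return the first one satisfying the diagnosis."""
    keys = list(scene)
    for k in range(len(keys) + 1):
        for subset in combinations(keys, k):
            for candidate in assignments(scene, subset, alternatives):
                if diagnosis(candidate):
                    return candidate, k
    return None, None

scene = {"weather": "clear", "time": "day", "pedestrian": "absent"}
alternatives = {"weather": ["clear", "fog"],
                "time": ["day", "night"],
                "pedestrian": ["absent", "present"]}
# Hypothetical diagnosis: the model fails on night scenes with a pedestrian.
diagnosis = lambda s: s["time"] == "night" and s["pedestrian"] == "present"

new_scene, changes = most_similar_in_context(scene, diagnosis, alternatives)
print(new_scene, changes)  # only "time" and "pedestrian" flipped; weather kept
```

The exhaustive search here stands in for the symbolic reasoning that MR-CKR performs; the point is only the objective: satisfy the diagnosis while minimizing the number of changed attributes.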

This fits the capabilities of MR-CKR very well. On the one hand, we have different contexts in which the inputs need to be modified to suit different diagnoses of model failure. On the other hand, we can exploit the different relations: one relation specifies that inputs are more modifiable in one context than in another, while another relation describes whether one diagnosis is a special case of another. Additionally, MR-CKR allows us to incorporate global knowledge, so that inputs can only be modified in such a way that the result remains "realistic", i.e., satisfies the axioms in the global knowledge.

In this work, we provide a prototype specialized to generating similar and problematic scenes in the domain of Autonomous Driving.

This work fits well into Task 1.1 of WP1: "Linking symbolic and subsymbolic learning", since we use a symbolic approach to enable the use of domain knowledge in order to advance the performance of a subsymbolic model.

Furthermore, it also loosely fits into Task 1.5 of WP1: "Quantifying model uncertainty", since we can quantify how similar the generated new inputs are to the original ones.

Publications

ArXiv Technical Report:
Loris Bozzato, Thomas Eiter, Rafael Kiesel and Daria Stepanova (2023).
Contextual Reasoning for Scene Generation (Technical Report).
https://arxiv.org/abs/2305.02255

Links to Tangible results

Technical Report on formalization: https://arxiv.org/abs/2305.02255
Prototype implementation: https://github.com/raki123/MR-CKR
(The prototype has also been submitted as an AI Asset to AI4Europe.)