Building AI systems capable of making decisions compliant with ethical principles is a key challenge on the path to improving the reliability and fairness of AI.
This micro-project aims to combine argument mining and argumentation-based reasoning to ensure ethical behavior in the context of chatbot systems. Argumentation is a powerful tool for modeling conversations and disputes. Argument mining is the automatic extraction of arguments from natural language input, which can be applied both to the analysis of user input and to the retrieval of suitable feedback for the user. We aim to augment classical argumentation frameworks with ethical and/or moral constraints and with natural language interaction capabilities, in order to guide the conversation between chatbots and humans in accordance with the ethical constraints.
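To make the underlying formalism concrete, here is a minimal sketch of a Dung-style abstract argumentation framework with grounded (skeptical) semantics, which is one standard way such reasoning modules are given a rigorous, verifiable meaning. The argument names and the "ethical constraint modeled as an attack" are purely illustrative assumptions, not the project's actual model.

```python
def grounded_extension(args, attacks):
    """Compute the grounded extension of an abstract argumentation framework.

    args: set of argument identifiers
    attacks: set of (attacker, target) pairs
    The grounded extension is the least fixpoint of the characteristic
    function F(S) = {a | every attacker of a is attacked by some member of S}.
    """
    attackers_of = {a: {x for (x, t) in attacks if t == a} for a in args}
    ext = set()
    while True:
        defended = {
            a for a in args
            if all(any((d, b) in attacks for d in ext) for b in attackers_of[a])
        }
        if defended == ext:      # fixpoint reached
            return ext
        ext = defended

# Toy example: an ethical constraint C attacks a problematic claim B,
# which in turn attacks an acceptable claim A... here A attacks B, B attacks C.
# grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")}) yields {"A", "C"}
```

Because the grounded extension is uniquely determined by the attack relation, every accepted argument can be justified by exhibiting its defenders, which is what makes this style of reasoning auditable.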
- Consiglio Nazionale delle Ricerche (CNR), Bettina Fazzinga
- Università di Bologna (UNIBO), Paolo Torroni
Primary Contact: Bettina Fazzinga, CNR
Main results of the micro-project:
We propose a general-purpose dialogue system architecture that leverages computational argumentation and state-of-the-art language technologies to implement ethics by design.
In particular, we propose a chatbot architecture that relies on transparent and verifiable methods and is designed to respect relevant data protection regulations. Importantly, the chatbot is able to explain its outputs or recommendations in a manner adapted to the intended (human) user.
We evaluate our proposal against a COVID-19 vaccine case study.
Contribution to the objectives of HumaneAI-net WPs
In the context of information-providing chatbots and assistive dialogue systems, especially in the public sector, ethics by design requires trustworthiness, transparency, explainability, and correctness, as well as architectural choices that take data access into account from the very beginning.
The main features of our chatbot architecture, with respect to the objectives of HumaneAI-net WP5, are:
– an architecture for AI dialogue systems where user interaction is carried out in natural language, not only to provide information to the user, but also to answer user queries about the reasons leading to the system output (explainability).
– a transparent reasoning module, built on top of a computational argumentation framework with a rigorous, verifiable semantics (transparency, auditability).
– a modular architecture, which enables an important decoupling between the natural language interface, where user data is processed, and the reasoning module, where expert knowledge is used to generate outputs (privacy and data governance).
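The decoupling described above can be sketched as two independent components: a natural language interface that is the only module ever touching user text, and a reasoning module that operates purely on abstract argument identifiers and expert knowledge. All class names, argument IDs, and response templates below are hypothetical placeholders, assumed only for illustration.

```python
class NLInterface:
    """User-facing module: raw user utterances never leave this component."""

    def parse(self, utterance: str) -> str:
        # Toy stand-in for argument mining: map keywords to abstract argument IDs.
        if "side effect" in utterance.lower():
            return "U1"          # hypothetical ID: "user worries about side effects"
        return "U0"              # hypothetical fallback: unrecognized input

    def render(self, arg_id: str) -> str:
        # Map abstract counterargument IDs back to natural language.
        templates = {
            "E1": "Clinical trials showed severe side effects are rare.",
        }
        return templates.get(arg_id, "Could you tell me more?")


class ReasoningModule:
    """Expert-knowledge module: sees only abstract IDs, never user data."""

    def __init__(self, attacks):
        self.attacks = attacks   # set of (attacker, target) pairs

    def respond(self, user_arg: str) -> str:
        # Toy policy: return an expert argument that attacks the user's argument.
        for attacker, target in self.attacks:
            if target == user_arg:
                return attacker
        return "E0"              # hypothetical ID: "no counterargument, ask more"


# Usage sketch: only abstract IDs cross the module boundary.
nli = NLInterface()
reasoner = ReasoningModule({("E1", "U1")})
reply = nli.render(reasoner.respond(nli.parse("I worry about side effects")))
```

Because the two modules exchange only abstract identifiers, the natural language interface can be audited for data protection independently of the expert knowledge base, which is the point of the decoupling.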
- Publication: An Argumentative Dialogue System for COVID-19 Vaccine Information – Andrea Galassi