SciNoBo: An AI system collaborating with Journalists in Science Communication (resubmission)

Science communication conveys scientific findings and informs the general public, policymakers and other non-expert groups about research developments, raising interest, trust in science and engagement with societal problems (e.g., the United Nations Sustainable Development Goals). In this context, evidence-based science communication isolates topics of interest from the scientific literature, frames the relevant evidence and disseminates it to targeted non-scholarly audiences through a wide range of communication channels and strategies.

The proposed microproject (MP) focuses on science journalism and public outreach on scientific topics in Health and Climate Change. The MP will bring together and enable interactions of science communicators (e.g., science journalists, policy analysts, science advisors for policymakers, other actors) with an AI system capable of identifying statements about Health and Climate in mass media, grounding them in scientific evidence and simplifying the language of the scientific discourse by reducing the complexity of the text while preserving its meaning and information.

Technologically, we plan to build on our previous MP work on neuro-symbolic Q&A (*) and further exploit and advance recent developments in instruction fine-tuning of large language models, retrieval augmentation and natural language understanding – specifically the NLP areas of argumentation mining, claim verification and text (i.e., lexical and syntactic) simplification.

The proposed MP addresses the topic of “Collaborative AI” by developing an AI system equipped with innovative NLP tools that can collaborate with humans (i.e., science communicators, SCs) in communicating statements on Health & Climate Change topics, grounding them in scientific evidence (interactive grounding) and providing explanations in simplified language, thus facilitating SCs in science communication. The innovative AI solution will be tested in a real-world scenario in collaboration with OpenAIRE by employing OpenAIRE Research Graph (ORG) services on Open Science publications.

Workplan
The proposed work is divided into two phases running in parallel. The main focus of phase I is the construction of the data collections and the adaptations and improvements needed in PDF processing tools. Phase II deals with the development of the two subsystems, claim analysis and text simplification, as well as their evaluation.

Phase I
Two collections, one of news stories and one of scientific publications, will be compiled in the areas of Health and Climate. The news collection will be built from an existing dataset of news stories and ARC's automated classification system for the areas of interest. The publications collection will be provided by the OpenAIRE ORG service and further processed, managed and properly indexed by the ARC SciNoBo toolkit. A small-scale annotation by DFKI is foreseen in support of the simplification subsystem.

Phase II
In phase II, we will develop/advance, fine-tune and evaluate the two subsystems. Concretely, the “claim analysis” subsystem encompasses (i) ARC's previous work on “claim identification”, (ii) a retrieval engine fetching relevant scientific publications (based on our previous miniProject), and (iii) an evidence-synthesis module indicating whether the publications fetched, and the scientists' claims therein, support or refute the news claim under examination.
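As a rough illustration, the three-stage claim-analysis pipeline could be wired together as below. The function names, the trivial claim heuristic, the keyword-overlap retrieval, and the majority-vote synthesis rule are placeholder assumptions for the sketch, not the actual SciNoBo components.

```python
from collections import Counter

# Illustrative sketch of the three-stage "claim analysis" pipeline.
# All heuristics below are stand-ins for the trained components.

def identify_claims(news_text):
    """Stage (i): extract check-worthy claims from a news article.
    A trivial keyword heuristic stands in for the claim identifier."""
    return [s.strip() for s in news_text.split(".")
            if "according to" in s.lower() or "study" in s.lower()]

def retrieve_publications(claim, index):
    """Stage (ii): fetch candidate publications for a claim.
    A keyword-overlap score stands in for the retrieval engine."""
    words = set(claim.lower().split())
    scored = [(len(words & set(doc["abstract"].lower().split())), doc)
              for doc in index]
    return [doc for score, doc in
            sorted(scored, reverse=True, key=lambda x: x[0]) if score > 0]

def synthesize_evidence(stances):
    """Stage (iii): aggregate per-publication stance labels
    (support / refute / neutral) into an overall verdict."""
    counts = Counter(stances)
    if counts["support"] > counts["refute"]:
        return "supported"
    if counts["refute"] > counts["support"]:
        return "refuted"
    return "not enough evidence"

# Usage with toy data
index = [{"abstract": "A study links exercise to lower blood pressure"},
         {"abstract": "Climate models project warming trends"}]
claims = identify_claims(
    "A new study says exercise lowers blood pressure. The weather was nice.")
pubs = retrieve_publications(claims[0], index)
verdict = synthesize_evidence(["support", "support", "refute"])
```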
DFKI will examine both lexical and syntax-based representations, exploring their contribution to text simplification and evaluating (neural) simplification models on the Eval dataset. Phase II work will be led by ARC in collaboration with DFKI and OpenAIRE.

Ethics: AI is used but without raising ethical concerns related to human rights and values.

(*): Combining symbolic and sub-symbolic approaches – Improving neural QA-Systems through Document Analysis for enhanced accuracy and efficiency in Human-AI interaction.

Output

Paper(s) in Conferences:
We plan to submit at least two papers about the “claim analysis” and the “text simplification” subsystems.

Practical demonstrations, tools:
A full-fledged demonstrator showing the supported functionality will be available (expected in the final month of the project).

Project Partners

  • ILSP/ATHENA RC, Haris Papageorgiou
  • German Research Centre for Artificial Intelligence (DFKI), Julián Moreno Schneider
  • OpenAIRE, Natalia Manola

Primary Contact

Haris Papageorgiou, ILSP/ATHENA RC

Enhancing Non-Anthropomorphic Robots: Exploring Social Cues for Seamless Human-Robot Interaction

Various types of robots are entering many contexts of life, such as homes, public spaces and factories. Social robots interact with people using conventions and interaction modalities that are prevalent in human-human interaction. While many social robots are anthropomorphic, non-anthropomorphic robots, such as vacuum cleaners, lawn mowers, and barista robots, are becoming more common for everyday tasks. The purpose of this research is to explore how people's interaction with non-anthropomorphic robots can benefit from human-like social cues.

The main research question of this micro-project is: What kind of social cues can be used for human-robot interaction (HRI) with non-anthropomorphic robots? Social interactions will be supported by robots' gestures, sounds and visual cues. Different robotic parts such as arms or antennas can be designed to extend expressivity. The designs will be tested with a specified scenario and user group, such as factory workers or elderly people. The exact scenario will be defined at the start of the project. Two parallel studies will be conducted, one at LMU and one at IST. At LMU, a prototype or mock-up will be designed and built. At IST, one of their robots will be used to test the same scenario. The evaluations will explore different social cues and be based primarily on qualitative user evaluation.

The proposed research aligns well with the WP3 focus on human-AI collaboration, more specifically by investigating human-centered techniques for common grounding between people and robots. The research will lead to novel understanding of how robots can give social cues to improve multimodal human-robot interaction in various usage contexts.

Output

– Prototype/mock-up of a socially supportive non-anthropomorphic robot
– User study report, with insights into the types of robotic parts and social cues that improve human-robot interaction in the selected scenarios
– Paper submitted to a conference or journal of HRI

Project Partners

  • Ludwig-Maximilians-Universität München (LMU), Albrecht Schmidt
  • Instituto Superior Técnico (IST), Ana Paiva
  • Tampere University, Finland, Kaisa Väänänen

Primary Contact

Albrecht Schmidt, Ludwig-Maximilians-Universität München (LMU)

This project aims to make modern cognitive user models and collaborative AI tools more applicable by developing generalizable amortization techniques for them.

In human-AI collaboration, one of the key difficulties is establishing a common ground for the interaction, especially in terms of goals and beliefs. In practice, the AI might not have access to this necessary information directly and must infer it during the interaction with the human. However, training a model to support this kind of inference would require massive collections of interaction data and is not feasible in most applications.
Modern cognitive models, on the other hand, can equip AI tools with the necessary prior knowledge to readily support inference, and hence, to quickly establish a common ground for collaboration with humans. However, utilizing these models in realistic applications is currently impractical due to their computational complexity and non-differentiable structure.
This micro-project contributes directly to the development of collaborative AI by making cognitive models practical and computationally feasible to use, thus enabling efficient online grounding during interaction. The project approaches this problem by developing amortization techniques for modern cognitive models and for merging them into collaborative AI systems.

Output

A conference paper draft that introduces the problem, a method, and initial findings.

Project Partners

  • Delft University of Technology, Frans Oliehoek

Primary Contact

Samuel Kaski, Delft University of Technology

Robustness verification for Concept Drift Detection

Real-world data streams are rarely stationary, but subject to concept drift, i.e., a change in the distribution of the observations. Concept drift needs to be constantly monitored, so that when the trained model is no longer adequate, a new model can be trained that fits the most recent concept. Current methods of detecting concept drift typically monitor the performance and trigger a signal once it drops by a certain margin. The disadvantage is that this approach acts retroactively, i.e., only once the performance has already dropped.

The field of neural network verification detects whether a neural network is susceptible to an adversarial attack, i.e., whether a given input image can be perturbed by a given epsilon such that the output of the network changes. This indicates that the input is close to the decision boundary. When the distribution of images that are close to the decision boundary changes significantly, this indicates that concept drift is occurring, and we can proactively (before the performance drops) retrain the model. The short-term goal of this micro-project is to define ways to a) monitor the distribution of images close to the decision boundary, and b) define control systems that can act upon this notion.
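A cheap stand-in for this idea can be sketched as follows. Instead of exact verification, the margin between the two highest class scores serves here as a proxy for distance to the decision boundary, and a monitor signals drift when the fraction of low-margin inputs grows markedly. The margin proxy, window size, and ratio threshold are all illustrative assumptions; the project itself targets exact epsilon-robustness verification.

```python
import numpy as np

# Sketch of a proactive drift monitor: track how many recent inputs lie
# close to the decision boundary and raise a signal when that fraction
# shifts relative to a reference window.

def boundary_margin(probs):
    """Margin between the two highest class scores; a small margin
    suggests the input is close to the decision boundary."""
    top2 = np.sort(probs)[-2:]
    return float(top2[1] - top2[0])

class DriftMonitor:
    def __init__(self, margin_eps=0.2, window=100, drift_ratio=2.0):
        self.margin_eps = margin_eps    # "close to boundary" cutoff
        self.window = window            # inputs per monitoring window
        self.drift_ratio = drift_ratio  # alarm if the rate grows this much
        self.reference_rate = None      # rate seen in the first window
        self.recent = []

    def update(self, probs):
        """Feed one prediction; return True if drift is signalled."""
        self.recent.append(boundary_margin(probs) < self.margin_eps)
        if len(self.recent) < self.window:
            return False
        rate = sum(self.recent) / len(self.recent)
        self.recent = []
        if self.reference_rate is None:
            # First full window establishes the reference distribution.
            self.reference_rate = max(rate, 1e-6)
            return False
        return rate > self.drift_ratio * self.reference_rate
```

A control system (objective b) could then react to the signal, e.g. by triggering retraining on the most recent window of data.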

A disadvantage is that verifying neural networks requires significant computation time, and it will take many speed-ups before this approach can be utilized in high-throughput streams.

Output

Conference or Journal Paper – We initially aim for top-tier venues, but will decide on the actual venue after the results and scope are determined.

Project Partners

  • Leiden University, Holger Hoos
  • INESC TEC, João Gama

Primary Contact

Holger Hoos, Leiden University

Extending Inverse Reinforcement Learning to elicit and exploit richer expert feedback by leveraging the learner’s beliefs.

Interactive Machine Learning (IML) has gained significant attention in recent years as a means for intelligent agents to learn from human feedback, demonstration, or instruction. However, many existing IML solutions primarily rely on sparse feedback, placing an unreasonable burden on the expert involved. This project aims to address this limitation by enabling the learner to leverage richer feedback from the expert, thereby accelerating the learning process. Additionally, we seek to incorporate a model of the expert to select more informative queries, further reducing the burden placed on the expert.

Objectives:
(1) Explore and develop methods for incorporating causal and contrastive feedback, as supported by evidence from psychology literature, into the learning process of IML.
(2) Design and implement a belief-based system that allows the learner to explicitly maintain beliefs about the possible expert objectives, influencing the selection of queries.
(3) Utilize the received feedback to generate a posterior that informs subsequent queries and enhances the learning process within the framework of Inverse Reinforcement Learning (IRL).
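A minimal sketch of objectives (2) and (3) follows, assuming a discrete set of candidate reward functions and a Boltzmann model of approximately rational expert feedback; both are simplifying assumptions for illustration, not the project's final design.

```python
import numpy as np

# Belief-based IRL sketch: a discrete belief over candidate expert
# objectives, updated from preference feedback, with queries chosen
# where the candidates disagree most. Toy rewards are dictionaries
# mapping options to values.

def feedback_likelihood(reward, query, answer, beta=5.0):
    """P(expert prefers `answer` among `query` options | reward),
    via a Boltzmann (softmax) choice model."""
    values = np.array([reward[o] for o in query])
    probs = np.exp(beta * values) / np.exp(beta * values).sum()
    return probs[query.index(answer)]

def update_belief(belief, rewards, query, answer):
    """Objective (3): Bayesian posterior over candidate rewards."""
    posterior = np.array([b * feedback_likelihood(r, query, answer)
                          for b, r in zip(belief, rewards)])
    return posterior / posterior.sum()

def select_query(belief, rewards, candidate_queries):
    """Objective (2): ask the query whose predicted answer distribution
    has the highest entropy under the current belief (most informative)."""
    def answer_entropy(query):
        p = np.zeros(len(query))
        for b, r in zip(belief, rewards):
            values = np.array([r[o] for o in query])
            p += b * np.exp(5.0 * values) / np.exp(5.0 * values).sum()
        p /= p.sum()
        return -(p * np.log(p + 1e-12)).sum()
    return max(candidate_queries, key=answer_entropy)
```

With two candidate objectives that agree on one option and disagree on the others, the selector prefers the query the candidates disagree on, and a single answer concentrates the belief on the consistent candidate.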

The project addresses several key aspects highlighted in the workpackage on Collaboration with AI Systems (W1-2). Firstly, it focuses on AI systems that can communicate and understand descriptions of situations, goals, intentions, or operational plans to establish shared understanding for collaboration. By explicitly maintaining beliefs about the expert’s objectives and integrating causal and contrastive feedback, the system aims to establish a common ground and improve collaboration.
Furthermore, the project aligns with the objective of systems that can explain their internal models by providing additional information to justify statements and answer questions. By utilizing the received feedback to generate a posterior and enhance the learning process, the system aims to provide explanations, verify facts, and answer questions, contributing to a deeper understanding and shared representation between the AI system and the human expert.
The project also demonstrates the ambition of enabling two-way interaction between AI systems and humans, constructing shared representations, and allowing for the adaptation of representations in response to information exchange. By providing tangible results, such as user-study evaluations and methods to exploit prior knowledge about the expert, the project aims to make measurable progress toward collaborative AI.

Output

(1) Identification and development of potential informative feedback mechanisms that are more user-friendly, with a focus on determining the appropriate form of queries.
(2) User-study evaluation results that measure the correctness of the information provided by the human and assess the cognitive overhead involved.
(3) Methods to exploit prior knowledge about the expert to improve learning and reduce the burden placed on them, specifically in terms of how to query.
(4) Integration of richer feedback from the expert, including causal knowledge and contrastive information, into the learning process.
(5) Publication of a peer-reviewed paper in a competitive venue, presenting the research findings and contributions to the field.
(6) Creation of a GitHub repository containing all necessary materials to replicate the results and support further research endeavors.

Project Partners

  • ISIR, Sorbonne University, Silvia Tulli
  • Colorado State University, Sarath Sreedharan
  • ISIR, Sorbonne University, Mohamed Chetouani

Primary Contact

Silvia Tulli, ISIR, Sorbonne University

Accelerating nurse training without impacting the quality of education, by leveraging LWMs (large whatever models) to provide individual feedback to students and help teachers optimize their teaching.

High-quality education and training of nurses is of utmost importance to maintain high standards in medical care. Nevertheless, as the COVID-19 pandemic has shown quite impressively, there are too few healthcare professionals available. The education and training of nurse students is therefore challenged to accelerate, so that trained nurses are available when they are required. Still, accelerated training often comes with reduced quality, which can easily lead to poor qualification and, in the worst case, to a lethal outcome.
Thus, a pressing question in nurse training is how to optimize, and thereby accelerate, training without sacrificing quality.
One of the significant questions for teachers of nurse students is understanding the state of a student's education. Are some students in need of more repetitions? Which students can proceed to the next level? Who is ready to get in contact with actual patients? In this regard, optimizing training means individualizing it: not only individualizing the training of students but also individualizing the feedback and information teachers get about their way of teaching.

We believe this to be a field where Artificial Intelligence (AI), and more specifically the application of foundation models (large language models, LLMs, paired with other machine learning methods), can provide real support.

In the first part of this microproject, together with nurse teachers of the University of Southampton, we want to define and design an LWM that fits the requirements of nurse training. For this, 2-3 nurse teachers from Southampton will visit DFKI to get a feeling for the systems that are available and the applications that are feasible. In turn, researchers from DFKI will visit the nurse training facilities in Southampton to get a better picture of how nurse training is conducted. At the end of this first phase of the microproject, an LWM will be defined (existing LLMs combined with additional features and data sources, as required).

In the second phase, this LWM will be implemented and tested against videos of recorded training sessions. Specific focus will be set on:
• How can the actions of a particular person be understood?
• Are the actions taken by the trainee correct or incorrect? What would the correct action have been?
• Which teaching efforts work, and which do not work as well?
• Which useful suggestions and feedback can be provided to the trainees and teachers?

Depending on the outcome of this microproject, an online LWM system could be installed in a follow-up project at the facilities of the University of Southampton, where the effects of direct feedback on teaching and performance could be evaluated.

Output

1) The definition and design of the LWM will be documented and, if possible, published in an appropriate scientific journal
2) Developed algorithms and results will be published at a scientific conference (AI and possibly also medical)
3) The developed LWM will be made available to be used in a follow-up project

Project Partners

  • DFKI, EI, Agnes Grünerbl
  • Health Department, University of Southampton, Eloise Monger

Primary Contact

Agnes Grünerbl, DFKI, EI

Build human-in-the-loop intelligent systems for the geolocation of social media images in natural disasters

Social media generate large amounts of almost real-time data which can prove extremely valuable in an emergency situation, especially for providing information within the first 72 hours after a disaster event. Although there are abundant state-of-the-art machine learning techniques to automatically classify social media images, and some work on geolocating them, the operational problem in the event of a new disaster remains unsolved.
Currently, the state-of-the-art approach to this first-response mapping is to first filter the images and then submit those to be geolocated to a crowd of volunteers [1], assigning the images randomly to the volunteers.

The project is aimed at leveraging the power of crowdsourcing and artificial intelligence (AI) to assist emergency responders and disaster relief organizations in building a damage map from a zone recently hit by a disaster.

Specifically, the project will involve the development of a platform that can intelligently distribute geolocation tasks to a crowd of volunteers based on their skills. The platform will use machine learning to determine the skills of the volunteers based on previous geolocation experiences.

Thus, the project will concentrate on two different tasks:
• Profile Learning. Based on the previous geolocations of a set of volunteers, learn a profile of each volunteer that encodes their geolocation capabilities. These profiles should be understood as competency maps, representing the capability of the volunteer to provide an accurate geolocation for an image coming from a specific geographical area.
• Active Task Assignment. Use the volunteer profiles efficiently in order to maximize geolocation quality while maintaining a fair distribution of geolocation tasks among volunteers.
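As a sketch of the two tasks, assuming each past geolocation can be scored as correct or not, a per-region competency profile (Beta-posterior mean) and a greedy, load-capped assignment could look like this; the Beta model and the hard fairness cap are illustrative choices, not the crowdnalysis design.

```python
# Toy sketch of profile learning and active task assignment for
# crowdsourced geolocation. Volunteers, regions, and the correctness
# signal are hypothetical placeholders.

def learn_profile(history, regions):
    """history: list of (region, correct) pairs for one volunteer.
    Returns region -> estimated accuracy, with a Beta(1, 1) prior so
    unseen regions default to 0.5."""
    profile = {}
    for region in regions:
        outcomes = [ok for r, ok in history if r == region]
        profile[region] = (1 + sum(outcomes)) / (2 + len(outcomes))
    return profile

def assign_tasks(tasks, profiles, max_load):
    """tasks: list of (task_id, region); profiles: volunteer -> profile.
    Greedy rule: each task goes to the most competent volunteer for its
    region who is still under the fairness load cap."""
    load = {v: 0 for v in profiles}
    assignment = {}
    for task_id, region in tasks:
        available = [v for v in profiles if load[v] < max_load]
        best = max(available, key=lambda v: profiles[v][region])
        assignment[task_id] = best
        load[best] += 1
    return assignment
```

In a real deployment the correctness signal would come from consensus among volunteers rather than ground truth, which is where the consensus algorithms of the crowdnalysis library come in.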

In a first stage, we envision an experimental framework with realistically generated artificial data, which will act as a feasibility study. This will be published as a paper in a major conference or journal. Simultaneously, we plan to integrate both profile learning and active task assignment into the crowdnalysis library, a software outcome of our previous micro-project. Furthermore, we plan to organize a geolocation workshop in Barcelona with participation from the JRC, the University of Geneva, the United Nations, and IIIA-CSIC.

In the near future, the system will generate reports and visualizations to help these organizations quickly understand the distribution of damages. The resulting platform could enable more efficient and effective responses to natural disasters, potentially saving lives and reducing the impact of these events on communities.
The microproject will be developed by IIIA-CSIC and the University of Geneva. It is also of interest to the team led by Valerio Lorini at the Joint Research Centre of the European Commission in Ispra, Italy, who will most likely attend the geolocation workshop we will be organizing.

The project is in line with “Establishing Common Ground for Collaboration with AI Systems (WP 1-2)”, because it is a microproject “that seeks to provide practical demonstrations, tools, or new theoretical models for AI systems that can collaborate with and empower individuals or groups of people to attain shared goals”, as specifically mentioned in the Call for Microprojects.

The project is also in line with “Measuring, modeling, predicting the individual and collective effects of different forms of AI influence in socio-technical systems at scale (WP4)”, since it comprises the design of a human-centered AI architecture that balances individual and collective goals for the task of geolocation.

[1] Fathi, Ramian, Dennis Thom, Steffen Koch, Thomas Ertl, and Frank Fiedrich. “VOST: A Case Study in Voluntary Digital Participation for Collaborative Emergency Management.” Information Processing & Management 57, no. 4 (July 1, 2020): 102174. https://doi.org/10.1016/j.ipm.2019.102174.

Output

– Open source implementation of the volunteer profiling and consensus geolocation algorithms into the crowdnalysis library.
– Paper with the evaluation of the different geolocation consensus and active assignment strategies
– Organization of a one-day workshop with the United Nations, the JRC, the University of Geneva, and CSIC

Project Partners

  • Consejo Superior de Investigaciones Científicas (CSIC), Jesus Cerquides
  • University of Geneva, Jose Luis Fernandez Marquez

Primary Contact

Jesus Cerquides, Consejo Superior de Investigaciones Científicas (CSIC)

Develop AI interactive grounding capabilities in collaborative tasks using a game-based mixed reality scenario that requires physical actions.

The project addresses research on interactive grounding. It consists of the development of an Augmented Reality (AR) game, using HoloLens, that supports the interaction of a human player with an AI character in a mixed reality setting using gestures as the main communicative act. The game will integrate technology to perceive human gestures and poses. The game will bring about collaborative tasks that need coordination at the level of mutual understanding of the several elements of the required task. Players (human and AI) will have different information about the tasks to advance in the game and need to communicate that information to their partners through gestures. The main grounding challenge will be based on learning the mapping from gestures to the meaning of actions to perform in the game. There will be two levels of gestures to ground: some are task-independent while others are task-dependent. In other words, besides the gestures that communicate explicit information about the game task, the players need to agree on the gestures used to coordinate the communication itself, for example, to signal agreement or doubt, to ask for more information, or to close the communication. These latter gesture types can be transferred from task to task within the game, and probably to other contexts as well.
It will be possible to play the game with two humans and study their gesture communication in order to gather the gestures that emerge: a human-inspired gesture set will be collected and will serve the creation of a gesture dictionary in the AI repertoire.
The game will provide different tasks of increasing difficulty. The first ones will ask the players to perform gestures or poses as mechanisms to open a door and progress to the next level. Later, in a more advanced version of the game, specific and constrained body poses, interaction with objects, and the need to communicate more abstract concepts (e.g., next to, under, to the right, the biggest one, …) will be introduced.
The game will be built as a platform to perform studies. It will support studying diverse questions about the interactive grounding of gestures. For example, we can study the way people adapt to and ascribe meaning to the gestures performed by the AI agent, how different gesture profiles influence people's interpretation, facilitate grounding, and impact task performance, or different mechanisms for the AI to learn its gesture repertoire from humans (e.g., by imitation grounded in context).
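As a toy illustration of gesture grounding, an agent could build its gesture dictionary by counting confirmed gesture-action pairings per user. The confirmation signal and the frequency rule below are simplifying assumptions, not the project's actual learning mechanism.

```python
from collections import Counter, defaultdict

# Minimal gesture-to-meaning grounding sketch: each time a gesture is
# followed by an action the partner confirms, the association is
# strengthened; the most frequently confirmed meaning wins.

class GestureDictionary:
    def __init__(self):
        self.counts = defaultdict(Counter)  # gesture -> meaning counts

    def observe(self, gesture, meaning, confirmed):
        """Record one interaction; only confirmed pairings count."""
        if confirmed:
            self.counts[gesture][meaning] += 1

    def interpret(self, gesture):
        """Return the currently most likely meaning, or None if the
        gesture has not yet been grounded."""
        if not self.counts[gesture]:
            return None
        return self.counts[gesture].most_common(1)[0][0]
```

One such dictionary per user would support the per-user adaptation mentioned above, while a shared dictionary would capture the task-independent gestures.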
We see this project as a relevant contribution to the upcoming Macro Project on Interactive Grounding, and we would like the opportunity to join the MP later. Our focus is on grounding based on gestures, which is critical in certain scenarios. The setting can include language if vocalization is allowed and can be heard. Our game scenarios are simple and abstract and can be the basis for realistic ones.

Output

A game that serves as a platform for studying grounding in the context of collaborative tasks using gestures.
A repertoire of gestures to be used in the communication between humans and AI in a collaborative task that relies on the execution of physical actions. We will emphasize the gestures that can be task-independent.
The basis for an AI algorithm to ground gestures to meaning adapted to a particular user.
One or two papers describing the platform and a study with people.

Project Partners

  • Instituto Superior Técnico (IST), Rui Prada
  • Eötvös Loránd University, András Lőrincz
  • DFKI Lower Saxony, Daniel Sonntag
  • CMU, László Jeni

Primary Contact

Rui Prada, Instituto Superior Técnico (IST)

Exploring the balance between ownership and AI assistance in creative collaboration through an interactive exhibition.

Novel AI systems enable individuals to maximize their creative potential by rapidly prototyping ideas based on initial sketches or idea descriptions. A generative AI system is the bridge between an individual's thought and its physical manifestation. Traditional approaches, on the other hand, require a greater investment of effort, involvement, and time, which has historically been associated with a sense of ownership over the creation and agency (or control) over the creation process. As the paradigm shift caused by AI significantly reduces the amount of work required to achieve a desired result, individuals consistently report low agency and ownership over their creations, and such boundaries are unclear even in the legal sphere. Therefore, it is essential to understand how these variables can be balanced to foster a strong sense of ownership while allowing users to fully exploit the potential of AI systems.

In this project, we seek to achieve this understanding by creating an interactive exhibition where visitors to a science museum will interact with a generative AI system to create illustrations for a children’s book based on rough sketches and prompts. The participants will be instructed to collaborate with an image-generating AI system to illustrate a children’s storybook with a simple plot. Participants will start with their own sketch or by selecting one from a set. When an illustration is finished, participants will be asked if they want to sign the illustration with their name, the name of the AI model, or both. Participants will have the option to display their illustrations on the exhibition’s billboard. We will conclude by asking them three brief questions about self-efficacy.

The interaction will be logged to record the degree of intervention (iterating over the illustrations, using a starting sketch instead of drawing one’s own, signature ownership). We plan to carry out a small-scale quantitative study, complemented by observations of visitors’ behavior and interviews. With the collected data, we will be able to analyze the correlations between time, effort, ownership, and self-efficacy in the AI-assisted creative process, and ultimately gain insights into how to design such systems to promote a sense of ownership in the user.

This project falls under WP3. It examines the Pragmatic aspects of communication and collaboration between humans and AI by exploring how participants collaborate with an AI system to translate their initial sketches or prompts into meaningful illustrations for visual narratives (Storytelling), all through the lens of the participants’ sense of ownership and agency and of how these shape the outcome and the design process.

Output

A manuscript reporting the intervention and the results of the field study.

A video explanation of the intervention and the insights gained from it.

An open-source repository of the materials used in the intervention.

Project Partners

  • Ludwig-Maximilians-Universität München (LMU), Sebastian Feger
  • Sheffield Hallam University Enterprises Limited, Daniela Petrelli

Primary Contact

Steeven Villa, Ludwig-Maximilians-Universität München (LMU)

Ethical AI begins with language

Prof. Dr. Karen Joisten (karen.joisten@rptu.de)
Dr. Ettore Barbagallo (barbagallo@sowi.uni-kl.de)

Due to the ongoing advancement of AI technologies, we will face a genuinely new ethical problem, one that earlier technologies never posed: the increasing resemblance between AI systems and biological systems, especially human beings and animals. This resemblance will gradually make it more natural for us to attribute human or animal qualities to AI systems, even though we know they are neither self-conscious nor alive. We cannot predict the social, psychological, educational, political, and economic consequences of the spread of such AI systems. In our microproject, we address this problem from an ethical point of view.
In the first five months, we will base our analysis on the Ethics Guidelines for Trustworthy AI (2019) written by the High-Level Expert Group on AI (AI HLEG) set up by the European Commission. We will focus in particular on the language used by the AI HLEG to describe the activity of AI systems and human-machine interaction. The focus on language is philosophically motivated by the close correlation between language, habits (see Aristotle), and our practical as well as emotional relationship with the world.
Over the following three months, we will generalize the results of our analysis. We will propose examples of how an adequate linguistic practice can help us draw sharp terminological and conceptual distinctions and thus describe and understand human-AI interaction correctly.
These two steps (eight months in total) constitute the first phase of a larger project, which will expand in its second phase through collaboration with a partner from the HumanE AI Net community or an external partner.

Connection of Results to Work Package Objectives:
WP5 is concerned with AI ethics and responsible AI. Our project addresses the responsibility inherent in our linguistic practices with regard to AI: the way we speak about AI and human-AI interaction creates habits, shapes our practical and emotional relationship with machines, and therefore has ethical consequences.
WP3 deals with human-AI collaboration and interaction. Our project will examine the language we use to talk about AI and to describe the interaction between humans and AI systems.

Output

1) Paper that a) analyzes the Ethics Guidelines for Trustworthy AI, paying attention to how human-AI interaction is presented, and b) develops clear-cut concepts as part of an appropriate vocabulary for describing human-AI interaction.
2) Discussion of the paper’s outcomes with the AI HLEG.
3) Sharing the outcomes of the microproject within HumanE AI Net to find a partner for project expansion.

Project Partners

  • RPTU-Kaiserslautern, /

Primary Contact

Karen Joisten, RPTU-Kaiserslautern

A graduate-level educational module (12 lectures + 5 assignments) covering the basic principles and techniques of Human-Interactive Robot Learning.

Human-Interactive Robot Learning (HIRL) is an area of robotics that focuses on developing robots that can learn from and interact with humans. This educational module aims to cover the basic principles and techniques of Human-Interactive Robot Learning. This interdisciplinary module will encourage graduate students (Master/PhD level) to connect different bodies of knowledge within the broad field of Artificial Intelligence, with insights from Robotics, Machine Learning, Human Modelling, and Design and Ethics. The module is meant for Master’s and PhD students in STEM, such as Computer Science, Artificial Intelligence, and Cognitive Science.
This work will extend the tutorial presented at the International Conference on Algorithms, Computing, and Artificial Intelligence (ACAI 2021) and will be shared with the Artificial Intelligence Doctoral Academy (AIDA). Moreover, the proposed lectures and assignments will be used as teaching material at Sorbonne University and Vrije Universiteit Amsterdam.
We plan to design a collection of approximately twelve 1.5-hour lectures, five assignments, and a list of recommended readings, organized around relevant topics in HIRL. Each lecture will include an algorithmic part and a practical example of how to integrate the algorithm into an interactive system.
The assignments will involve replicating existing algorithms, with the option for students to develop their own alternative solutions.

Proposed module contents (each lecture approx. 1.5 hour):
(1) Interactive Machine Learning vs Machine Learning – 1 lecture
(2) Interactive Machine Learning vs Interactive Robot Learning (Embodied vs non-embodied agents) – 1 lecture
(3) Fundamentals of Reinforcement Learning – 2 lectures
(4) Learning strategies: observation, demonstration, instruction, or feedback
– Imitation Learning, Learning from Demonstration – 2 lectures
– Learning from Human Feedback: evaluative, descriptive, imperative, contrastive examples – 3 lectures
(5) Evaluation metrics and benchmarks – 1 lecture
(6) Application scenarios: hands-on session – 1 lecture
(7) Design and ethical considerations – 1 lecture
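To illustrate the flavor of the practical example accompanying each lecture, the following minimal sketch covers learning from evaluative human feedback (point 4): the agent keeps a value estimate per action and updates it from a scalar feedback signal. The teacher is simulated here; in the module, the signal would come from a human, and all names and parameters below are purely illustrative.

```python
import random

ACTIONS = ["left", "right"]

def simulated_teacher(action):
    """Stand-in for a human teacher: approves 'right' (+1), rejects 'left' (-1)."""
    return 1.0 if action == "right" else -1.0

def train(steps=200, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    values = {a: 0.0 for a in ACTIONS}  # per-action value estimates
    for _ in range(steps):
        if rng.random() < epsilon:                 # explore: random action
            action = rng.choice(ACTIONS)
        else:                                      # exploit: current best estimate
            action = max(values, key=values.get)
        feedback = simulated_teacher(action)       # evaluative feedback signal
        # Incremental update toward the feedback (as in TAMER-style approaches)
        values[action] += alpha * (feedback - values[action])
    return values

values = train()
print(values)  # 'right' should end up with the higher estimate
```

An assignment could then replace `simulated_teacher` with live keyboard input, letting students compare learning curves under consistent versus noisy human feedback.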

Output

Learning objectives along the Dublin descriptors:
(1) Knowledge and understanding;
– Be aware of the role of human intervention in standard machine learning and interactive machine learning.
– Understand human teaching strategies.
– Gain knowledge about learning from feedback, demonstrations, and instructions.
– Explore ongoing works on how human teaching biases could be modeled.
– Discover applications of interactive robot learning.
(2) Applying knowledge and understanding;
– Implement HIRL techniques that integrate different types of human input
(3) Making judgments;
– Make informed design choices when building HIRL systems
(4) Communication skills;
– Effectively communicate about own work both verbally and in a written manner
(5) Learning skills;
– Integrate insights from theoretical material presented in the lecture and research papers showcasing state-of-the-art HIRL techniques.

Project Partners

  • ISIR, Sorbonne University, Mohamed Chetouani
  • ISIR, Sorbonne University, Silvia Tulli
  • Vrije Universiteit Amsterdam, Kim Baraka

Primary Contact

Mohamed Chetouani, ISIR, Sorbonne University