Contact person: Florian Müller (florian.mueller@um.ifi.lmu.de)

Internal Partners:

  1. LMU Munich, Florian Müller, florian.mueller@um.ifi.lmu.de
  2. University of Warsaw, Andrzej Nowak, andrzejn232@gmail.com

 

When we go for a walk with friends, we can observe an interesting effect: From step lengths to arm movements – our movements unconsciously align; they synchronize. Prior research in social psychology found that this synchronization is a crucial aspect of human relations that strengthens social cohesion and trust. In this micro project, we explored if and how this effect generalizes beyond human-human relationships. We hypothesized that synchronization can enhance the relationship between humans and AI systems by increasing the sense of connectedness in the formation of techno-social teams working together on a task.

Results Summary

To evaluate the feasibility of this approach, we built a prototype of a simple non-humanoid robot as an embodied representation of an AI system. The robot tracks the upper-body movements of people in its vicinity and can bend to follow human movements and vary the movement synchronization patterns. Using this prototype, we conducted a controlled experiment with 51 participants exploring our concept in a between-subjects design. Using an established questionnaire on trust between people and automation, we found significantly higher trust ratings for synchronized movements. However, we could not find an influence on the willingness to spend money in a trust game inspired by behavioral economics. Taken together, our results strongly suggest a positive effect of synchronized movement on participants’ feeling of trust toward embodied AI representations.

Tangible Outcomes

  1. Wieslaw Bartkowski, Andrzej Nowak, Filip Ignacy Czajkowski, Albrecht Schmidt, and Florian Müller. 2023. In Sync: Exploring Synchronization to Increase Trust Between Humans and Non-humanoid Robots. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23), April 23–28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3544548.3581193
  2. Video (short): https://syncandshare.lrz.de/getlink/fiRjwbk1AoYxKujaEaZ5ax/in_sync_video_short.mp4
  3. Video (full): https://syncandshare.lrz.de/getlink/fiGEX3bGahhbzrChUiXqvL/in_sync_video_full.mp4

Contact person: Jasmin Grosinger (jasmin.grosinger@oru.se)

Internal Partners:

  1. Örebro University, ORU, Jasmin Grosinger

External Partners:

  1. Technical University of Denmark, Thomas Bolander

 

Previously, we investigated how an AI system can be proactive, that is, act anticipatorily and on its own initiative, by reasoning on current and future states, mentally simulating actions and their effects, and considering what is desirable. In this micro-project we extend our earlier work with epistemic reasoning: we reason about the knowledge and beliefs of the human, and thereby inform the AI system about what kind of proactive announcement to make. As in our previous work, we consider which states are desirable and which are not, and we also take into account how the state will evolve into the future if the AI system does not act. Now we additionally consider the human’s false beliefs. It is not necessary, and in fact not desirable, to make announcements correcting every false belief the human may have. For example, if the human is watching TV, she need not be informed that the salt is in the red container and the sugar is in the blue container while she believes it is the other way around. On the other hand, when the human starts cooking and is about to use the content of the blue container believing it is salt, then it is relevant for the AI system to inform the human of what is actually the case, to avoid undesirable outcomes. The example shows that we need to research not only what to announce but also when to announce it. The methods we use in this micro-project are knowledge-based; specifically, we employ Dynamic Epistemic Logic (DEL), a modal logic extending Epistemic Logic that allows modeling change in the knowledge and beliefs of an agent herself and of other agents. One week of visit is planned.
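The announcement-relevance idea in the salt/sugar example can be sketched as a toy decision rule. This is a minimal illustration only, not the project's DEL formalization: the dictionary-based world model and all names here are assumptions for the sake of the example.

```python
# Toy sketch (assumed model, not the project's DEL implementation):
# announce a false belief only when the human's planned action depends on it.

def belief_is_action_relevant(topic, planned_action):
    """A false belief matters only if the planned action depends on its topic."""
    return topic in planned_action["depends_on"]

def should_announce(human_beliefs, true_facts, planned_action):
    """Collect announcements for false beliefs that affect the planned action."""
    announcements = []
    for topic, believed in human_beliefs.items():
        actual = true_facts.get(topic)
        if believed != actual and belief_is_action_relevant(topic, planned_action):
            announcements.append((topic, actual))
    return announcements

# The example from the text: watching TV depends on nothing, so the false
# belief about the containers is not announced; cooking depends on the
# container contents, so the system speaks up.
beliefs = {"blue_container": "salt"}
facts = {"blue_container": "sugar"}
watching_tv = {"name": "watch_tv", "depends_on": []}
cooking = {"name": "cook", "depends_on": ["blue_container"]}

print(should_announce(beliefs, facts, watching_tv))  # []
print(should_announce(beliefs, facts, cooking))      # [('blue_container', 'sugar')]
```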

Results Summary

[Ongoing project] The project turned out to be much bigger than the scope of a micro-project, and there were interruptions. We are working on our DEL-based framework for proactive agents and expect a journal article submission in January next year. The project will continue at least until then and is expected to extend the current status of the work.

Contact person: Mohamed CHETOUANI (mohamed.chetouani@sorbonne-universite.fr)

Internal Partners:

  1. Sorbonne Université, Mohamed CHETOUANI
  2. Örebro University (ORU), Alessandro Saffiotti and Jasmin Grosinger

External Partners:

  1. SoftBank Robotics Europe (France), Sera Buyukgoz

 

We study proactive communicative behavior, where robots provide information to humans that may help them achieve desired outcomes or prevent possible undesired ones. Proactive behavior is an under-addressed area in AI and robotics, and proactive human-robot communication even more so. We combine the past expertise of Sorbonne Univ. (intention recognition) and Örebro Univ. (proactive behavior) to define proactive behavior based on the understanding of the user’s intentions, and then extend it to consider communicative actions based on second-order perspective awareness. We propose an architecture able to (1) estimate the human’s intended goal, (2) infer the robot’s and the human’s knowledge about foreseen possible outcomes of the intended goal, (3) detect opportunities for the robot to be proactive, and (4) select an action from the listed opportunities. The theoretical underpinning of this work will contribute to the study of theory of mind in HRI.

Results Summary

The goal of this micro-project is to develop a cognitive architecture able to generate proactive communicative behaviors during human-robot interactions. The general idea is to provide information to humans that may help them achieve desired outcomes or prevent possible undesired ones. Our work proposes a framework that generates and selects among opportunities for acting based on recognizing human intention, predicting environment changes, and reasoning about what is desirable in general. Our framework has two main modules to initiate proactive behavior: intention recognition and equilibrium maintenance.

The main achievements are:

  • Integration of two systems: user intention recognition and equilibrium maintenance in a generic architecture
  • Showing stability of the architecture to many users
  • Reasoning mechanism and 2nd order perspective awareness

The next steps aim to demonstrate knowledge repair, prevent outcomes caused by lack of knowledge, and improve trustworthiness, transparency, and legibility (user study).

Tangible Outcomes

  1. [arxiv] “Two ways to make your robot proactive: reasoning about human intentions, or reasoning about possible futures”. Sera Buyukgoz, Jasmin Grosinger, Mohamed Chetouani, Alessandro Saffiotti. arXiv:2205.05492 [cs.AI], 2022. DOI: 10.48550/ARXIV.2205.05492 https://arxiv.org/abs/2205.05492
  2. Program/code: Proactive Behavior Generation – Open Source System – Sera Buyukgoz, Mohamed Chetouani, Jasmin Grosinger, and Alessandro Saffiotti
    https://github.com/serabuyukgoz/proactive_robot_sim.git
  3. Program/code: Playground, Jupyter Notebook / Google Colab – Sera Buyukgoz, Mohamed Chetouani, Jasmin Grosinger, and Alessandro Saffiotti https://colab.research.google.com/drive/1yETA0iyKZb23790-uj9jEp6fIXfSfTSe?usp=sharing
  4. Video presentation summarizing the project

Contact person: Carmela Comito, CNR (carmela.comito@icar.cnr.it)

Internal Partners:

  1. Consiglio Nazionale delle Ricerche (CNR), Carmela Comito, carmela.comito@icar.cnr.it
  2. Umeå University (UMU), Nina Khairova, nina.khairova@umu.se
  3. Università di Bologna (UNIBO), Andrea Galassi, p.torroni@unibo.it
  4. TILDE  

 

In this project, we work with a Ukrainian academic refugee, to combine methods for semantic text similarity with expert human knowledge in a participatory way to develop a training corpus that includes news articles containing information on extremism and terrorism.

Results Summary

1) Collection and curation of two event-based datasets of news about the Russian-Ukrainian war.

The datasets support analysis of information alteration among news outlets (agencies and media), with a particular focus on Russian, Ukrainian, Western (EU and USA), and international news sources over the period from February to September 2022. We manually selected some critical events of the Russian-Ukrainian war. Then, for each event, we created a short list of language-specific keywords in Ukrainian, Russian, and English. Finally, besides scraping the selected sources, we also gathered articles using an external news intelligence platform, Event Registry, which keeps track of world events and analyzes media in real time. Using this platform we were able to collect more articles from a larger number of news outlets and expand the dataset with two distinct article sets. The final version of the RUWA Dataset is thus composed of two distinct partitions.

2) Development of an unsupervised methodology to establish whether news from the various parties are similar enough to say they reflect each other or, instead, are completely divergent, in which case one is likely not trustworthy. We focused on textual and semantic similarity (sentence-embedding methods such as Sentence-BERT), comparing the news and assessing whether they have a similar meaning. Another contribution of the proposed methodology is a comparative analysis of the different media sources in terms of sentiments and emotions, extracting subjective points of view as they are reported in texts by combining a variety of NLP-based AI techniques and sentence-embedding techniques. Finally, we applied NLP techniques to detect propaganda in news articles, relying on self-supervised NLP systems such as RoBERTa and existing propaganda datasets.
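The cross-source comparison step can be sketched as cosine similarity between sentence embeddings. This is a minimal illustration: it assumes embeddings (e.g., from Sentence-BERT) have already been computed, and the vectors below are toy placeholders rather than real model output.

```python
# Sketch of the embedding-similarity comparison between two outlets' coverage
# of the same event; the vectors are toy placeholders, not Sentence-BERT output.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for three outlets' articles about one event.
emb_outlet_a = [0.9, 0.1, 0.3]
emb_outlet_b = [0.8, 0.2, 0.4]
emb_outlet_c = [-0.5, 0.9, -0.1]

print(cosine_similarity(emb_outlet_a, emb_outlet_b))  # high -> similar coverage
print(cosine_similarity(emb_outlet_a, emb_outlet_c))  # low  -> divergent coverage
```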

3) Preliminary qualitative results:

When events concern civilians, all sources are very dissimilar, although Ukrainian and Western sources are more similar to each other. When events concern military targets, Russian and Ukrainian sources are very dissimilar from the other sources, and there is more propaganda in the Ukrainian and Russian ones.

Tangible Outcomes

  1. Github repository of datasets and software: https://github.com/fablos/ruwa-dataset

Contact person: Robin Welsch (robin.welsch@aalto.fi)

Internal Partners:

  1. LMU Munich, Florian Müller, florian.mueller@um.ifi.lmu.de,
  2. Aalto University, Robin Welsch, robin.welsch@aalto.fi

 

Generative models such as large language models (LLM) are a versatile AI tool for people from various domains, with varying backgrounds and goals. However, it is often challenging to formulate and refine suitable prompts that are the foundation for interaction with these models, especially for non-experts. To build human-centered AI interfaces, non-experts must be empowered to establish common ground when interacting with AI.

The aim of this project is to investigate how prompt-based AIs can establish a common ground of what is desirable on the user side and feasible on the model side. In this MP, motivated by a requirement analysis, we designed a study to find out how people varying in AI-expertise design prompts.

Results Summary

We ran a large-scale survey (1,500 respondents) to identify use patterns as a function of AI expertise and demographic data (the data will be made openly available soon). We are now designing a second study in which we re-invite a set of participants varying in AI expertise to gather their prompts in a set of representative interactive tasks. From these, we will derive design recommendations for human-AI collaboration support at different levels of AI expertise. We have published the study as a preprint, and it has already gained more than 10 citations.

Tangible Outcomes

  1. [arxiv] Gender, Age, and Technology Education Influence the Adoption and Appropriation of LLMs. Fiona Draxler, Daniel Buschek, Mikke Tavast, Perttu Hämäläinen, Albrecht Schmidt, Juhi Kulshrestha, Robin Welsch. https://arxiv.org/abs/2310.06556
  2. Data can be found here: https://osf.io/bdn9p/view_only=1a44dfd53cc1442a87bfc3f49560b112 
  3.  A pre-registration for the demographic study: https://aspredicted.org/VCN_CCS

Contact person: Agnes Grünerbl (agnes.gruenerbl@dfki.de)

Internal Partners:

  1. DFKI, Agnes Gruenerbl, Passant Elagroudy, and Paul Lukowicz  

External Partners:

  1. RPTU Landau, Thomas Lachmann and Jan Spilski
  2. Keio University, Giulia Barbareschi and Kai Kunze  

 

The main goal of the Humane AI Net project is to build up a network of AI research, mainly within Europe. The recent UbiCHAI tutorial (experimental methodologies for cognitive human augmentation), held at the UbiComp conference and co-sponsored by the Humane AI Net project, received great feedback and drew more attendees than initially expected. A follow-up workshop would therefore help to strengthen and extend the international connections Humane AI Net built during this UbiComp tutorial.

A conference that fits nicely with both the scope of the UbiCHAI tutorial and the broad range of Humane AI Net is the Augmented Humans conference.

The Augmented Humans community is a rather young but vibrant community that has been around for 10 years. With its goal of augmenting humans, it has a focus similar to that of the Humane AI Net community. As stated on their website: “The conference focuses on physical, cognitive, and perceptual augmentation of humans through digital technologies. The plural – humans – emphasizes the move towards technologies that enhance human capabilities beyond the individual and will have the potential for impact on a societal scale. The idea of augmenting the human intellect has a long tradition, the term was coined by Douglas Engelbart in 1962. Today, many of the technologies envisioned by Engelbart and others are commonplace, and looking towards the future, many technologies which amplify the human body and mind far beyond the original vision are within reach.”

The joint goals of Humane AI Net and Augmented Humans, namely social, cognitive, and perceptual augmentation of the human, make the Augmented Humans conference an ideal host for a Humane AI Net International Workshop on Ubiquitous Technologies for Cognitive enhancement of Human-centred AI (UbiCHAI), connecting both communities.

We aim for a full-day workshop connecting researchers in the different aspects of hybrid human-AI with cognitive and social science to augment the human, providing a platform where research can be presented and new ideas developed, spanning cognitive perception of AI and the fields of social behavior, health and mental care, subject didactics, digitalization, economy, and others.

Results Summary

This workshop was a collaboration with Cognitive and Developmental Psychology at RPTU Kaiserslautern and Media Design at Keio University, Japan. After an initial rejection from the Augmented Humans conference, we submitted the workshop idea to the MobileHCI conference, hosted in Melbourne, Australia, this year. One reason to host this workshop in Australia was to build up connections for the Humane AI Net network to Australia as well (after hosting events in Mexico and Japan). The workshop was quite successful and gained a lot of interest from conference attendees, including the local chairs and organizers of the conference, making it by far the largest workshop at MobileHCI (25+ attendees). A highlight was that we could win Prof. Thad Starner from Georgia Tech as a keynote speaker and attendee. The workshop theme was to look at methods to sense, simulate, influence, and evaluate cognitive functions using human-centered AI; cognitive functions refer to perception, attention, memory, language, problem solving, reasoning, and decision making. We had 8 paper submissions, and as a follow-up to the work done in the workshop, one of the organizers (Passant Elagroudy) was invited to attend a Dagstuhl seminar in 2025 about cognitive augmentation.

Tangible Outcomes

  1. Passant Elagroudy, Agnes Grünerbl, Giulia Barbareschi, Jan Spilski, Kai Kunze, Thomas Lachmann, Paul Lukowicz: mobiCHAI – 1st International Workshop on Mobile Cognition-Altering Technologies (CAT) using Human-Centered AI. MobileHCI (Companion) 2024: 31:1-31:5 https://dl.acm.org/doi/abs/10.1145/3640471.3680462 
  2. Workshop in MobileHCI’24 in Melbourne, Australia http://ai-enhanced-cognition.com/mobichai/ https://mobilehci.acm.org/2024/acceptedworkshops.php

Contact person: Haris Papageorgiou (haris@athenarc.gr)

Internal Partners:

  1. ATHENA RC, Haris Papageorgiou
  2. German Research Centre for Artificial Intelligence (DFKI), Julián Moreno Schneider
  3. OpenAIRE, Natalia Manola

 

SciNoBo is a micro-project focused on enhancing science communication, particularly on health and climate change topics, by integrating AI systems with science journalism. The project aims to assist science communicators, such as journalists and policymakers, by utilizing AI to identify, verify, and simplify complex scientific statements found in mass media. By grounding these statements in scientific evidence, the AI will help ensure accurate dissemination of information to non-expert audiences. This approach builds on prior work involving neuro-symbolic question-answering systems and aims to leverage advanced language models, argumentation mining, and text simplification technologies. Technologically, we build on our previous MP work on neuro-symbolic Q&A (*) and further exploit and advance recent developments in instruction fine-tuning of large language models, retrieval augmentation, and natural language understanding, specifically the NLP areas of argumentation mining, claim verification, and text (i.e., lexical and syntactic) simplification. The proposed MP addresses the topic of “Collaborative AI” by developing an AI system equipped with innovative NLP tools that can collaborate with science communicators (SCs) who communicate statements on health and climate change topics, grounding those statements in scientific evidence (interactive grounding) and providing explanations in simplified language, thus facilitating SCs in science communication. The innovative AI solution will be tested in a real-world scenario in collaboration with OpenAIRE by employing OpenAIRE research graph (ORG) services on Open Science publications.

Results Summary

The project is divided into two phases that ran in parallel. The main focus of Phase I was the construction of the data collections and the adaptations and improvements needed in PDF processing tools. Phase II dealt with the development of the two subsystems, claim analysis and text simplification, as well as their evaluation.

  • Phase I: Two collections of news and scientific publications were compiled in the areas of health and climate. The news collection was built from an existing dataset of news stories and ARC’s automated classification system in the areas of interest. The second collection, of publications, was provided by the OpenAIRE ORG service and further processed, managed, and properly indexed by the ARC SciNoBo toolkit. A small-scale annotation by DFKI supports the simplification subsystem.
  • Phase II: We developed, fine-tuned, and evaluated the two subsystems. Concretely, the “claim analysis” subsystem encompasses (i) ARC’s previous work on “claim identification”, (ii) a retrieval engine fetching relevant scientific publications (based on our previous mini-project), and (iii) an evidence-synthesis module indicating whether the fetched publications, and the scientists’ claims therein, support or refute the news claim under examination.

 

Tangible Outcomes

  1. Kotitsas, S., Kounoudis, P., Koutli, E., & Papageorgiou, H. (2024, March). Leveraging fine-tuned Large Language Models with LoRA for Effective Claim, Claimer, and Claim Object Detection. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 2540-2554).  https://aclanthology.org/2024.eacl-long.156/ 
  2. HCN dataset: news articles in the domain of Health and Climate Change. The dataset contains news articles, annotated with the major claim, claimer(s) and claim object(s). https://github.com/iNoBo/news_claim_analysis 
  3. Website demo: http://scinobo.ilsp.gr:1997/services 
  4. Services for claim identification and the retrieval engine http://scinobo.ilsp.gr:1997/live-demo?HFSpace=inobo-scinobo-claim-verification.hf.space 
  5. Service for the text simplification http://scinobo.ilsp.gr:1997/text-simplification 

Contact person: Jennifer Renoux (jennifer.renoux@oru.se)

Internal Partners:

  1. Örebro University (ORU), Jennifer Renoux
  2. Instituto Superior Técnico (IST), Ana Paiva

 

Social dilemmas are situations in which the interests of individuals conflict with those of the team, and in which maximum benefit can be achieved if enough individuals adopt prosocial behavior (i.e., focus on the team’s benefit at their own expense). In a human-agent team, the adoption of prosocial behavior is influenced by various features displayed by the artificial agent, such as transparency or small talk. One feature still unstudied is expository communication, meaning communication performed with the intent of providing factual information without favoring any party. We implemented a public goods game with information asymmetry (i.e., agents in the game do not have the same information about the environment) and performed a user study in which we manipulated the amount of information the artificial agent provides to the team and examined how varying levels of information increase or decrease human prosocial behavior.

Results Summary

This micro-project has led to the design and development of an experimental platform to test how communication from an artificial agent influences a human’s pro-social behavior.

The platform comprises the following components:

– a fully configurable mixed-motive public goods game, allowing a single human player to play with artificial agents, and an artificial “coach” giving feedback on the human’s actions. Configuration is done through JSON files (number and types of agents, type of feedback, game configuration, etc.). The game, called “Pest Control”, implements a public goods game in which players must prevent a spreading pest from reaching their farm while gathering as many coins as possible. An artificial agent can give feedback to the player. In this implementation, a single human player controls the game and four artificial agents play alongside them. This game has been used as the basis for a user study investigating the impact of expository information on a human’s prosociality.

– a set of questionnaires designed to evaluate the prosocial behavior of the human player during a game
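The dilemma underlying this kind of game follows the standard public goods payoff rule, sketched below. The parameter names and values are illustrative assumptions, not the Pest Control platform's API or configuration.

```python
# Sketch of a standard public goods payoff rule of the kind such games build
# on; parameter names and values are assumptions, not the platform's API.

def public_goods_payoffs(contributions, endowment=10.0, multiplier=1.6):
    """Each player keeps what they did not contribute, plus an equal share
    of the multiplied common pot."""
    n = len(contributions)
    pot_share = multiplier * sum(contributions) / n
    return [endowment - c + pot_share for c in contributions]

# One free-rider among five players: full contributors do worse individually
# than the free-rider, which is exactly the social dilemma.
payoffs = public_goods_payoffs([10, 10, 10, 10, 0])
print(payoffs)  # [12.8, 12.8, 12.8, 12.8, 22.8]
```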

Tangible Outcomes

  1. Pest control game demo https://jrenoux.github.io/pestcontrolgame/demo/index.html
  2. The Pest Control Game experimental platform – Jennifer Renoux*, Joana Campos, Filipa Correia, Lucas Morillo, Neziha Akalin, Ana Paiva https://github.com/jrenoux/pest-control-game-source
  3. Video presentation summarizing the project

Contact person: Albrecht Schmidt (albrecht.schmidt@um.ifi.lmu.de), Ana Paiva (paiva.a@gmail.com)

Internal Partners:

  1. Ludwig-Maximilians-Universität München (LMU), Albrecht Schmidt
  2. Instituto Superior Técnico (IST), Ana Paiva

External Partners:

  1. Tampere University, Finland, Kaisa Väänänen

 

Various types of robots are entering many contexts of life, such as homes, public spaces and factories. Social robots interact with people using conventions and interaction modalities that are prevalent in human-human interaction. While many social robots are anthropomorphic, non-anthropomorphic robots, such as vacuum cleaners, lawn mowers, and barista robots, are getting more common for everyday tasks. The purpose of this research is to explore how people’s interaction with non-anthropomorphic robots can benefit from human-like social cues.

The main research question of this micro-project is: what kind of social cues can be used for human-robot interaction (HRI) with non-anthropomorphic robots? Social interactions are supported by robots’ gestures, sounds, and visual cues. Different robotic parts such as arms or antennas can be designed to extend expressivity. The designs were tested with a specified scenario and user group, such as factory workers or elderly people, defined at the start of the project. Two parallel studies were conducted, one at LMU and one at IST. At LMU, a prototype or mock-up was designed and built. At IST, one of their robots was used to test the same scenario. The evaluations were explorations of different social cues, based primarily on qualitative user evaluation.

Results Summary

A collaborative study was conducted by researchers from LMU Munich, Tampere University, and Instituto Superior Técnico. The aim was to study human perceptions of social cues of non-anthropomorphic robots. A large online survey (>1,500 respondents) was run to explore whether people can consistently understand social cues from a non-anthropomorphic robot, such as lights, sounds, and gestures, for example in settings like hospitals, where mobile robots may be used for tasks like cleaning and delivery. Testing a variety of signals in different scenarios, the study reveals significant differences in how people interpret these cues, depending on the context and the type of signal. The findings suggest that robots may need to adapt their signals dynamically to improve human understanding, trust, and acceptance in diverse environments. The study will be extended in 2025 with a robot prototype and a laboratory study of LLM-based social cues that may enhance human-robot interaction. The project was successful and we will continue working on it beyond the lifetime of Humane AI Net.

Tangible Outcomes

  1. [under review] A paper has been submitted to ACM Intelligent User Interfaces (IUI’25), entitled “Context is Cue-cial: Assessing the Interpretation of Social Signals from Non-Anthropomorphic Robots in Different Contexts”. If accepted, it will be presented at the conference in March 2025.

 

Contact person: Frank Dignum (frank.dignum@umu.se)

Internal Partners:

  1. Umeå University (UMU), Frank Dignum
  2. Örebro University (ORU), Alessandro Saffiotti

 

Robots are already in wide use in industrial settings where the interactions with people are well structured and stable. Interactions with robots in home settings are notoriously more difficult. The context of interactions changes over time, depending on the people present, the time of day, the event going on, etc. In order to cope with all these factors creating uncertainty and ambiguity, people use practices, norms, conventions, etc. to normalize and package certain interactions into standard types of actions performed in order by the parties involved, e.g., getting coffee. Within this project we explored how the idea of social practices, which regulate interactions and create expectations in the parties involved, can be used to guide robots in their interactions with people. We explored a simple scenario with a Pepper robot to uncover the practical obstacles to using these concepts in robotics.

Results Summary

There is a first prototype of the use of social practices in the interaction between a robot and humans. It shows that following a social practice can help plan the interaction and can support recovery when the human deviates from the expected interaction. There is a first representation of social practices in a data structure usable by the robot planner, a first version of a planner using the social practice information, and an execution process that both executes the plan and monitors the progress of the interaction, adapting or re-planning the robot’s actions when necessary.

Tangible Outcomes

  1. Video presentation summarizing the project

Contact person: Eugenia Polizzi

Internal Partners:

  1. Consiglio Nazionale delle Ricerche (CNR), ISTC: Eugenia Polizzi
  2. Fondazione Bruno Kessler (FBK), Marco Pistore

 

The goal of the project is to investigate the role of social norms on misinformation in online communities. This knowledge can help identify new interventions in online communities that help prevent the spread of misinformation. To accomplish the task, the role of norms was explored by analyzing Twitter data gathered through the Covid19 Infodemics Observatory, an online platform developed to study the relationship between the evolution of the COVID-19 epidemic and the information dynamics on social media. This study can inform a further set of microprojects addressing norms in AI systems through theoretical modelling and social simulations.

Results Summary

In this MP, we diagnosed and visualized a map of existing social norms underlying fake news related to COVID-19. Through the analysis of millions of geolocated tweets collected during the COVID-19 pandemic, we identified structural and functional network features supporting an “illusion of the majority” on Twitter. Our results suggest that the majority of fake (and other) content related to the pandemic is produced by a minority of users, and that there is a structural segmentation into a small “core” of very active users responsible for a large amount of fake news and a larger “periphery” that mainly retweets the content produced by the core. This discrepancy between the size and identity of users involved in the production and diffusion of fake news suggests that a distorted perception of what users believe is the majority opinion may pressure users (especially those in the periphery) to comply with the group norm and further contribute to the spread of misinformation in the network.
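The "minority produces the majority" pattern can be illustrated with a small computation on per-user content counts. The toy counts below are invented for illustration and have no relation to the study's actual data.

```python
# Illustrative computation of content-production concentration: what share
# of all items comes from the most active fraction of users? Toy data only.

def share_of_top_producers(counts, top_fraction=0.1):
    """Fraction of all items produced by the most active `top_fraction` of users."""
    counts = sorted(counts, reverse=True)
    k = max(1, int(len(counts) * top_fraction))
    return sum(counts[:k]) / sum(counts)

# 10 users: one very active "core" account and nine low-activity "periphery"
# accounts that mostly retweet rather than produce.
counts = [90, 2, 2, 1, 1, 1, 1, 1, 1, 0]
print(share_of_top_producers(counts))  # 0.9 -> top 10% of users produce 90%
```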

Tangible Outcomes

  1. The voice of few, the opinions of many: evidence of social biases in Twitter COVID-19 fake news sharing – Piergiorgio Castioni, Giulia Andrighetto, Riccardo Gallotti, Eugenia Polizzi, Manlio De Domenico   https://arxiv.org/abs/2112.01304
  2. Video presentation summarizing the project

 

Contact person: Frank Dignum (dignum@cs.umu.se)

Internal Partners:

  1. Umeå University (UMU), Frank Dignum
  2. Instituto Superior Técnico (IST), Rui Prada, Maria Inês Lobo, and Diogo Rato

 

In order for systems to function effectively in cooperation with humans and other AI systems, they have to be aware of their social context. In their interactions they should take into account the social aspects of their context, but they can also use their social context to manage the interactions. Using the social context in the deliberation about interaction steps allows for an effective and focused dialogue geared towards a specific goal accepted by all parties in the interaction. In this project we started with the Dialogue Trainer system, which allows authoring very simple but directed dialogues to train (medical) students to have effective conversations with patients. Based on this tool, in which social context is taken into account only through the authors of the dialogue, we designed a system that will actually deliberate about the social context.

Results Summary

The MP addresses the following limitations of scripted dialogue training systems:

  • The dialogue is not self-made: players are unable to learn relevant communication skills
  • The dialogue is predetermined: the agent does not need to adapt to changes in the context
  • The dialogue tree is very large: the editor may have difficulty managing the dialogue

Therefore, this project’s goal is the creation of a flexible dialogue system, in which a socially aware conversational agent will deliberate and provide context-appropriate responses to users, based on defined social practices, identities, values, or norms. Scenarios in this dialogue system should be easy to author as well.

The main result is a Python prototype of a dialogue system with an architecture based on Cognitive Social Frames and Social Practices, whose dialogue scenarios are easy to edit in a widely used tool called Twine. We also submitted a workshop paper.
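The idea of context-dependent response selection can be sketched as follows. This is a hedged illustration in the spirit of Cognitive Social Frames and Social Practices; the frame structure, names, and matching rule are assumptions for the example, not the prototype's actual data model.

```python
# Illustrative sketch (assumed model, not the prototype's implementation):
# the agent selects a social practice from the perceived context, then
# produces the response that practice prescribes for the dialogue act.

PRACTICES = {
    "medical_consultation": {
        "greeting": "Good morning, what brings you in today?",
        "norms": ["be_formal", "ask_open_questions"],
    },
    "casual_chat": {
        "greeting": "Hey! How's it going?",
        "norms": ["be_informal"],
    },
}

def select_practice(context):
    """Pick the social practice matching the perceived context."""
    if context.get("setting") == "clinic" and context.get("role") == "doctor":
        return "medical_consultation"
    return "casual_chat"

def respond(context, dialogue_act="greeting"):
    """Deliberate over the active practice to produce a context-appropriate line."""
    practice = PRACTICES[select_practice(context)]
    return practice[dialogue_act]

print(respond({"setting": "clinic", "role": "doctor"}))
print(respond({"setting": "street"}))
```

The same scenario authored once can thus yield different utterances depending on the active frame, which is the flexibility the scripted dialogue trees lacked.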

Tangible Outcomes

  1. Socially Aware Interactions: From Dialogue Trees to Natural Language Dialogue Systems. I. Lobo, D. Rato, R. Prada, F. Dignum In: , et al. Chatbot Research and Design. CONVERSATIONS 2021. Lecture Notes in Computer Science(), vol 13171. Springer, Cham. https://link.springer.com/chapter/10.1007/978-3-030-94890-0_8
  2. Prototype of dialogue system – ines.lobo@tecnico.ulisboa.pt https://github.com/GAIPS/socially-aware-interactions
  3. Video presentation summarizing the project