We focus on prediction problems over event sequences. Such sequences are ubiquitous in scenarios involving human activities, most notably information diffusion in social media.

The scope of this micro-project is to investigate methods for learning deep probabilistic models based on latent representations that can explain and predict event evolution within social media. Latent variables are particularly promising in situations where the level of uncertainty is high, thanks to their ability to model the hidden causal relationships that characterize the data and ultimately guarantee robustness and trustworthiness in decisions. In addition, probabilistic models can efficiently support simulation, data generation, and various forms of collaborative human-machine reasoning.
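
To make the setting concrete, here is a minimal sketch of a latent-variable next-event predictor (an illustrative toy, not the models actually investigated in the micro-project; all names and dimensions are assumptions): a GRU encodes the event history, a Gaussian latent variable summarizes the hidden state, and a decoder parameterizes the distribution of the next event type.

    # Illustrative sketch: latent-variable next-event prediction (VAE-style).
    import torch
    import torch.nn as nn

    class LatentEventModel(nn.Module):
        def __init__(self, n_event_types, emb=32, hidden=64, z_dim=16):
            super().__init__()
            self.embed = nn.Embedding(n_event_types, emb)
            self.rnn = nn.GRU(emb, hidden, batch_first=True)
            self.to_mu = nn.Linear(hidden, z_dim)      # posterior mean
            self.to_logvar = nn.Linear(hidden, z_dim)  # posterior log-variance
            self.decoder = nn.Linear(z_dim, n_event_types)

        def forward(self, events):                     # events: (batch, seq) int64
            h, _ = self.rnn(self.embed(events))
            h_last = h[:, -1]                          # summary of the history
            mu, logvar = self.to_mu(h_last), self.to_logvar(h_last)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
            return self.decoder(z), mu, logvar         # next-event-type logits

Training such a model would combine a cross-entropy term on the next event with a KL regularizer on z; the latent variable is what supports the simulation and data-generation uses mentioned above.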

There are several reasons why this problem is challenging. We plan to study these challenges and provide an overview of the current advances, as well as a repository of available techniques and datasets that can be exploited for research and study.

Output

A journal paper reviewing the current issues and challenges

A repository of the existing methods, with their implementations and available datasets

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), Giuseppe Manco
  • INESC TEC, Joao Gama
  • Università di Pisa (UNIPI), Dino Pedreschi

Primary Contact: Giuseppe Manco, CNR

The goal of the project is to investigate the role of social norms in misinformation in online communities. This knowledge can help identify new interventions in online communities that prevent the spread of misinformation. To accomplish this task, the role of norms will be explored by analyzing Twitter data gathered through the Covid19 Infodemics Observatory, an online platform developed to study the relationship between the evolution of the COVID-19 epidemic and information dynamics on social media. This study can inform a further set of micro-projects addressing norms in AI systems through theoretical modelling and social simulations.

Output

Diagnosis and visualization map of existing social norms underlying fake news related to COVID-19

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR-ISTC), Eugenia Polizzi
  • Fondazione Bruno Kessler (FBK), Marco Pistore

Primary Contact: Eugenia Polizzi, CNR-ISTC

Main results of micro project:

Through the analysis of millions of geolocated tweets collected during the COVID-19 pandemic, we were able to identify structural and functional network features supporting an “illusion of the majority" on Twitter. Our results suggest that the majority of fake (and other) content related to the pandemic is produced by a minority of users, and that there is a structural segmentation into a small “core” of very active users responsible for a large amount of fake news and a larger "periphery" that mainly retweets the content produced by the core. This discrepancy between the size and identity of the users involved in the production and in the diffusion of fake news suggests that a distorted perception of the majority opinion may pressure users (especially those in the periphery) to comply with the perceived group norm and further contribute to the spread of misinformation in the network.
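
For illustration, a minimal sketch of the kind of concentration measurement behind these findings (hypothetical data layout, not the study's actual code): given tweets labeled by author and by whether they carry fake content, compute the share of fake items produced by the most active users.

    # Illustrative sketch: how concentrated is fake-news production?
    from collections import Counter

    def production_share(tweets, top_fraction=0.01):
        """tweets: iterable of (user_id, is_fake) pairs; returns the share
        of fake items produced by the top `top_fraction` most active users."""
        per_user = Counter(u for u, fake in tweets if fake)
        ranked = [u for u, _ in per_user.most_common()]
        core = set(ranked[: max(1, int(len(ranked) * top_fraction))])
        fake_total = sum(1 for u, fake in tweets if fake)
        fake_core = sum(1 for u, fake in tweets if fake and u in core)
        return fake_core / fake_total if fake_total else 0.0

    # Toy example: one very active "core" producer dominates.
    data = [("a", True)] * 8 + [("b", True), ("c", False)]
    print(production_share(data, top_fraction=0.34))   # -> 0.888...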

Contribution to the objectives of HumaneAI-net WPs

Top-down “debunking” interventions have been applied to limit the spread of fake news, but so far with limited success. Recognizing the role of social norms in the fight against misinformation may offer a novel approach to this challenge, shifting towards bottom-up solutions that help people correct misperceptions about how widely certain opinions are actually held. The results of this micro-project can inform new strategies to improve the quality of debates and counteract polarization in online communities (WP4). These results are also relevant for WP2 (T2.4), e.g., by giving insights into how human interactions can influence and be influenced by AI technology; for WP3 (T3.3), by offering tools to study the reactions of humans within hybrid human-AI systems; and for WP5 (T5.4), by evaluating the role of social norm dynamics in the responsible development of AI technology.

Tangible outputs

  • Publication: The voice of few, the opinions of many: evidence of social biases in Twitter COVID-19 fake news sharing – Piergiorgio Castioni, Giulia Andrighetto, Riccardo Gallotti, Eugenia Polizzi, Manlio De Domenico
    https://arxiv.org/abs/2112.01304

Study of emergent collective phenomena at the metropolitan level in personal navigation assistance systems under different recommendation policies, with respect to different collective optimization criteria (traffic fluidity, safety risks, environmental sustainability, urban segregation, response to emergencies, …).

Idea: (1) start from real big mobility data (massive datasets of GPS trajectories at the metropolitan level from onboard black boxes, recorded for insurance purposes); (2) identify major road-block events (accidents, extraordinary events, …) in the data; (3) simulate the effect (by modifying the data) of the users involved in a road block having been supported by navigation systems that employ policies to mitigate the impact of the block, replacing purely individual optimization with policies aiming at collective optimization (diversity, randomization, safety, resilience, etc.).

Compare the different policy choices in terms of their aggregate impact.
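
A minimal sketch of step (3) under illustrative assumptions (a toy networkx road graph with travel-time weights and no congestion model; the real pipeline works on GPS trajectories mapped onto the road network, and collective effects would only emerge once congestion is modelled): reroute trips around a blocked edge with an individually optimal policy versus a randomized, diversity-oriented one, and compare aggregate travel time.

    # Illustrative sketch: compare rerouting policies around a road block.
    import random
    from itertools import islice
    import networkx as nx

    def total_time(G, trips, blocked_edge, policy="shortest"):
        H = G.copy()
        H.remove_edge(*blocked_edge)               # simulate the road block
        total = 0.0
        for origin, dest in trips:
            # candidate routes, best first
            routes = list(islice(
                nx.shortest_simple_paths(H, origin, dest, weight="time"), 3))
            path = routes[0] if policy == "shortest" else random.choice(routes)
            total += nx.path_weight(H, path, weight="time")
        return total

    G = nx.grid_2d_graph(5, 5)                     # toy road network
    nx.set_edge_attributes(G, 1.0, "time")
    trips = [((0, 0), (4, 4))] * 10
    for policy in ("shortest", "randomized"):
        print(policy, total_time(G, trips, ((2, 2), (2, 3)), policy))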

Output

(Big-) data-driven simulations with scenario assessment

Scientific paper

Presentations

Project Partners:

  • Università di Pisa (UNIPI), Dino Pedreschi
  • Consiglio Nazionale delle Ricerche (CNR), Mirco Nanni
  • German Research Centre for Artificial Intelligence (DFKI), Paul Lukowicz

Primary Contact: Mirco Nanni, CNR

Social dilemmas are situations in which the interests of individuals conflict with those of the team, and in which the maximum benefit can be achieved if enough individuals adopt prosocial behavior (i.e., focus on the team’s benefit at their own expense). In a human-agent team, the adoption of prosocial behavior is influenced by various features displayed by the artificial agent, such as transparency or small talk. One feature still unstudied is expository communication, meaning communication performed with the intent of providing factual information without favoring any party.

We will implement a public goods game with information asymmetry (i.e., agents in the game do not have the same information about the environment) and perform a user study in which we manipulate the amount of information that the artificial agent provides to the team, examining how varying levels of information increase or decrease human prosocial behavior.
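
For concreteness, a minimal sketch of the payoff structure of a standard public goods game (illustrative parameters; the game developed in the project adds information asymmetry and agent feedback on top of such a core): contributions are pooled, multiplied by a factor r with 1 < r < n, and redistributed equally.

    # Illustrative public goods game payoffs.
    def payoffs(contributions, endowment=10.0, r=1.6):
        n = len(contributions)
        pot = sum(contributions) * r               # pooled and multiplied
        return [endowment - c + pot / n for c in contributions]

    # The dilemma: full contribution maximizes the group total, yet each
    # individual earns more by free-riding on the others.
    print(payoffs([10, 10, 10, 10]))  # everyone cooperates -> 16.0 each
    print(payoffs([0, 10, 10, 10]))   # free-rider earns 22.0, others 12.0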

Output

Submission to one of the following: International Journal of Social Robotics, Behaviour & Information Technology, AAMAS, or CHI. Submission to be sent by the end of August 2021.

Release of the game developed for the study on the AI4EU platform to allow other researchers to use it and extend it

Educational component on the ethical aspects of AI, giving a concrete example of how AI can “manipulate” a human

Presentations

Project Partners:

  • Örebro University (ORU), Jennifer Renoux
  • Instituto Superior Técnico (IST), Ana Paiva

Primary Contact: Jennifer Renoux, Örebro University

Main results of micro project:

This micro-project has led to the design and development of an experimental platform to test how communication from an artificial agent influences a human's prosocial behavior.
The platform comprises the following components:

– a fully configurable mixed-motive public goods game, allowing a single human player to play with artificial agents, plus an artificial "coach" giving feedback on the human's actions. Configuration is done through JSON files (number and types of agents, type of feedback, game configuration…); a hypothetical example is sketched after this list

– a set of questionnaires designed to evaluate the prosocial behavior of the human player during a game
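
Purely for illustration, a sketch of what such a JSON configuration might contain (all field names below are assumptions, not the platform's actual schema; see the repository for the real format):

    # Hypothetical configuration (illustrative field names only), written
    # out as a JSON file as the platform's description suggests.
    import json

    config = {
        "game": {"rounds": 10, "endowment": 10, "multiplier": 1.6},
        "agents": [
            {"type": "cooperator", "count": 2},
            {"type": "free_rider", "count": 1},
        ],
        "coach": {"feedback": "expository"},   # factual, party-neutral feedback
    }

    with open("game_config.json", "w") as f:
        json.dump(config, f, indent=2)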

Contribution to the objectives of HumaneAI-net WPs

This project contributes to WP3 and WP4.
The study carried out during the micro-project will give insight into how an artificial agent may influence a human's behavior in a social-dilemma context, thus allowing for the informed design and development of such artificial agents.
In addition, the platform developed will be made publicly available, allowing future researchers to experiment with other configurations and other types of feedback. By using a well-developed and consistent platform, the results of different studies will be more easily comparable.

Tangible outputs

  • Program/code: The Pest Control Game experimental platform – Jennifer Renoux*, Joana Campos, Filipa Correia, Lucas Morillo, Neziha Akalin, Ana Paiva
    https://github.com/jrenoux/humane-ai-sdia.git
  • Publication: International Journal of Social Robotics or Behaviour & Information Technology – Jennifer Renoux*, Joana Campos, Filipa Correia, Lucas Morillo, Neziha Akalin, Ana Paiva
    In preparation

Creation of stories and narratives from cultural heritage data

In this activity we tackle the fundamental heterogeneity of cultural heritage data by structuring the knowledge available from user experience with machine learning methods. The overall objective of this micro-project is to design new methodologies to extract and produce new information, to propose to scholars and practitioners new, even unexpected and surprising, connections and knowledge, and to make new sense of cultural heritage by connecting items and creating narratives with methods based on network theory and artificial intelligence.

Output

Publications about maps of Social Interactions across ages

Publication about an AI algorithm for the automatic classification of documents

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), Guido Caldarelli
  • Università di Pisa (UNIPI), Dino Pedreschi
  • German Research Centre for Artificial Intelligence (DFKI), Paul Lukowicz

Primary Contact: Guido Caldarelli, CNR

The recent polarisation of opinions in society has triggered a lot of research into the mechanisms involved. Personalised recommender systems embedded in social networks and online media have been hypothesized to contribute to polarisation through a mechanism known as algorithmic bias. In recent work [1] we introduced a model of opinion dynamics with algorithmic bias, where interaction is more frequent between similar individuals, simulating the online social network environment. In this project we plan to enhance this model by adding biased interaction with media, in an effort to understand whether this facilitates polarisation. Media interaction will be modelled as external fields that affect the population of individuals. Furthermore, we will study whether moderate media can be effective in counteracting polarisation.

[1] Sîrbu, A., Pedreschi, D., Giannotti, F. and Kertész, J., 2019. Algorithmic bias amplifies opinion fragmentation and polarization: A bounded confidence model. PloS one, 14(3), p.e0213246.
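
A minimal sketch of this model family (simplified and with illustrative parameters; see [1] for the exact formulation): a bounded-confidence model in which interaction partners are selected with probability proportional to a negative power of their opinion distance (the algorithmic bias), extended with a media "external field" that individuals interact with at some rate.

    # Illustrative bounded-confidence dynamics with algorithmic bias and
    # a media external field (a simplified sketch of the family in [1]).
    import random

    def step(x, eps=0.3, gamma=1.0, mu=0.5, media=0.9, p_media=0.1):
        """One interaction event on the opinion vector x (values in [0, 1])."""
        i = random.randrange(len(x))
        if random.random() < p_media:              # interact with a media outlet
            if abs(x[i] - media) < eps:            # bounded confidence vs. media
                x[i] += mu * (media - x[i])
            return
        # algorithmic bias: closer peers are more likely to be selected
        w = [0.0 if j == i or x[j] == x[i] else abs(x[i] - x[j]) ** -gamma
             for j in range(len(x))]
        if sum(w) == 0.0:
            return
        j = random.choices(range(len(x)), weights=w)[0]
        if abs(x[i] - x[j]) < eps:                 # bounded confidence vs. peer
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])

    opinions = [random.random() for _ in range(200)]
    for _ in range(50_000):
        step(opinions)                             # then inspect the clusters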

Output

A paper on opinion dynamics in a complex systems or interdisciplinary journal.

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), Giulio Rossetti
  • Central European University (CEU), Janos Kertesz
  • Università di Pisa (UNIPI), Alina Sirbu

Primary Contact: Giulio Rossetti, Consiglio Nazionale delle Ricerche, Pisa, Italy

Main results of micro project:

The project has run for less than 50% of its allocated time (it started on the 1st of July and will run for 4 months).

So far, the algorithmic bias model has been extended to integrate media effects, and preliminary correctness tests have been performed.
Moreover, the experimental settings have been defined, and a first analysis of preliminary results has been performed.

Contribution to the objectives of HumaneAI-net WPs

The recent polarization of opinions in society has triggered a lot of research into the mechanisms involved. Personalized recommender systems embedded into social networks and online media have been hypothesized to contribute to polarisation, through a mechanism known as algorithmic bias.

In recent work we have introduced a model of opinion dynamics with algorithmic bias, where interaction is more frequent between similar individuals, simulating the online social network environment.

In this project, we plan to enhance this model by adding the biased interaction with media, in an effort to understand whether this facilitates polarisation. Media interaction will be modelled as external fields that affect the population of individuals. Furthermore, we will study whether moderate media can be effective in counteracting polarisation.

Tangible outputs

Attachments

RPReplay-Final1634060242_Berlin.mov

The project aims at investigating systems composed of a large number of agents, each of either human or artificial type. The plan is to study, from both the static and the dynamic point of view, how such a two-population system reacts to changes in its parameters, especially in view of possible abrupt transitions. We plan to pay special attention to higher-order interactions such as three-body effects (H-H-H, H-H-AI, H-AI-AI and AI-AI-AI). We hypothesize that such interactions are crucial for understanding complex human-AI systems. We will analyze the static properties from both the direct and the inverse problem perspective. This study will pave the way for further investigation of the system's dynamic evolution by means of correlations and temporal motifs.
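
As a toy illustration of the setting (an assumption-laden sketch, not the project's actual model): an energy function for a two-population spin system in which pairwise and three-body couplings depend on the mix of human (H) and artificial (AI) agents involved.

    # Illustrative toy: two-population spin system with pairwise (J2) and
    # three-body (J3) couplings keyed by the unordered mix of labels.
    from itertools import combinations

    def energy(spins, labels, J2, J3):
        """spins: list of +/-1; labels: 'H' or 'AI' per agent;
        J2/J3: dicts mapping sorted label tuples to coupling strengths."""
        E = 0.0
        for i, j in combinations(range(len(spins)), 2):
            key = tuple(sorted((labels[i], labels[j])))
            E -= J2.get(key, 0.0) * spins[i] * spins[j]
        for i, j, k in combinations(range(len(spins)), 3):
            key = tuple(sorted((labels[i], labels[j], labels[k])))
            E -= J3.get(key, 0.0) * spins[i] * spins[j] * spins[k]
        return E

    labels = ["H"] * 4 + ["AI"] * 2
    spins = [1, 1, -1, 1, -1, 1]
    J2 = {("H", "H"): 1.0, ("AI", "H"): 0.5, ("AI", "AI"): 0.8}
    J3 = {("AI", "H", "H"): 0.2, ("H", "H", "H"): 0.1}
    print(energy(spins, labels, J2, J3))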

Output

1 paper in a complex systems (or physics or math) journal

Project Partners:

  • Università di Bologna (UNIBO), Pierluigi Contucci
  • Central European University (CEU), Janos Kertesz

Primary Contact: Pierluigi Contucci, University of Bologna

Attachments

Contucci MP-UNIBO-CEU_March17.mov

We envision a human-AI ecosystem in which AI-enabled devices act as proxies of humans and try to collectively learn a model in a decentralized way. Each device learns a local model that needs to be combined with the models learned by the other nodes, in order to improve both the local and the global knowledge. The challenge of doing so in a fully decentralized AI system entails understanding how to compose models coming from heterogeneous sources and, in the case of potentially untrustworthy nodes, deciding who can be trusted and why. In this micro-project, we focus on the specific scenario of model “gossiping” for accomplishing a decentralized learning task, and we study which models emerge from the combination of local models, where the combination takes into account the social relationships between the humans associated with the AI. We will use synthetic graphs to represent social relationships, and large-scale simulations for performance evaluation.

Output

Paper (most likely at conference/workshop, possibly journal)

Simulator (fallback plan if a paper cannot be produced at the end of the micro-project)

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), Andrea Passarella
  • Central European University (CEU), Gerardo Iniguez

Primary Contact: Andrea Passarella, CNR-IIT

Main results of micro project:

So far, the micro-project has developed a modular simulation framework to test decentralised machine learning algorithms on top of large-scale complex social networks. The framework is written in Python, exploiting state-of-the-art libraries such as networkx (to generate network models) and PyTorch (to implement ML models). The simulator is modular, as it accepts networks in the form of datasets as well as synthetic models. Local data are allocated to each node, which trains a local ML model of choice. Communication rounds are implemented, through which local models are aggregated and re-trained on local data. Benchmarks are included, namely federated learning and centralised learning. Initial simulation results assess the accuracy of decentralised learning (social AI gossiping) on Barabási-Albert networks, showing that social AI gossiping achieves accuracy comparable to the centralised and federated learning versions (which, however, rely on centralised elements).
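
A minimal sketch of one such communication round, under illustrative assumptions (uniform neighbour weights and identical model architectures on every node; the actual framework is more general and could weight by social tie strength): each node averages its parameters with those of its neighbours in the social graph before resuming local training.

    # Illustrative sketch of a decentralised "gossip" communication round.
    import copy
    import networkx as nx
    import torch
    import torch.nn as nn

    G = nx.barabasi_albert_graph(n=20, m=2)          # synthetic social network
    models = {v: nn.Linear(10, 2) for v in G.nodes}  # one local model per node

    def gossip_round(G, models):
        new_models = {}
        for v in G.nodes:
            group = [models[v]] + [models[u] for u in G.neighbors(v)]
            avg = copy.deepcopy(models[v])
            with torch.no_grad():
                for name, param in avg.named_parameters():
                    stacked = torch.stack(
                        [dict(m.named_parameters())[name] for m in group])
                    param.copy_(stacked.mean(dim=0))  # neighbourhood average
            new_models[v] = avg
        return new_models

    models = gossip_round(G, models)  # then each node re-trains on local data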

Contribution to the objectives of HumaneAI-net WPs

The simulation engine is modular and can be exploited (also by the other project partners) to test decentralised ML solutions. The weighted network used to connect nodes can represent social relationships between users; thus, one of the main objectives of the obtained results is to understand the effects of the social network on decentralised ML tasks.

Tangible outputs

Attachments

MP_social_AI_gossiping_CNR_CEU_presentation.pptx_Berlin.pps

In order for AI systems to function effectively in cooperation with humans and other AI systems, they have to be aware of their social context. Especially in their interactions they should take the social aspects of their context into account, but they can also use the social context to manage the interactions. Using the social context in the deliberation about interaction steps will allow for an effective and focused dialogue that is geared towards a specific goal accepted by all parties in the interaction.

In this project we start from the Dialogue Trainer system, which allows authoring very simple but directed dialogues to train (medical) students to hold effective conversations with patients. Based on this tool, in which the social context is taken into account only through the authors of the dialogue, we will design a system that actually deliberates about the social context.

Output

software prototype for a flexible dialogue trainer system

CONVERSATIONS workshop paper 2021

Presentations

Project Partners:

  • Umeå University (UMU), Frank Dignum
  • Instituto Superior Técnico (IST), Rui Prada

Primary Contact: Frank Dignum, Umeå University

Main results of micro project:

The "Socially Aware Interactions" micro-project aims to address the following limitations of scripted dialogue training systems:

– Dialogue is not self-made: players are unable to learn relevant communication skills
– Dialogue is predetermined: the agent does not need to adapt to changes in the context
– Dialogue tree is very large: the editor may have difficulty managing the dialogue

Therefore, this project's goal is the creation of a flexible dialogue system, in which a socially aware conversational agent will deliberate and provide context-appropriate responses to users, based on defined social practices, identities, values, or norms. Scenarios in this dialogue system should be easy to author as well.

The main result is a Python prototype of a dialogue system with an architecture based on Cognitive Social Frames and Social Practices, whose dialogue scenarios are easy to edit in Twine, a widely used authoring tool. We also submitted a workshop paper.
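
As a rough illustration of the deliberation idea (a toy under assumed names and structures, not the prototype's actual architecture or API): candidate responses carry social-practice tags, and the agent filters them against the currently active social frame before answering.

    # Toy sketch of socially aware response selection (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Response:
        text: str
        practice: str       # social practice the response fits, e.g. "anamnesis"
        formality: float    # 0 = casual, 1 = formal

    def deliberate(responses, active_practice, min_formality):
        """Keep responses fitting the active social practice and context."""
        fitting = [r for r in responses
                   if r.practice == active_practice
                   and r.formality >= min_formality]
        return max(fitting, key=lambda r: r.formality, default=None)

    options = [
        Response("Hey, what's up?", "small_talk", 0.1),
        Response("Could you describe your symptoms?", "anamnesis", 0.8),
        Response("Tell me what hurts.", "anamnesis", 0.4),
    ]
    choice = deliberate(options, active_practice="anamnesis", min_formality=0.5)
    print(choice.text)  # -> "Could you describe your symptoms?"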

Contribution to the objectives of HumaneAI-net WPs

First, the dialogue system's flexibility and context-awareness will make the conversational agent appear more natural/realistic to the user, which is significant for the "Human-AI collaboration and interaction" work package.

Furthermore, in the system, the agent and the human user, besides having their own individual goals, are also attempting to achieve a dialogue goal together (e.g., in an anamnesis scenario, the main goal could be to obtain/give a diagnosis), which satisfies the "Societal AI" work package's goal "AI systems' individual vs collective goals".

This last work package also includes the goal "Multimodal perception of awareness, emotions, and attitudes", which is met because the agent adapts to changes in context, deliberates on top of it, and becomes more socially aware.

Tangible outputs