The goal of the project is to investigate the role of social norms on misinformation in online communities. This knowledge can help identify new interventions in online communities that help prevent the spread of misinformation. To accomplish the task, the role of norms will be explored by analyzing Twitter data gathered through the Covid19 Infodemics Observatory, an online platform developed to study the relationship between the evolution of the COVID-19 epidemic and the information dynamics on social media. This study can inform a further set of microprojects addressing norms in AI systems through theoretical modelling and social simulations.

Output

Diagnosis and visualization map of existing social norms underlying fake news related to COVID-19

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR-ISTC), Eugenia Polizzi
  • Fondazione Bruno Kessler (FBK), Marco Pistore

 

Primary Contact: Eugenia Polizzi, CNR-ISTC

Main results of micro project:

Through the analysis of millions of geolocated tweets collected during the Covid-19 pandemic, we identified structural and functional network features supporting an “illusion of the majority” on Twitter. Our results suggest that most fake (and other) content related to the pandemic is produced by a minority of users, and that the network is structurally segmented into a small “core” of very active users responsible for a large amount of fake news and a larger “periphery” that mainly retweets content produced by the core. This discrepancy between the size and identity of the users who produce fake news and those who diffuse it suggests that a distorted perception of what users believe to be the majority opinion may pressure users (especially those in the periphery) to comply with the perceived group norm and further contribute to the spread of misinformation in the network.

Contribution to the objectives of HumaneAI-net WPs

Top-down “debunking” interventions have been applied to limit the spread of fake news, but so far with limited success. Recognizing the role of social norms in the fight against misinformation may offer a novel approach to this challenge, shifting to bottom-up solutions that help people correct misperceptions about how widely certain opinions are truly held. The results of this microproject can inform new strategies to improve the quality of debates and counteract polarization in online communities (WP4). These results are also relevant for WP2 (T2.4), e.g., by giving insights into how human interactions influence and are influenced by AI technology; for WP3 (T3.3), by offering tools to study the reactions of humans within hybrid human-AI systems; and for WP5 (T5.4), by evaluating the role of social norm dynamics in the responsible development of AI technology.

Tangible outputs

  • Publication: The voice of few, the opinions of many: evidence of social biases in Twitter COVID-19 fake news sharing – Piergiorgio Castioni, Giulia Andrighetto, Riccardo Gallotti, Eugenia Polizzi, Manlio De Domenico
    https://arxiv.org/abs/2112.01304

Study of emergent collective phenomena at metropolitan level in personal navigation assistance systems with different recommendation policies, with respect to different collective optimization criteria (fluidity of traffic, safety risks, environmental sustainability, urban segregation, response to emergencies, …).

Idea: (1) start from real big mobility data (massive datasets of GPS trajectories at the metropolitan level, recorded by onboard black boxes for insurance purposes); (2) identify major road-block events (accidents, extraordinary events, …) in the data; (3) simulate the effect (by modifying the data) of the users involved in a road block having been supported by navigation systems that employ policies to mitigate the impact of the block, i.e., policies aiming at collective rather than individual optimization (diversity, randomization, safety, resilience, etc.).

The different policy choices will then be compared in terms of their aggregated impact.

Output

(Big-) data-driven simulations with scenario assessment

Scientific paper

Presentations

Project Partners:

  • Università di Pisa (UNIPI), Dino Pedreschi
  • Consiglio Nazionale delle Ricerche (CNR), Mirco Nanni
  • German Research Centre for Artificial Intelligence (DFKI), Paul Lukowicz

 

Primary Contact: Mirco Nanni, CNR

Social dilemmas are situations in which the interests of the individuals conflict with those of the team, and in which maximum benefit can be achieved if enough individuals adopt prosocial behavior (i.e. focus on the team’s benefit at their own expense). In a human-agent team, the adoption of prosocial behavior is influenced by various features displayed by the artificial agent, such as transparency, or small talk. One feature still unstudied is expository communication, meaning communication performed with the intent of providing factual information without favoring any party.

We will implement a public goods game with information asymmetry (i.e. agents in the game do not have the same information about the environment) and perform a user-study in which we will manipulate the amount of information that the artificial agent provides to the team, and examine how varying levels of information increase or decrease human prosocial behavior.
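To make the dilemma concrete, the payoff structure of a standard linear public goods game can be sketched as follows (a minimal sketch with illustrative parameter values, not the actual game implemented for the study):

```python
def payoffs(contributions, multiplier=1.6, endowment=10):
    """Linear public goods game: each player keeps the part of the
    endowment they did not contribute, plus an equal share of the
    multiplied common pool."""
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]

# With four players, full cooperation beats full defection...
everyone = payoffs([10, 10, 10, 10])    # each earns 16
# ...but a lone defector earns more than the cooperators (22 vs 12),
# which is exactly what makes this a social dilemma.
one_defects = payoffs([0, 10, 10, 10])
```

Because each contributed unit returns only multiplier/n to its contributor (0.4 here) but multiplier to the group, individual and collective interests diverge, which is the tension the artificial agent's expository communication is meant to influence.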

Output

Submission to one of the following: International Journal of Social Robotics, Behaviour & Information Technology, AAMAS, or CHI. Submission to be sent by the end of August 2021.

Release of the game developed for the study on the AI4EU platform to allow other researchers to use it and extend it

Educational component on the ethical aspects of AI, giving a concrete example of how AI can “manipulate” a human

Presentations

Project Partners:

  • Örebro University (ORU), Jennifer Renoux
  • Instituto Superior Técnico (IST), Ana Paiva

 

Primary Contact: Jennifer Renoux, Örebro University

Main results of micro project:

This micro-project has led to the design and development of an experimental platform to test how communication from an artificial agent influences a human’s pro-social behavior.
The platform comprises the following components:

– a fully configurable mixed-motive public goods game, allowing a single human player to play with artificial agents, and an artificial “coach” giving feedback on the human’s actions. Configuration is done through JSON files (number and types of agents, type of feedback, game configuration, …)

– a set of questionnaires designed to evaluate the prosocial behavior of the human player during a game

Contribution to the objectives of HumaneAI-net WPs

This project contributes to WP3 and WP4.
The study carried out during the micro-project will give insight into how an artificial agent may influence a human's behavior in a social-dilemma context, thus allowing for the informed design and development of such artificial agents.
In addition, the platform developed will be made publicly available, allowing future researchers to experiment with other configurations and other types of feedback. By using a well-developed and consistent platform, the results of different studies will be more easily comparable.

Tangible outputs

  • Program/code: The Pest Control Game experimental platform – Jennifer Renoux*, Joana Campos, Filipa Correia, Lucas Morillo, Neziha Akalin, Ana Paiva
    https://github.com/jrenoux/humane-ai-sdia.git
  • Publication: International Journal of Social Robotics or Behaviour & Information Technology – Jennifer Renoux*, Joana Campos, Filipa Correia, Lucas Morillo, Neziha Akalin, Ana Paiva
    In preparation

Creation of stories and narrative from data of Cultural Heritage

In this activity we tackle the fundamental heterogeneity of cultural heritage data by structuring the knowledge available from user experience and machine learning methods. The overall objective of this microproject is to design new methodologies to extract and produce new information, to propose to scholars and practitioners new, even unexpected and surprising, connections and knowledge, and to make new sense of cultural heritage by creating narratives with methods based on network theory and artificial intelligence.

Output

Publications about maps of Social Interactions across ages

Publication about AI algorithm for the automatic classification of documents

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), Guido Caldarelli
  • Università di Pisa (UNIPI), Dino Pedreschi
  • German Research Centre for Artificial Intelligence (DFKI), Paul Lukowicz

Primary Contact: Guido Caldarelli, CNR

Recent polarisation of opinions in society has triggered a lot of research into the mechanisms involved. Personalised recommender systems embedded into social networks and online media have been hypothesized to contribute to polarisation, through a mechanism known as algorithmic bias. In a recent work [1] we have introduced a model of opinion dynamics with algorithmic bias, where interaction is more frequent between similar individuals, simulating the online social network environment. In this project we plan to enhance this model by adding the biased interaction with media, in an effort to understand whether this facilitates polarisation. Media interaction will be modelled as external fields that affect the population of individuals. Furthermore, we will study whether moderate media can be effective in counteracting polarisation.

[1] Sîrbu, A., Pedreschi, D., Giannotti, F. and Kertész, J., 2019. Algorithmic bias amplifies opinion fragmentation and polarization: A bounded confidence model. PloS one, 14(3), p.e0213246.
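The core update rule of the model in [1] can be sketched as follows (a minimal pure-Python sketch under our reading of the paper; parameter names and the small regularisation term `delta` avoiding division by zero are illustrative, and this is not the authors' code):

```python
import random

def biased_bc_step(opinions, eps=0.3, gamma=1.0, mu=0.5, delta=1e-4):
    """One interaction of the bounded-confidence model with algorithmic bias.

    A random agent i picks a partner j with probability proportional to
    |x_i - x_j| ** -gamma, so a larger gamma means a stronger preference
    for like-minded partners (the algorithmic bias; gamma = 0 recovers
    the classical bounded-confidence model). If the two opinions differ
    by less than the confidence bound eps, both agents move towards each
    other by a factor mu.
    """
    n = len(opinions)
    i = random.randrange(n)
    weights = [0.0 if j == i else (abs(opinions[i] - x) + delta) ** -gamma
               for j, x in enumerate(opinions)]
    j = random.choices(range(n), weights=weights)[0]
    if abs(opinions[i] - opinions[j]) < eps:
        xi, xj = opinions[i], opinions[j]
        opinions[i] += mu * (xj - xi)
        opinions[j] += mu * (xi - xj)
    return opinions

random.seed(0)
ops = [random.random() for _ in range(100)]
for _ in range(10_000):
    biased_bc_step(ops, gamma=1.5)
```

The planned media extension would add external-field agents whose opinions are fixed, interacting with the population through the same biased selection rule.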

Output

A paper on opinion dynamics in a complex systems or interdisciplinary journal.

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), Giulio Rossetti
  • Central European University (CEU), Janos Kertesz
  • Università di Pisa (UNIPI), Alina Sirbu

 

Primary Contact: Giulio Rossetti, Consiglio Nazionale delle Ricerche, Pisa, Italy

Main results of micro project:

The project has run for less than 50% of its allocated time (it started on the 1st of July and will run for 4 months).

So far, the algorithmic bias model has been extended to integrate media effects and preliminary correctness tests have been performed.
Moreover, the experimental settings have been finalized and a first preliminary analysis of initial results has been performed.

Contribution to the objectives of HumaneAI-net WPs

The recent polarization of opinions in society has triggered a lot of research into the mechanisms involved. Personalized recommender systems embedded into social networks and online media have been hypothesized to contribute to polarisation, through a mechanism known as algorithmic bias.

In recent work we have introduced a model of opinion dynamics with algorithmic bias, where interaction is more frequent between similar individuals, simulating the online social network environment.

In this project, we plan to enhance this model by adding the biased interaction with media, in an effort to understand whether this facilitates polarisation. Media interaction will be modelled as external fields that affect the population of individuals. Furthermore, we will study whether moderate media can be effective in counteracting polarisation.

Tangible outputs

Attachments

RPReplay-Final1634060242_Berlin.mov

This proposal aims to conduct empirical research that explores the social and public attitudes of individuals towards AI and robots.

AI and robots will enter many more aspects of our daily life than the average citizen is aware of, while they are already organizing specific domains such as work, health, security, politics and manufacturing. Along with technological research, it is fundamental to grasp and gauge the social implications of these processes and their acceptance by a wider audience.

Some of the research questions are:

Do citizens have a positive or negative attitude about the impact of AI?

Will they really trust a driverless car, or will they passively accept a loan or insurance denial based on an algorithmic decision? Do states alone have the right and expertise to regulate emerging technology and digital infrastructures? What about technology governance?

What are the dominant AI narratives in the general public?

Output

Two scientific papers in journals such as (depending on peer reviewing guidelines):

  • AI & Society (https://www.springer.com/journal/146)
  • Journal of Artificial Intelligence Research (https://www.jair.org/index.php/jair)
  • Big Data and Society (https://journals.sagepub.com/home/bds)

Other potential journals (Social Science Computer Review, Public Understanding of Science) will also be considered.

Public presentations at scientific conferences (e.g. HICSS 2022, AAAI) and more general-interest conferences in the second half of 2021 and the first half of 2022.

Presentations

Project Partners:

  • Università di Bologna (UNIBO), Laura Sartori
  • Umeå University (UMU), Andreas Theodorou
  • Consiglio Nazionale delle Ricerche (CNR), Fosca Giannotti

 

Primary Contact: Laura Sartori, UNIBO – Dept of Political and Social sciences

Main results of micro project:

The Bologna survey collected around 6000 questionnaires. Data analysis on the Bologna case study revealed a quite articulated picture in which variables such as gender, generation and competence proved crucial to the different understanding of and knowledge about AI.
AI narratives vary considerably across social groups, reflecting different degrees of awareness and social acceptance.
The UMU and CNR surveys had more problems in the collection phase, although the implementation and launch of the surveys were smooth and on time. A second round will take place in September, hopefully with a more fruitful data collection.

2 papers submitted to international journals.

Sartori and Theodorou (under review) A sociotechnical perspective for the future of AI: narratives, inequalities, and human control, in Ethics and Information technology.

Sartori and Bocca (under review), Minding the gap(s): public perceptions of AI and socio-technical imaginaries, in AI&Society.

Contribution to the objectives of HumaneAI-net WPs

The microproject addresses issues related to social trust, cohesion, and public perception. It has clarified how and to what degree AI is accepted by the general public and highlighted the different levels of public acceptance of AI across social groups.
The goals of T4.3 are met since the Sartori and Bocca article highlights how perceptions and narratives differ by the main sociodemographic variables. Notable is the (expected) gender effect: women report less knowledge of, and lower trust in, AI systems. Especially when it comes to depicting future and dystopian scenarios, women tend to fear technologies more than men.

The goals of T5.3 are met by the work presented in Sartori and Theodorou.
Focusing on the main challenges associated with AI as autonomous systems spread within society, the article points to biases and unfairness as being among the major challenges to be addressed from a sociotechnical perspective.

Tangible outputs

  • Publication: A sociotechnical perspective for the future of AI: narratives, inequalities, and human control, in Ethics and Information technology. – Sartori, Laura
    Theodorou, A.
    Accepted in Ethics and Information Technology https://www.springer.com/journal/10676
  • Publication: Minding the gap(s): public perceptions of AI and socio-technical imaginaries – Sartori, Laura
    Bocca, Giulia
    Submitted to AI&Society, https://www.springer.com/journal/146

Attachments

UNIBO_sartori_What idea of AI_141021_Berlin.pptx

In this project we will investigate whether normative behavior can be detected in Facebook groups. In a first step, we will hypothesize about possible norms that could lead to a group becoming more extreme on social media, or about whether groups that become more extreme develop certain norms that distinguish them from other groups and that could be detected. An example of such a norm could be that a (self-proclaimed) leader of a group is massively supported by retweets, likes or affirmative messages, along with evidence of verbal sanctioning of counter-normative replies. Simulations and analyses of historical Facebook data (using manual detection in specific case studies and, more broadly, NLP) will help reveal the existence of normative behavior and its potential change over time.

Output

Report describing guidelines to detect normative behavior on social media platforms

Presentations

Project Partners:

  • Umeå University (UMU), Frank Dignum
  • Consiglio Nazionale delle Ricerche (CNR), Eugenia Polizzi

 

Primary Contact: Frank Dignum, Umeå University

Main results of micro project:

The project delivered detailed analyses of the tweets around the USA elections and subsequent riots. Although we expected to discover patterns in the tweets indicating more extreme behavior, it appears that extremist expressions are quickly banned from Twitter and find a home on more niche social platforms (in this case Parler). Thus the main conclusion of this project is that we need to find the connections between users on different social media platforms in order to track extreme behavior.

Contribution to the objectives of HumaneAI-net WPs

In order to see how individuals might contribute to behavior that is not in the interest of society, we cannot analyze a single social media platform. More extremist expressions, especially, quickly move from mainstream social media to niche platforms, which can themselves change rapidly over time. Thus the connection between individual and societal goals is difficult to observe by analyzing data from a single social media platform. On the other hand, it is very difficult to link users across platforms.

Tangible outputs

  • Other: Identification of radical behavior in Parler groups – Frank Dignum
  • Other: Characterizing the language use of radicalized communities detected on Parler – Frank Dignum

The project aims at investigating systems composed of a large number of agents of either human or artificial type. The plan is to study, from both the static and the dynamical point of view, how such a two-population system reacts to changes in its parameters, especially in view of possible abrupt transitions. We plan to pay special attention to higher-order interactions such as three-body effects (H-H-H, H-H-AI, H-AI-AI and AI-AI-AI). We hypothesize that such interactions are crucial for understanding complex human-AI systems. We will analyze the static properties from both the direct and the inverse problem perspective. This study will pave the way for further investigation of the system's dynamic evolution by means of correlations and temporal motifs.

Output

1 paper in a complex systems (or physics or math) journal

Project Partners:

  • Università di Bologna (UNIBO), Pierluigi Contucci
  • Central European University (CEU), Janos Kertesz

 

Primary Contact: Pierluigi Contucci, University of Bologna

Attachments

Contucci MP-UNIBO-CEU_March17.mov

We envision a human-AI ecosystem in which AI-enabled devices act as proxies of humans and try to learn collectively a model in a decentralized way. Each device will learn a local model that needs to be combined with the models learned by the other nodes, in order to improve both the local and global knowledge. The challenge of doing so in a fully-decentralized AI system entails understanding how to compose models coming from heterogeneous sources and, in case of potentially untrustworthy nodes, decide who can be trusted and why. In this micro-project, we focus on the specific scenario of model “gossiping” for accomplishing a decentralized learning task and we study what models emerge from the combination of local models, where combination takes into account the social relationships between the humans associated with the AI. We will use synthetic graphs to represent social relationships, and large-scale simulation for performance evaluation.

Output

Paper (most likely at conference/workshop, possibly journal)

Simulator (fallback plan if a paper cannot be produced at the end of the micro-project)

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), Andrea Passarella
  • Central European University (CEU), Gerardo Iniguez

 

Primary Contact: Andrea Passarella, CNR-IIT

Main results of micro project:

As of now, the micro project has developed a modular simulation framework to test decentralised machine learning algorithms on top of large-scale complex social networks. The framework is written in Python, exploiting state-of-the-art libraries such as networkx (to generate network models) and Pytorch (to implement ML models). The simulator is modular, as it accepts networks in the form of datasets as well as synthetic models. Local data are allocated on each node, which trains a local ML model of choice. Communication rounds are implemented, through which local models are aggregated and re-trained based on local data. Benchmarks are included, namely federated learning and centralised learning. Initial simulation results have been derived, to assess the accuracy of decentralised learning (social AI gossiping) on Barabasi-Albert networks, showing that social AI gossiping is able to achieve comparable accuracy with respect to centralised and federated learning versions (which rely on centralised elements, though).
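The aggregation step at the heart of such a decentralised scheme can be illustrated with a minimal pure-Python sketch (uniform neighbourhood averaging on a hard-coded toy graph; the actual simulator uses networkx-generated topologies and PyTorch models, and its aggregation weights may differ):

```python
def gossip_round(params, neighbors):
    """One synchronous communication round: each node replaces its
    parameter vector with the uniform average of its own vector and
    those of its neighbours."""
    new = {}
    for node, vec in params.items():
        group = [vec] + [params[n] for n in neighbors[node]]
        new[node] = [sum(vals) / len(group) for vals in zip(*group)]
    return new

# Toy example: four nodes on a ring, one-dimensional "models".
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
params = {0: [0.0], 1: [4.0], 2: [8.0], 3: [4.0]}
for _ in range(50):
    params = gossip_round(params, neighbors)
# On this regular graph all nodes converge to the global mean (4.0).
```

In the full pipeline each round would interleave this aggregation with local re-training on each node's own data; on heterogeneous graphs such as Barabási-Albert networks, weighting neighbours (e.g., by social tie strength) changes the fixed point the models converge to.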

Contribution to the objectives of HumaneAI-net WPs

The simulation engine is modular and can be exploited (also by the other project partners) to test decentralised ML solutions. The weighted network used to connect nodes can represent social relationships between users, and thus one of the main objectives of the obtained results is to understand the effects of the social network on decentralised ML tasks.

Tangible outputs

Attachments

MP_social_AI_gossiping_CNR_CEU_presentation.pptx_Berlin.pps

In this activity we tackle the fundamental heterogeneity of cultural heritage data by structuring the knowledge available from user experience and machine learning methods. The overall objective of this microproject is to design new methodologies to extract and produce new information, to propose to scholars and practitioners new, even unexpected and surprising, connections and knowledge, and to make new sense of cultural heritage by creating narratives with methods based on network theory and artificial intelligence.

Output

Database usable from the people in the Consortium as a pilot case

Paper on the topic

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), Guido Caldarelli
  • Consiglio Nazionale delle Ricerche (CNR), Antonio Scala
  • Consiglio Nazionale delle Ricerche (CNR), Emilia La Nave

 

Primary Contact: Guido Caldarelli, CNR/ISC

In order for systems to function effectively in cooperation with humans and other AI systems, they have to be aware of their social context. In their interactions especially, they should take into account the social aspects of their context, but they can also use their social context to manage the interactions. Using the social context in deliberation about interaction steps will allow for an effective and focused dialogue geared towards a specific goal that is accepted by all parties in the interaction.

In this project we will start with the Dialogue Trainer system, which allows for authoring very simple but directed dialogues to train (medical) students to have effective conversations with patients. Based on this tool, in which social context is taken into account only through the authors of the dialogue, we will design a system that actually deliberates about the social context.

Output

software prototype for a flexible dialogue trainer system

CONVERSATIONS workshop paper 2021

Presentations

Project Partners:

  • Umeå University (UMU), Frank Dignum
  • Instituto Superior Técnico (IST), Rui Prada

 

Primary Contact: Frank Dignum, Umeå University

Main results of micro project:

The "Socially Aware Interactions" micro-project aims to address the following limitations of scripted dialogue training systems:

– Dialogue is not self-made: players are unable to learn relevant communication skills
– Dialogue is predetermined: agent does not need to adapt to changes in the context
– Dialogue tree is very large: editor may have difficulty managing the dialogue

Therefore, this project's goal is the creation of a flexible dialogue system, in which a socially aware conversational agent will deliberate and provide context-appropriate responses to users, based on defined social practices, identities, values, or norms. Scenarios in this dialogue system should be easy to author as well.

The main result is a Python prototype of a dialogue system with an architecture based on Cognitive Social Frames and Social Practices, whose dialogue scenarios are easy to edit in Twine, a widely used authoring tool. We also submitted a workshop paper.

Contribution to the objectives of HumaneAI-net WPs

First, the dialogue system's flexibility and context-awareness will make the conversational agent appear more natural/realistic to the user, which is significant for the "Human-AI collaboration and interaction" work package.

Furthermore, in the system, the agent and the human user, besides having their own individual goals, are also attempting to achieve a dialogue goal together (e.g., in an anamnesis scenario, the main goal could be to obtain/give a diagnosis), which satisfies the "Societal AI" work package's goal "AI systems' individual vs collective goals".

This last work package includes the goal "Multimodal perception of awareness, emotions, and attitudes" as well, which is met because the agent adapts to changes in context, deliberating on top of it, and becoming more socially aware.

Tangible outputs