Build human-in-the-loop intelligent systems for the geolocation of social media images in natural disasters

Social media generate large amounts of near-real-time data which can prove extremely valuable in an emergency, especially for providing information within the first 72 hours after a disaster event. Although there are abundant state-of-the-art machine learning techniques for automatically classifying social media images, and some work on geolocating them, the operational problem in the event of a new disaster remains unsolved.
Currently, the state-of-the-art approach to this first-response mapping is to first filter the images and then submit those to be geolocated to a crowd of volunteers [1], assigning the images to volunteers at random.

The project is aimed at leveraging the power of crowdsourcing and artificial intelligence (AI) to assist emergency responders and disaster relief organizations in building a damage map from a zone recently hit by a disaster.

Specifically, the project will involve the development of a platform that can intelligently distribute geolocation tasks to a crowd of volunteers based on their skills. The platform will use machine learning to determine the skills of the volunteers based on previous geolocation experiences.

Thus, the project will concentrate on two different tasks:
• Profile Learning. Based on the previous geolocations of a set of volunteers, learn a profile of each volunteer which encodes their geolocation capabilities. These profiles should be understood as competency maps, representing the capability of the volunteer to provide an accurate geolocation for an image coming from a specific geographical area.
• Active Task Assignment. Use the volunteer profiles efficiently in order to maximize geolocation quality while maintaining a fair distribution of geolocation tasks among volunteers.
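The two tasks above admit a minimal computational sketch, assuming profiles are per-region hit/total counts and assignment greedily picks the most competent volunteer under a load cap; the function names, the 1 km accuracy threshold, and the fairness cap are illustrative, not the project's actual design.

```python
def update_profile(profile, region, error_km, threshold_km=1.0):
    """Record one geolocation outcome in a volunteer's competency map.
    profile maps region -> (accurate_count, total_count)."""
    hits, total = profile.get(region, (0, 0))
    profile[region] = (hits + (error_km <= threshold_km), total + 1)

def competency(profile, region):
    """Laplace-smoothed accuracy estimate for a region."""
    hits, total = profile.get(region, (0, 0))
    return (hits + 1) / (total + 2)

def assign(image_region, profiles, load, max_load=10):
    """Greedy assignment: the most competent volunteer for the image's
    region, among those still under the fairness load cap."""
    eligible = [v for v in profiles if load[v] < max_load]
    return max(eligible, key=lambda v: competency(profiles[v], image_region))
```

In practice the project's active strategy would trade off quality against fairness more carefully; this sketch only shows how a learned competency map can drive assignment.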

In a first stage we envision an experimental framework with realistically generated artificial data, which acts as a feasibility study. This will be published as a paper in a major conference or journal. Simultaneously, we plan to integrate both the profile learning and the active task assignment into the crowdnalysis library, a software outcome of our previous micro-project. Furthermore, we plan to organize a geolocation workshop in Barcelona with participation from the JRC, the University of Geneva, the United Nations, and IIIA-CSIC.

In the near future, the system will generate reports and visualizations to help these organizations quickly understand the distribution of damages. The resulting platform could enable more efficient and effective responses to natural disasters, potentially saving lives and reducing the impact of these events on communities.
The micro-project will be developed by IIIA-CSIC and the University of Geneva. It is also of interest to the team led by Valerio Lorini at the Joint Research Centre of the European Commission in Ispra, Italy, who will most likely attend the geolocation workshop we will organize.

The project is in line with “Establishing Common Ground for Collaboration with AI Systems (WP 1-2)”, because it is a microproject that “seeks to provide practical demonstrations, tools, or new theoretical models for AI systems that can collaborate with and empower individuals or groups of people to attain shared goals”, as specifically mentioned in the Call for Microprojects.

The project is also in line with “Measuring, modeling, predicting the individual and collective effects of different forms of AI influence in socio-technical systems at scale (WP4)”, since it comprises the design of human-centered AI architectures that balance individual and collective goals for the task of geolocation.

[1] Fathi, Ramian, Dennis Thom, Steffen Koch, Thomas Ertl, and Frank Fiedrich. “VOST: A Case Study in Voluntary Digital Participation for Collaborative Emergency Management.” Information Processing & Management 57, no. 4 (July 1, 2020): 102174. https://doi.org/10.1016/j.ipm.2019.102174.

Output

– Open-source implementation of the volunteer profiling and consensus geolocation algorithms in the crowdnalysis library.
– Paper evaluating the different consensus and active strategies for geolocation.
– Organization of a one-day workshop with the United Nations, JRC, the University of Geneva, and CSIC.

Project Partners

  • Consejo Superior de Investigaciones Científicas (CSIC), Jesus Cerquides
  • University of Geneva, Jose Luis Fernandez Marquez

Primary Contact

Jesus Cerquides, Consejo Superior de Investigaciones Científicas (CSIC)

Analyzing perceived task difficulty measured through EEG and reliance behavior in a human-AI decision-making experiment

For AI to be useful in high-stakes applications, humans need to be able to recognize when they can trust AI and when not. However, in many cases, humans tend to overrely on AI, i.e. people adopt AI recommendations even when they are not adequate. Explanations of AI output are meant to help users calibrate their trust in AI to appropriate levels, but often even increase overreliance. In this microproject, we aim to investigate whether and how overreliance on AI and the effect of explanations depend on the perceived task difficulty. We plan to run an AI-assisted decision-making experiment where participants are asked to solve a series of decision tasks (particular type of task to be determined) with support by an AI model, both with and without explanations. Along with participants' reliance behavior, we will measure perceived task difficulty for each task and participant through EEG data.

The results of this study will contribute to the call topic by enhancing our understanding of how human decision-making is impacted by AI decision support. We expect to gain insights about how human decision-making competency can be complemented by AI while preserving human agency.

Output

Publication at a high-profile HCI conference like CHI or IUI.

Project Partners

  • fortiss GmbH, Tony Zhang
  • Türkiye Bilimsel ve Teknolojik Araştırma Kurumu (TUBITAK), Sencer Melih Deniz

Primary Contact

Tony Zhang, fortiss GmbH


We want to bring together experts to design a roadmap for Assistive AI that complements ongoing efforts of AI regulation, and to start new micro-projects along the lines determined by the workshop.

The goals of the workshop are as follows. We want to
(a) bring together experts to design a roadmap for Assistive AI that complements ongoing efforts of AI regulation, and
(b) start new micro-projects along the lines determined by the workshop as soon as possible.
The 2003 NSF call (NSF 03-611), which was ultimately canceled, recognized that “The future and well-being of the Nation depend on the effective integration of Information Technologies (IT) into its various enterprises and social fabric.” The call also stated, “We have great difficulty predicting or even clearly assessing social and economic implications and we have limited understanding of the processes by which these transformations occur.”

These arguments are even stronger in relation to AI: the transition is occurring rapidly, forecasting is subject to significant uncertainty, and even if the forecasts fail to a large extent, the impact will be serious. Beyond regulatory (societal) policy, there will be a huge demand for assisting people and limiting societal turbulence.

Regulated AI.

Regulation of the public use of AI might succeed. Present efforts, such as “MEPs push to bring chatbots like ChatGPT in line with EU's fundamental rights”, have, however, several shortcomings. Consider the easy-to-retrain Alpaca, “A Strong, Replicable Instruction-Following Model”, and the similar open-source efforts that will follow, offering effective and inexpensive tools for “rule breakers”.

Rule breakers can use peer-to-peer BitTorrent methods to hide the origin of the source, create artificial identities, enter social networks, find echo chambers, and spread fake information efficiently. The automation of misinformation (trolling, conspiracy theories, improved tools for influencing) and social networks of different kinds cast doubt on the effectiveness of regulation efforts. While regulations seem necessary, they are control means, and delay in the control may cause instabilities.
Assistive AI.
Regulations for a community-serving “Assistive AI” (AAI), however, can be developed. AAI could diminish the harmful effects, provided that efficient verification methods are available. Our early method [1] is a promising starting point: it preserves anonymity for contributing participants who need or want it, for as long as the rules of the community and the law allow. Accountability can also be included to hold contributors responsible.

No matter whether regulated AI succeeds or not, it is time to develop qualified AAI tools for society.
To-do's:
• Assistance in overcoming the trauma of unemployment and finding suitable activities.
• Improving our ability to filter out fake news and promoting high-quality (verifiable) sources.
• Assistance with training tailored to individual goals, skills, and realities.
• Help in overcoming stress-related problems, or at least mitigating their effects.
• Assistance in planning under uncertainty.
• Assistance in education and learning [2].
• Assistance in using AI and in understanding the moral versus utilitarian consequences of AI-related decisions.
• Development of inherently consistent LLMs, e.g., by means of Composite AI [3] or autoformalization [4].
Plans
Eötvös University has a history of considering AI-related ethical and legal issues and has experts in social psychology and the labor market.

Planned method: SWOT analysis.
The planned place is Budapest, and the workshop can be monitored online.
Planned date: early September.
References
[1] Ziegler, G., et al. “A framework for anonymous but accountable self-organizing communities.” Information and Software Technology 48 (2006): 726.
[2] Sal Khan on Khanmigo. https://www.youtube.com/watch?v=hJP5GqnTrNo (2023).
[3] Gartner Research. Innovation Insight for Composite AI (2022).
[4] Wu, Y., et al. “Autoformalization with large language models.” NeurIPS 35 (2022): 32353.

Output

Expert opinion and plan for future micro-projects

Project Partners

  • Siemens, same
  • Volkswagen AG, same
  • Law School, Eötvös Loránd University, same
  • Faculty of Education and Psychology, Eötvös Loránd University, same
  • WP5 people, TBD

Primary Contact

András Lőrincz, Siemens

We focus on studying prediction problems from event sequences. These are ubiquitous in several scenarios involving human activities, most notably information diffusion in social media.

The scope of the MP is to investigate methods for learning deep probabilistic models based on latent representations that can explain and predict event evolution within social media. Latent variables are particularly promising in situations where the level of uncertainty is high, due to their capability to model the hidden causal relationships that characterize data and ultimately guarantee robustness and trustworthiness in decisions. In addition, probabilistic models can efficiently support simulation, data generation and different forms of collaborative human-machine reasoning.

There are several reasons why this problem is challenging. We plan to study these challenges and provide an overview of the current advances, as well as a repository of available techniques and datasets that can be exploited for research and study.

Output

A journal paper reviewing the current issues and challenges

A repository of the existing methods, with their implementations and available datasets

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), Giuseppe Manco
  • INESC TEC, Joao Gama
  • Università di Pisa (UNIPI), Dino Pedreschi

 

Primary Contact: Giuseppe Manco, CNR

Results Description

Our microproject aims at investigating methods for modeling event interactions through temporal processes. We revisited the notion of event modeling and provided the mathematical foundations that characterize the literature on the topic. We defined an ontology to categorize the existing approaches into three families: simple, marked, and spatio-temporal point processes. For each family, we systematically reviewed the existing approaches, providing a deep discussion. Specifically, we investigated recent machine and deep learning-based methods for modeling temporal processes.

We focused on studying prediction problems from event sequences to understand their structural and temporal dynamics. Understanding these dynamics can provide insights into the complex patterns that govern the process and can be used to forecast future events. Among existing approaches, we investigated probabilistic models based on latent representations, which represent an appropriate choice for modeling event sequences.

Event sequences are pervasive in several application contexts, such as business processes and smart industry, as well as scenarios involving human activities, including especially information diffusion in social media. Indeed, our study focused on works whose aim is the prediction of events within social media. Social media center on the interactions among individuals within content-sharing platforms such as Twitter, Instagram, etc. Interactions can be modeled as event sequences, since events can be user actions over time.

In addition, we provided an overview of other application scenarios such as healthcare, finance, disaster management, public security, and daily life. The analyzed literature provides several datasets, which we categorized according to the application scenarios they can be used for. For each dataset, we reported its description, the papers containing experiments over it, and, when available, a source web link.
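As a concrete instance of the simplest family reviewed, a univariate self-exciting (Hawkes) process with exponential kernel can be sketched together with an Ogata-style thinning simulation; the parameter values are illustrative only.

```python
import math
import random

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity: baseline rate mu plus an exponentially
    decaying boost alpha*exp(-beta*(t - ti)) from each past event ti."""
    return mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in history if ti < t)

def simulate(horizon, mu=0.2, alpha=0.8, beta=1.0, seed=0):
    """Sample events on [0, horizon] by thinning: propose candidate times
    with an upper bound on the intensity (which only decays between
    events), then accept with probability intensity/bound."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        bound = hawkes_intensity(t, events, mu, alpha, beta) + alpha
        t += rng.expovariate(bound)
        if t >= horizon:
            return events
        if rng.random() * bound < hawkes_intensity(t, events, mu, alpha, beta):
            events.append(t)
```

Marked and spatio-temporal variants extend this by attaching a mark or a location to each event and conditioning the intensity on them.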

Publications

ACM Computing Surveys – Under review

Links to Tangible results

https://github.com/Angielica/temporal_processes: A list of Point Processes resources.
https://github.com/Angielica/datasets_point_processes: A list of relevant datasets.

The goal of the project is to investigate the role of social norms on misinformation in online communities. This knowledge can help identify new interventions in online communities that help prevent the spread of misinformation. To accomplish the task, the role of norms will be explored by analyzing Twitter data gathered through the Covid19 Infodemics Observatory, an online platform developed to study the relationship between the evolution of the COVID-19 epidemic and the information dynamics on social media. This study can inform a further set of microprojects addressing norms in AI systems through theoretical modelling and social simulations.

Output

Diagnosis and visualization map of existing social norms underlying fake news related to COVID19

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), ISTC: Eugenia Polizzi
  • Fondazione Bruno Kessler (FBK), Marco Pistore

Primary Contact: Eugenia Polizzi, CNR-ISTC

Main results of micro project:

Through the analysis of millions of geolocated tweets collected during the Covid-19 pandemic, we were able to identify structural and functional network features supporting an “illusion of the majority” on Twitter. Our results suggest that the majority of fake (and other) content related to the pandemic is produced by a minority of users, and that there is a structural segmentation into a small “core” of very active users responsible for a large amount of fake news and a larger “periphery” that mainly retweets the content produced by the core. This discrepancy between the size and identity of the users involved in the production and diffusion of fake news suggests that a distorted perception of what users believe to be the majority opinion may pressure users (especially those in the periphery) to comply with the group norm and further contribute to the spread of misinformation in the network.

Contribution to the objectives of HumaneAI-net WPs

Top-down “debunking” interventions have been applied to limit the spread of fake news, but so far with limited power. Recognizing the role of social norms in the context of misinformation fight may offer a novel approach to solve such a challenge, shifting to bottom-up solutions that help people to correct misperceptions about how widely certain opinions are truly held. The results of this microproject can inform new strategies to improve the quality of debates in online communities and counteract polarization in online communities (WP4). These results can be also relevant for WP2 (T 2.4), e.g., by giving insights about how human interactions can influence and are influenced by AI technology, WP3 (T 3.3) by offering tools to study the reactions of humans within hybrid human-AI systems and WP5 (T 5.4) by evaluating the role of social norms dynamics for a responsible development of AI technology.

Tangible outputs

  • Publication: The voice of few, the opinions of many: evidence of social biases in Twitter COVID-19 fake news sharing – Piergiorgio Castioni, Giulia Andrighetto, Riccardo Gallotti, Eugenia Polizzi, Manlio De Domenico
    https://arxiv.org/abs/2112.01304

Study of emergent collective phenomena at metropolitan level in personal navigation assistance systems with different recommendation policies, with respect to different collective optimization criteria (fluidity of traffic, safety risks, environmental sustainability, urban segregation, response to emergencies, …).

Idea: (1) start from real big mobility data (massive datasets of GPS trajectories at the metropolitan level, recorded by onboard black boxes for insurance purposes); (2) identify major road-block events (accidents, extraordinary events, …) in the data; (3) simulate the effect (by modifying the data) of the users involved in a road block having previously been supported by navigation systems that employ policies to mitigate the impact of the block, using policies different from individual optimization and aiming at collective optimization (diversity, randomization, safety, resilience, etc.).

Compare the different policy choices in terms of their aggregated impact.
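Steps (2)-(3) can be sketched as a replay-and-reroute loop over segment-level trajectories; the data model (trajectories as lists of road-segment ids) and the travel-time metric are simplifying assumptions for illustration, not the project's actual pipeline.

```python
def simulate_policy(trajectories, blocks, reroute):
    """Replay trajectories, rerouting those that cross a blocked segment.

    trajectories: list of trajectories, each a list of road-segment ids
    blocks: set of blocked segment ids
    reroute: policy function (trajectory, blocked_segment) -> new trajectory
    """
    out = []
    for traj in trajectories:
        hit = next((s for s in traj if s in blocks), None)
        out.append(reroute(traj, hit) if hit is not None else traj)
    return out

def total_travel_time(trajectories, seg_time):
    """Aggregated impact metric: total travel time over all trajectories."""
    return sum(seg_time[s] for t in trajectories for s in t)
```

Different `reroute` policies (individually optimal, randomized, diversity-seeking, …) can then be compared by evaluating metrics like `total_travel_time` on the same block scenario.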

Output

(Big-) data-driven simulations with scenario assessment

Scientific paper

Presentations

Project Partners:

  • Università di Pisa (UNIPI), Dino Pedreschi
  • Consiglio Nazionale delle Ricerche (CNR), Mirco Nanni
  • German Research Centre for Artificial Intelligence (DFKI), Paul Lukowicz

Primary Contact: Mirco Nanni, CNR

Social dilemmas are situations in which the interests of the individuals conflict with those of the team, and in which maximum benefit can be achieved if enough individuals adopt prosocial behavior (i.e. focus on the team’s benefit at their own expense). In a human-agent team, the adoption of prosocial behavior is influenced by various features displayed by the artificial agent, such as transparency, or small talk. One feature still unstudied is expository communication, meaning communication performed with the intent of providing factual information without favoring any party.

We will implement a public goods game with information asymmetry (i.e. agents in the game do not have the same information about the environment) and perform a user-study in which we will manipulate the amount of information that the artificial agent provides to the team, and examine how varying levels of information increase or decrease human prosocial behavior.
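As a toy illustration of the underlying dilemma (not the platform's actual game, which adds information asymmetry and agent communication), a linear public goods payoff can be sketched as:

```python
def payoffs(contributions, endowment=10.0, multiplier=1.6):
    """Linear public goods game: each player keeps what they did not
    contribute plus an equal share of the multiplied common pool.
    With 1 < multiplier < n, contributing is individually costly
    but collectively optimal, creating the social dilemma."""
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]
```

With four players, full cooperation pays everyone 16 while full defection pays 10, yet a lone free-rider earns 22 while the remaining contributors drop to 12: the individual incentive points away from the team optimum.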

Output

Submission to one of the following: International Journal of Social Robotics, Behaviour & Information Technology, AAMAS, or CHI. Submission to be sent by the end of August 2021.

Release of the game developed for the study on the AI4EU platform to allow other researchers to use it and extend it

Educational component on the Ethical aspect of AI, giving a concrete example on how AI can “manipulate” a human

Presentations

Project Partners:

  • Örebro University (ORU), Jennifer Renoux
  • Instituto Superior Técnico (IST), Ana Paiva

Primary Contact: Jennifer Renoux, Örebro University

Main results of micro project:

This micro-project has led to the design and development of an experimental platform to test how communication from an artificial agent influences a human's pro-social behavior.
The platform comprises the following components:

– a fully configurable mixed-motive public goods game, allowing a single human player to play with artificial agents, and an artificial "coach" giving feedback on the human's actions. Configuration is made through JSON files (number and types of agents, type of feedback, game configuration, …)

– a set of questionnaires designed to evaluate the prosocial behavior of the human player during a game

Contribution to the objectives of HumaneAI-net WPs

This project contributes to WP3 and WP4.
The study carried out during the micro-project will give insight into how an artificial agent may influence a human's behavior in a social-dilemma context, thus allowing for the informed design and development of such artificial agents.
In addition, the platform developed will be made publicly available, allowing future researchers to experiment with other configurations and other types of feedback. By using a well-developed and consistent platform, the results of different studies will be more easily comparable.

Tangible outputs

  • Program/code: The Pest Control Game experimental platform – Jennifer Renoux*, Joana Campos, Filipa Correia, Lucas Morillo, Neziha Akalin, Ana Paiva
    https://github.com/jrenoux/humane-ai-sdia.git
  • Publication: International Journal of Social Robotics or Behaviour & Information Technology – Jennifer Renoux*, Joana Campos, Filipa Correia, Lucas Morillo, Neziha Akalin, Ana Paiva
    In preparation

Creation of stories and narrative from data of Cultural Heritage

In this activity we tackle the fundamental inhomogeneity of cultural heritage data by structuring the knowledge available from user experience and machine learning methods. The overall objective of this microproject is to design new methodologies to extract and produce new information, to propose to scholars and practitioners new, even unexpected and surprising, connections and knowledge, and to make new sense of cultural heritage by creating sense and narratives with methods based on network theory and artificial intelligence.

Output

Publications about maps of Social Interactions across ages

Publication about AI algorithm for the automatic classification of documents

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), Guido Caldarelli
  • Università di Pisa (UNIPI), Dino Pedreschi
  • German Research Centre for Artificial Intelligence (DFKI), Paul Lukowicz

Primary Contact: Guido Caldarelli, CNR

Recent polarisation of opinions in society has triggered a lot of research into the mechanisms involved. Personalised recommender systems embedded into social networks and online media have been hypothesized to contribute to polarisation, through a mechanism known as algorithmic bias. In a recent work [1] we have introduced a model of opinion dynamics with algorithmic bias, where interaction is more frequent between similar individuals, simulating the online social network environment. In this project we plan to enhance this model by adding the biased interaction with media, in an effort to understand whether this facilitates polarisation. Media interaction will be modelled as external fields that affect the population of individuals. Furthermore, we will study whether moderate media can be effective in counteracting polarisation.

[1] Sîrbu, A., Pedreschi, D., Giannotti, F. and Kertész, J., 2019. Algorithmic bias amplifies opinion fragmentation and polarization: A bounded confidence model. PloS one, 14(3), p.e0213246.
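A minimal simulation sketch of the bounded-confidence model with algorithmic bias, plus a hypothetical external-field extension for media; the actual media modelling is the subject of this project, and parameter names and the small regularizer are illustrative.

```python
import random

def step(x, eps=0.3, mu=0.5, gamma=1.5):
    """One Deffuant-style interaction with algorithmic bias: the partner is
    drawn with probability ~ |x_i - x_j|^(-gamma), so similar opinions
    interact more often; opinions converge only if closer than eps."""
    n = len(x)
    i = random.randrange(n)
    w = [0.0 if j == i else (abs(x[i] - x[j]) + 1e-6) ** (-gamma) for j in range(n)]
    j = random.choices(range(n), weights=w)[0]
    if abs(x[i] - x[j]) < eps:
        xi, xj = x[i], x[j]
        x[i] += mu * (xj - xi)
        x[j] += mu * (xi - xj)

def media_step(x, media_opinions=(0.2, 0.8), p_media=0.1, eps=0.3, mu=0.5):
    """Hypothetical media extension: with probability p_media an agent is
    exposed to a fixed media opinion acting as an external field."""
    if random.random() < p_media:
        i = random.randrange(len(x))
        m = random.choice(media_opinions)
        if abs(x[i] - m) < eps:
            x[i] += mu * (m - x[i])
    else:
        step(x, eps, mu)
```

Running many such steps and inspecting the opinion histogram is the basic experiment; whether moderate media opinions reduce fragmentation is precisely the question under study.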

Output

A paper on opinion dynamics in a complex systems or interdisciplinary journal.

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), Giulio Rossetti
  • Central European University (CEU), Janos Kertesz
  • Università di Pisa (UNIPI), Alina Sirbu

Primary Contact: Giulio Rossetti, Consiglio Nazionale delle Ricerche, Pisa, Italy

Main results of micro project:

The project has run for less than 50% of its allocated time (it started on the 1st of July and will run for 4 months).

So far, the algorithmic bias model has been extended to integrate media effects, and preliminary correctness tests have been performed.
Moreover, the experimental settings have been fixed, and a first preliminary analysis of initial results has been performed.

Contribution to the objectives of HumaneAI-net WPs

The recent polarization of opinions in society has triggered a lot of research into the mechanisms involved. Personalized recommender systems embedded into social networks and online media have been hypothesized to contribute to polarisation, through a mechanism known as algorithmic bias.

In recent work we have introduced a model of opinion dynamics with algorithmic bias, where interaction is more frequent between similar individuals, simulating the online social network environment.

In this project, we plan to enhance this model by adding the biased interaction with media, in an effort to understand whether this facilitates polarisation. Media interaction will be modelled as external fields that affect the population of individuals. Furthermore, we will study whether moderate media can be effective in counteracting polarisation.

Tangible outputs

Attachments

RPReplay-Final1634060242_Berlin.mov

The project aims at investigating systems composed of a large number of agents of either human or artificial type. The plan is to study, both from the static and the dynamical point of view, how such a two-population system reacts to changes in the parameters, especially in view of possible abrupt transitions. We plan to pay special attention to higher-order interactions such as three-body effects (H-H-H, H-H-AI, H-AI-AI and AI-AI-AI). We hypothesize that such interactions are crucial for understanding complex human-AI systems. We will analyze the static properties from both the direct and the inverse problem perspective. This study will pave the way for further investigation of the system's dynamic evolution by means of correlations and temporal motifs.
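Schematically, and purely as an illustration of the setting (the actual model is part of the research), such a two-population system with three-body couplings can be written as a Hamiltonian over human spins $\sigma$ and AI spins $\tau$:

```latex
H(\sigma,\tau) \;=\;
 -\sum_{i<j<k} J^{\mathrm{HHH}}_{ijk}\,\sigma_i\sigma_j\sigma_k
 \;-\;\sum_{i<j,\;k} J^{\mathrm{HHA}}_{ijk}\,\sigma_i\sigma_j\tau_k
 \;-\;\sum_{i,\;j<k} J^{\mathrm{HAA}}_{ijk}\,\sigma_i\tau_j\tau_k
 \;-\;\sum_{i<j<k} J^{\mathrm{AAA}}_{ijk}\,\tau_i\tau_j\tau_k ,
```

with the four coupling tensors corresponding to the H-H-H, H-H-AI, H-AI-AI and AI-AI-AI channels; the inverse problem then amounts to inferring the couplings $J$ from observed configurations.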

Output

1 paper in a complex systems (or physics or math) journal

Project Partners:

  • Università di Bologna (UNIBO), Pierluigi Contucci
  • Central European University (CEU), Janos Kertesz

Primary Contact: Pierluigi Contucci, University of Bologna

Attachments

Contucci MP-UNIBO-CEU_March17.mov

We envision a human-AI ecosystem in which AI-enabled devices act as proxies of humans and collectively learn a model in a decentralized way. Each device will learn a local model that needs to be combined with the models learned by the other nodes, in order to improve both local and global knowledge. The challenge of doing so in a fully decentralized AI system entails understanding how to compose models coming from heterogeneous sources and, in the case of potentially untrustworthy nodes, deciding who can be trusted and why. In this micro-project, we focus on the specific scenario of model “gossiping” for accomplishing a decentralized learning task, and we study which models emerge from the combination of local models, where the combination takes into account the social relationships between the humans associated with the AI. We will use synthetic graphs to represent social relationships, and large-scale simulation for performance evaluation.
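A minimal sketch of model gossiping, assuming each local model is a flat parameter vector and social ties are encoded as edge weights; the names and the pairwise-averaging rule are illustrative, not the project's actual aggregation scheme.

```python
import random

def gossip_round(models, neighbors, tie_weight=None):
    """One synchronous gossip round: every node averages its parameter
    vector with one neighbour, drawn with probability proportional to the
    strength of the social tie (uniform if tie_weight is None)."""
    new = {}
    for node, params in models.items():
        nbrs = neighbors[node]
        if not nbrs:
            new[node] = list(params)  # isolated node keeps its model
            continue
        w = [tie_weight(node, p) if tie_weight else 1.0 for p in nbrs]
        peer = random.choices(nbrs, weights=w)[0]
        new[node] = [(a + b) / 2 for a, b in zip(params, models[peer])]
    return new
```

Interleaving such rounds with local training steps approximates decentralized averaging; a trust model could upweight reliable ties via `tie_weight`.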

Output

Paper (most likely at conference/workshop, possibly journal)

Simulator (fallback plan if a paper cannot be produced at the end of the micro-project)

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), Andrea Passarella
  • Central European University (CEU), Gerardo Iniguez

Primary Contact: Andrea Passarella, CNR-IIT

Main results of micro project:

As of now, the micro-project has developed a modular simulation framework to test decentralised machine learning algorithms on top of large-scale complex social networks. The framework is written in Python, exploiting state-of-the-art libraries such as networkx (to generate network models) and PyTorch (to implement ML models). The simulator is modular, as it accepts networks in the form of datasets as well as synthetic models. Local data are allocated on each node, which trains a local ML model of choice. Communication rounds are implemented, through which local models are aggregated and re-trained on local data. Benchmarks are included, namely federated learning and centralised learning. Initial simulation results assess the accuracy of decentralised learning (social AI gossiping) on Barabási-Albert networks, showing that social AI gossiping achieves accuracy comparable to the centralised and federated learning versions (which, however, rely on centralised elements).

Contribution to the objectives of HumaneAI-net WPs

The simulation engine is modular and can be exploited (also by the other project partners) to test decentralised ML solutions. The weighted network used to connect nodes can represent social relationships between users; thus, one of the main objectives of the obtained results is to understand the social-network effects on decentralised ML tasks.

Tangible outputs

Attachments

MP_social_AI_gossiping_CNR_CEU_presentation.pptx_Berlin.pps