Contact person: Tony Zhang (zhang@fortiss.org)

Internal Partners:

  1. Fortiss, Tony Zhang, zhang@fortiss.org
  2. TUBITAK, Sencer Melih Deniz, sencer.deniz@tubitak.gov.tr

 

For AI to be useful in high-stakes applications, humans need to be able to recognize when they can trust AI and when they cannot. In many cases, however, humans tend to over-rely on AI, i.e., people adopt AI recommendations even when they are inadequate. Explanations of AI output are meant to help users calibrate their trust in AI to appropriate levels, but they often even increase overreliance. In this microproject, we investigate whether and how overreliance on AI and the effect of explanations depend on perceived task difficulty. We ran an AI-assisted decision-making experiment in which participants were asked to solve a series of decision tasks with support from an AI model, both with and without explanations. Along with participants’ reliance behavior, we measured the perceived difficulty of each task for each participant through EEG data.

Results Summary

We found that people are able to rely appropriately on AI recommendations in our decision-making task when the decision is easy for them, resulting in complementary team performance. However, when the decision is difficult for people, they heavily over-rely on AI recommendations, leading to non-complementary team performance.

Common feature-based explanations had no statistically significant effect on the appropriateness of people’s reliance, for either easy or difficult decisions. However, we did find hints that feature-based explanations could, in principle, further improve the appropriateness of reliance for easy decisions, but not for difficult ones.

Tangible Outcomes

  1. [under review] Submission to the journal Behaviour & Information Technology (BIT), Special Issue on Human-Centered Artificial Intelligence, with only final minor revisions pending
  2. Dataset of EEG recordings corresponding to easy and difficult decisions:  https://www.ai4europe.eu/research/ai-catalog/decision-difficulty-eeg-dataset

Contact person: Lorenzo Valerio (lorenzo.valerio@iit.cnr.it)

Internal Partners:

  1. Consiglio Nazionale delle Ricerche (CNR), Lorenzo Valerio, lorenzo.valerio@iit.cnr.it
  2. Central European University, János Kertész, kerteszj@ceu.edu

 

This microproject set out to study the effect of simple social structures on the decentralized learning process in a human-AI ecosystem, and how the lack of coordination impacts the resulting learned model. The project considered the following learning policies: federated learning (FedAvg), average-based decentralized learning (DecAvg, an adaptation of FedAvg to the decentralized setting), difference-based decentralized learning (a novel strategy called DecDiff), and knowledge-distillation-based (KD) decentralized learning with a virtual teacher. For the decentralized strategies, we considered both homogeneous and heterogeneous initial conditions (e.g., common initialization of models, IID and non-IID data distribution among nodes). The common benchmark is centralized learning (i.e., we assume that all users upload their data to a central server). From the social-network standpoint, we initially focused on dyadic and triadic social networks, then moved on to richer topologies such as Erdős–Rényi and stochastic block model (SBM) graphs. As a learning task, we considered a standard classification problem on the MNIST dataset. Other, more challenging datasets are currently under investigation.
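
For concreteness, the following is a minimal sketch of one DecAvg round in plain NumPy: each node averages its model with its neighbours’ models and then takes a local gradient step on its private data. The ring topology, the least-squares task and the function names are illustrative assumptions, not the SAISim implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 5 nodes on a ring, each holding a linear model w and private
# least-squares data. All names here are illustrative, not from SAISim.
n_nodes, dim = 5, 10
neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}
data = [(rng.normal(size=(20, dim)), rng.normal(size=20)) for _ in range(n_nodes)]
models = [rng.normal(size=dim) for _ in range(n_nodes)]  # independent random init

def local_step(w, X, y, lr=0.01):
    # One gradient step on the node's private least-squares loss.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def decavg_round(models):
    # DecAvg: average own model with neighbours' models, then train locally.
    averaged = [
        np.mean([models[i]] + [models[j] for j in neighbors[i]], axis=0)
        for i in range(n_nodes)
    ]
    return [local_step(averaged[i], *data[i]) for i in range(n_nodes)]

for _ in range(100):
    models = decavg_round(models)
```

FedAvg corresponds to the special case in which a central server performs the averaging over all nodes; the model-homogeneous setting corresponds to initialising all entries of models identically.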

Results Summary

We have observed the following:

  1. In small networks where data availability is not an issue, DecAvg in model-homogeneous settings (i.e., all the users’ AI models share a common initialization) is as good as federated learning with FedAvg, despite the lack of a central controller. Without the common initialization (i.e., all models are independently and randomly initialized), the accuracy depends strongly on the initial conditions. DecDiff clearly mitigates this problem, yielding higher accuracy despite being slower in the transient phase. The virtual teacher clearly outperforms a basic centralized approach. By contrast, when data is a bottleneck, the learning strategy plays a limited role in the observed performance.
  2. In larger networks, DecDiff does not suffer from the initial disruption caused by the averaging process that affects DecAvg. At steady state, however, it is not better than the simpler DecAvg. When the data distribution is extremely uneven, DecDiff seems to provide more reliable performance. Interestingly, KD-based decentralized learning always performs well, surpassing standard federated learning.

Tangible Outcomes

  1. [arxiv] “Coordination-free Decentralised Federated Learning on Complex Networks: Overcoming Heterogeneity” Lorenzo Valerio, Chiara Boldrini, Andrea Passarella, János Kertész, Márton Karsai, Gerardo Iñiguez, https://arxiv.org/abs/2312.04504 
  2. We implemented all the strategies in the SAI Simulator (SAISim). The repository is on Zenodo: https://zenodo.org/records/5780042#.Ybi2sX3MLPw 

Contact person: András Lőrincz (lorincz@inf.elte.hu)

Internal Partners:

  1. ELTE, András Lőrincz

External Partners:

  1. Siemens, Sonja Zillner, sonja.zillner@siemens.com
  2. Volkswagen, Patrick van der Smagt, smagt@argmax.ai

 

The goals of the workshop are twofold: we want (a) to bring together experts to design a roadmap for Assistive AI that complements ongoing efforts on AI regulation, and (b) to start new micro-projects along the lines determined by the workshop as soon as possible.

In the 2003 NSF call (NSF 03-611), which was ultimately canceled, the NSF recognized that “The future and well-being of the Nation depend on the effective integration of Information Technologies (IT) into its various enterprises and social fabric.” The call also stated, “We have great difficulty predicting or even clearly assessing social and economic implications and we have limited understanding of the processes by which these transformations occur.”

These arguments are even stronger for AI: the transition is occurring rapidly, and forecasting is prone to significant uncertainty. Even if forecasts largely fail, the impact will be serious, and beyond regulatory (societal) policy there will be a huge demand for assisting people and limiting societal turbulence.

Regulated AI

Regulation of the public use of AI might succeed. Present efforts, such as “MEPs push to bring chatbots like ChatGPT in line with EU’s fundamental rights”, have, however, several shortcomings. Consider the easy-to-retrain Alpaca, “A Strong, Replicable Instruction-Following Model”, and the similar open-source efforts that will follow, offering effective and inexpensive tools to “rule breakers”.

Rule breakers can use peer-to-peer BitTorrent methods to hide the origin of content, create artificial identities, enter social networks, find echo chambers, and spread fake information efficiently. The automation of misinformation (trolling, conspiracy theories, improved tools for influencing) across social networks of different kinds casts doubt on the effectiveness of regulation efforts. While regulation seems necessary, regulations are means of control, and delay in a control loop may cause instabilities.

Assistive AI

Regulations for a community-serving “Assistive AI” (AAI), however, can be developed. AAI could diminish the harmful effects, provided that efficient verification methods are available. Our early method [1] is a promising starting point: it preserves anonymity for contributing participants who need or want it, for as long as the rules of the community and the law allow. Accountability can also be included to make contributors responsible.

Results Summary

Regulatory frameworks for the use of AI are emerging. However, they trail behind the fast-evolving malicious AI technologies that can quickly cause lasting societal damage. In response, we introduce a pioneering Assistive AI framework designed to enhance human decision-making capabilities. This framework aims to establish a trust network across various fields, especially within legal contexts, serving as a proactive complement to ongoing regulatory efforts. Central to our framework are the principles of privacy, accountability, and credibility. In our methodology, the foundation of reliability of information and information sources is built upon the ability to uphold accountability, enhance security, and protect privacy. This approach supports, filters, and potentially guides communication, thereby empowering individuals and communities to make well-informed decisions based on cutting-edge advancements in AI. Our framework uses the concept of Boards as proxies to collectively ensure that AI-assisted decisions are reliable, accountable, and in alignment with societal values and legal standards. Through a detailed exploration of our framework, including its main components, operations, and sample use cases, we show how AI can assist in the complex process of decision-making while maintaining human oversight. The proposed framework not only extends regulatory landscapes but also highlights the synergy between AI technology and human judgement, underscoring the potential of AI to serve as a vital instrument in discerning reality from fiction and thus enhancing the decision-making process. Furthermore, we provide domain-specific use cases to highlight the applicability of our framework.

Tangible Outcomes

  1. [arxiv] “Assistive AI for augmenting human decision-making” Natabara Máté Gyöngyössy, Bernát Török, Csilla Farkas, Laura Lucaj, Attila Menyhárd, Krisztina Menyhárd-Balázs, András Simonyi, Patrick van der Smagt, Zsolt Ződi, András Lőrincz https://arxiv.org/abs/2410.14353 

Contact person: Giulio Rossetti (giulio.rossetti@isti.cnr.it)

Internal Partners:

  1. Consiglio Nazionale delle Ricerche (CNR), Giulio Rossetti, giulio.rossetti@isti.cnr.it
  2. Università di Pisa UNIPI, Dino Pedreschi, dino.pedreschi@unipi.it
  3. Central European University (CEU), Janos Kertesz, kerteszj@ceu.edu

 

The recent polarisation of opinions in society has triggered a lot of research into the mechanisms involved. Personalised recommender systems embedded in social networks and online media have been hypothesized to contribute to polarisation through a mechanism known as algorithmic bias. In recent work [1], we introduced a model of opinion dynamics with algorithmic bias, where interaction is more frequent between similar individuals, simulating the online social network environment. In this project, we enhance this model by adding biased interaction with the media, in an effort to understand whether this facilitates polarisation. Media interactions are modelled as external fields that affect the population of individuals. Furthermore, we studied whether moderate media can be effective in counteracting polarisation.
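
For intuition, the sketch below implements one plausible rendition of these ingredients: a Deffuant-style bounded-confidence model in which interaction partners are drawn with a distance-dependent (algorithmically biased) probability, and media outlets act as stubborn agents that pull opinions without ever moving themselves. The power-law form of the bias and all parameter values are illustrative assumptions, not the exact specification of the published model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters, not those of the published study.
N, eps, mu, gamma = 200, 0.3, 0.5, 1.0   # agents, confidence bound, convergence, bias
p_media, media_ops = 0.1, [0.05, 0.95]   # prob. of media interaction; two outlets
opinions = rng.uniform(0, 1, N)

def biased_partner(i, x):
    # Algorithmic bias: partners closer in opinion are chosen more often.
    d = np.abs(x - x[i])
    d[i] = np.inf                        # exclude self-interaction
    w = (d + 1e-9) ** (-gamma)
    return rng.choice(N, p=w / w.sum())

for _ in range(100_000):
    i = rng.integers(N)
    if rng.random() < p_media:
        # Media outlet as a stubborn agent: it pulls i but never moves itself.
        xm = rng.choice(media_ops)
        if abs(opinions[i] - xm) < eps:
            opinions[i] += mu * (xm - opinions[i])
    else:
        j = biased_partner(i, opinions)
        if abs(opinions[i] - opinions[j]) < eps:
            diff = opinions[j] - opinions[i]
            opinions[i] += mu * diff
            opinions[j] -= mu * diff
```

The number of surviving opinion clusters at the end of a run, and how it shifts with the media probability and positions, is the kind of quantity such a study tracks.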

Results Summary

In this microproject, we studied the effects of the combination of social influence and mass media influence on the dynamics of opinion evolution in a biased online environment, using a recent bounded-confidence opinion dynamics model with algorithmic bias as a baseline and adding the possibility to interact with one or more media outlets modeled as stubborn agents. We analyzed four different media landscapes and found that an open-minded population is more easily manipulated by external propaganda – moderate or extremist – while remaining undecided in a more balanced information environment. By reinforcing users’ biases, recommender systems appear to help avoid the complete manipulation of the population by external propaganda.

Tangible Outcomes

  1.  Pansanella, V., Sîrbu, A., Kertesz, J., & Rossetti, G. (2023). Mass media impact on opinion evolution in biased digital environments: a bounded confidence model. Scientific Reports, 13(1), 14600. https://www.nature.com/articles/s41598-023-39725-y.pdf 
  2. Model implementation https://github.com/ValentinaPansanella/AlgBiasMediaModel

Contact person: Guido Caldarelli (Guido.Caldarelli@cnr.it)

Internal Partners:

  1. Consiglio Nazionale delle Ricerche (CNR), Guido Caldarelli
  2. Università di Pisa (UNIPI), Dino Pedreschi
  3. German Research Centre for Artificial Intelligence (DFKI), Paul Lukowicz

 

In this activity we tackle the fundamental heterogeneity of cultural heritage data by structuring the knowledge available from user experience with methods of machine learning. The overall objective of this microproject is to design new methodologies to extract and produce new information, to offer scholars and practitioners new, even unexpected and surprising, connections and knowledge, and to make new sense of cultural heritage by creating narratives with methods based on network theory and artificial intelligence.

Concretely, we set out to represent a snapshot of the social and political life of Venice around 1300 by taking the data from the State Archive of the Republic of Venice and using computational and AI instruments to represent them in the form of a graph and to find communities. To achieve this, we first needed a sense of the archive structure and then had to identify the documents worthy of attention. Once those were identified, we needed an automatic transcription of the handwriting into ancient Venetian or Latin, and then a transformation of this information into a machine-readable file. On these data we then apply algorithms that recognise structure and communities by connecting people, events and places.
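
As a sketch of the final step of this pipeline, the code below builds a co-occurrence graph from a few hypothetical transcribed records and runs a standard community-detection algorithm on it. The record format, the names and the choice of greedy modularity optimisation are illustrative assumptions; the project’s actual data and tooling differ.

```python
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical record format: each transcribed deliberation lists the people
# and the place it mentions. Names and fields are invented for illustration.
records = [
    {"people": ["Marco Dandolo", "Pietro Gradenigo"], "place": "Rialto"},
    {"people": ["Pietro Gradenigo", "Nicolo Falier"], "place": "Arsenale"},
    {"people": ["Marco Dandolo", "Nicolo Falier"], "place": "Rialto"},
]

G = nx.Graph()
for rec in records:
    # Link all entities co-occurring in the same document; weight = co-mentions.
    entities = rec["people"] + [rec["place"]]
    for u, v in itertools.combinations(entities, 2):
        weight = G.edges[u, v]["weight"] + 1 if G.has_edge(u, v) else 1
        G.add_edge(u, v, weight=weight)

# Community detection on the weighted co-occurrence graph.
for community in greedy_modularity_communities(G, weight="weight"):
    print(sorted(community))
```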

Results Summary

By examining archival data from the Senato-Misti series of the Venice State Archive (ASVe), we uncovered clear quantitative trends and patterns that confirm the pivotal role of immigration in urban evolution, providing insights into how diverse populations have built and sustained thriving communities. Understanding the historical significance of immigration helps us appreciate the complex interplay of factors that have shaped cities and informs contemporary discussions on urban development and multiculturalism. This exploration highlights the enduring legacy of immigration in creating resilient and dynamic urban environments, emphasizing the importance of inclusive policies and practices in addressing modern urban challenges. The project also resulted in a preprint to be submitted in the upcoming period.

Tangible Outcomes

  1. A cleaned and curated dataset covering 70 years of Senate activities starting from 1333 (vld.techtree.it), together with tools that exploit the unique temporal continuity and consistency of the Senato-Deliberazioni-Misti archival series to represent the time evolution of queries and to construct dynamical chronologies.
  2. VLD Series Viewer: vld.techtree.it
  3. We presented these results at several meetings, such as the one at the Bibliotheca Hertziana – Max Planck Institute for Art History: https://www.biblhertz.it/3395137/towards-a-collaborative-cultural-analysis-of-the-city-of-rome
  4. Slides: https://www.dropbox.com/scl/fi/sai8cho2tk42c0q25ndqi/136-VLD.pptx?rlkey=55y1f2nymjoybf6s0hfxok0z5&dl=0

Contact person: Pierluigi Contucci (pierluigi.contucci@unibo.it)

Internal Partners:

  1. Università di Bologna UNIBO, Pierluigi Contucci, pierluigi.contucci@unibo.it
  2. Central European University (CEU), Janos Kertesz, kerteszj@ceu.edu

 

The project investigates systems composed of a large number of agents of either human or artificial type. The plan is to study, from both the static and the dynamic point of view, how such a two-population system reacts to changes in the parameters, especially in view of possible abrupt transitions. We pay special attention to higher-order interactions such as three-body effects (H-H-H, H-H-AI, H-AI-AI and AI-AI-AI), which we hypothesized are crucial for understanding complex human-AI systems. We analyzed the static properties from both the direct and the inverse problem perspective. This study will pave the way for further investigation of the system’s dynamic evolution by means of correlations and temporal motifs.

Results Summary

The progressive advent of artificial intelligence machines may represent an opportunity or a threat. To get an idea of what is coming, we propose a model that simulates a human-AI ecosystem. In particular, we consider systems where agents have biases, and where the crucial ingredients are peer-to-peer interactions and three-body interactions describing two humans interacting with an artificial agent and two artificial agents interacting with a human. We focus our analysis on how the relative fraction of artificial intelligence agents affects the ecosystem. We find evidence that, for suitable values of the interaction parameters, arbitrarily small changes in this fraction may trigger dramatic changes in the system, which can be in one of two polarised states or in an undecided state.
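
For concreteness, the sketch below runs a mean-field Metropolis simulation of such a two-population spin system with pairwise and three-body couplings and a tunable AI fraction. It is a minimal illustration under assumed parameter values, not the model or the parameter choices of the published paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two populations of +/-1 agents (0 = human, 1 = AI) with mean-field two-body
# couplings J[a, b] and three-body couplings K[a, b, c]. Values are illustrative.
N, T, steps = 1000, 1.0, 200_000
rho = 0.3                                  # fraction of AI agents
types = (rng.random(N) < rho).astype(int)
spins = rng.choice([-1, 1], size=N)

J = np.array([[1.0, 0.8], [0.8, 1.2]])
K = np.full((2, 2, 2), 0.5)

# Per-type magnetisation sums, updated incrementally during the sweep.
M = np.array([spins[types == 0].sum(), spins[types == 1].sum()], dtype=float)

for _ in range(steps):
    i = rng.integers(N)
    a = types[i]
    # Mean-field local field: pairwise plus three-body contributions
    # (self-terms are O(1/N) and ignored).
    h = J[a] @ M / N + (M @ K[a] @ M) / (2 * N**2)
    dE = 2 * spins[i] * h                  # energy cost of flipping spin i
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        M[a] -= 2 * spins[i]
        spins[i] = -spins[i]

m_h = M[0] / max((types == 0).sum(), 1)
m_ai = M[1] / max((types == 1).sum(), 1)
print(f"magnetisation humans: {m_h:.2f}, AI: {m_ai:.2f}")
```

Sweeping rho and recording the final magnetisations is the natural way to look for the abrupt, composition-driven transitions described above.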

Tangible Outcomes

  1. Human-AI ecosystem with abrupt changes as a function of the composition. Contucci P, Kertész J, Osabutey G (2022) PLOS ONE 17(5):e0267310 https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0267310 

Contact person: Laura Sartori (l.sartori@unibo.it)

Internal Partners:

  1. Università degli studi di Bologna (UNIBO), Laura Sartori, l.sartori@unibo.it
  2. Umeå University (UMU)
  3. Consiglio Nazionale delle Ricerche (CNR)

 

We want to conduct empirical research that explores the social and public attitudes of individuals towards AI and robots. AI and robots will enter many more aspects of our daily life than the average citizen is aware of, and they are already reorganizing specific domains such as work, health, security, politics and manufacturing. Alongside technological research, it is fundamental to grasp and gauge the social implications of these processes and their acceptance by a wider audience.

Some of the research questions are:

  1. Do citizens have a positive or negative attitude about the impact of AI?
  2. Will they really trust a driverless car, or will they passively accept the denial of a loan or insurance based on an algorithmic decision? Do states alone have the right and the expertise to regulate emerging technologies and digital infrastructures? What about technology governance?
  3. What are the dominant AI narratives among the general public?

Results Summary

The Bologna survey collected around 6000 questionnaires. Data analysis of the Bologna case study revealed a quite articulated picture, in which variables such as gender, generation and competence proved crucial to differences in understanding and knowledge of AI.

AI narratives vary considerably across social groups, underlining different degrees of awareness and social acceptance. The Umeå and CNR surveys encountered more problems in the collection phase, whereas the implementation and launch of the surveys were smooth and on time.

Tangible Outcomes

  1. Sartori, L., & Theodorou, A. “A sociotechnical perspective for the future of AI: narratives, inequalities, and human control.” Ethics and Information Technology 24.1 (2022). https://link.springer.com/article/10.1007/s10676-022-09624-3
  2. Sartori, L., & Bocca, G. “Minding the gap(s): public perceptions of AI and socio-technical imaginaries.” AI & Society 38.2 (2023): 443-458. https://philpapers.org/rec/SARMTG
  3. Slides: https://www.humane-ai.eu/_micro-projects/mps/MP-17/UNIBO_sartori_What%20idea%20of%20AI_141021_Berlin.pptx