The purpose of this micro-project is to reflect critically on the design of an AI system by investigating the role of the designer. Designers make choices throughout the design of a system, and analysing these choices and their consequences contributes to an overall understanding of the situated knowledge embedded in the system. The reflection is concerned with questions such as: what do the designer's interpretations mean for the output of the system? In what way does the designer thereby exercise power over the system? In particular, this micro-project will examine a concrete case: it will follow the design of an agent-based social simulation that models how inequality affects democracy.
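
To make the reflection concrete, the following is a minimal sketch of how such a simulation could be structured. The agent attributes (wealth, a policy preference) and the wealth-weighted voting rule are illustrative assumptions, not the project's actual model; the point is that a single design choice, here whether political influence scales with wealth, visibly changes the system's output.

```python
import random

class Citizen:
    """Hypothetical agent with wealth and a policy preference."""
    def __init__(self, wealth):
        self.wealth = wealth
        self.preference = random.random()  # position on a 0-1 policy axis

def elect_policy(agents, influence_by_wealth=True):
    """Choose a policy as the influence-weighted mean of preferences.

    The designer's choice here -- whether political influence scales
    with wealth -- directly shapes the simulated 'democracy'.
    """
    if influence_by_wealth:
        total = sum(a.wealth for a in agents)
        return sum(a.wealth * a.preference for a in agents) / total
    return sum(a.preference for a in agents) / len(agents)

# Toy run with a highly unequal (Pareto) wealth distribution
agents = [Citizen(wealth=random.paretovariate(1.5)) for _ in range(1000)]
print("policy, wealth-weighted influence:", elect_policy(agents))
print("policy, equal influence:", elect_policy(agents, False))
```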

Output

An agent-based simulation on how wealth inequality affects political relations

A conference paper critically reflecting on the design of the simulation

Presentations

Project Partners:

  • TU Delft, Jonne Maas
  • Umeå University (UMU), Virginia Dignum

Primary Contact: Jonne Maas, Delft University of Technology

Main results of micro project:

The main results will be a conference paper that illustrates how the designer of an AI system influences its final outcome, and a model that simulates how inequality affects democratic policy-making.

The micro-project is hence a collaboration between a line of philosophical work that highlights the social dimension of design and the technical design of an agent-based simulation.

Contribution to the objectives of HumaneAI-net WPs

WP5 is concerned with the responsible development of AI systems. This MP analyzes the role of a designer and, by doing so, sheds light on what is necessary for responsible AI development. To develop responsible AI, it is essential to understand the power dynamics behind an AI system's ecosystem, and for this we need to understand how and why a designer has a special role in the development of a system.

Tangible outputs

  • Publication: The Role of a Designer – Jonne Maas
    TBA
  • Other: Inequality and Democracy – Luis Gustavo Ludescher
    TBA

HumanE-AI research needs data to advance, yet researchers often struggle to progress for lack of data. At the same time, collecting a rich and accurate dataset is no easy task. We therefore propose to share, through the AI4EU platform, the datasets already collected by different research groups. The datasets will be curated to be ready to use for researchers.
Possible extensions and variations of these datasets will also be generated using artificial techniques and published on the platform.
A performance baseline will be provided for each dataset, in the form of a publication reference, a developed model, or written documentation.
The relevant legal framework will be investigated, with specific attention to privacy and data protection in relation to the use and extension of existing datasets as well as to future data collection on the subject of multimodal data collection for perception modelling. The micro-project will serve as a case study highlighting challenges and opportunities in the development of legal protection by design in data curation for machine learning.
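
As an illustration of the curation work, here is a minimal loader sketch for an OPPORTUNITY-style recording. It assumes the whitespace-separated .dat layout of the original release; the timestamp position and the number of label columns are assumptions to be checked against the dataset documentation.

```python
import numpy as np
import pandas as pd

def load_opportunity_file(path, n_label_cols=7):
    """Load one OPPORTUNITY-style .dat recording into sensors and labels.

    Sketch only: assumes whitespace-separated values with the timestamp
    in the first column and label columns at the end; n_label_cols is
    an assumption to verify against the dataset documentation.
    """
    data = np.loadtxt(path)                      # 'NaN' entries parse to np.nan
    sensors = data[:, 1:-n_label_cols]           # drop timestamp and label columns
    labels = np.nan_to_num(data[:, -n_label_cols:]).astype(int)
    # Sensor dropouts are marked as NaN; forward/backward fill is a simple fix
    sensors = pd.DataFrame(sensors).ffill().bfill().to_numpy()
    return sensors, labels

# sensors, labels = load_opportunity_file("S1-ADL1.dat")
```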

Output

Publication of the OPPORTUNITY dataset (and other datasets if time allows) on the AI4EU platform. [lead: UoS, contributor: DFKI]

Publication of a baseline performance pipeline for the OPPORTUNITY dataset (and other datasets if time allows) on the AI4EU platform; a minimal baseline is sketched after this list. [lead: UoS, contributor: DFKI]

Investigation of data-loader and pipeline integration on the AI4EU experiments platform to load HAR datasets and pre-existing pipelines, with a focus on the OPPORTUNITY dataset (and other datasets if time allows). [lead: UoS, contributor: DFKI]

Generation of dataset variations [lead: DFKI]

Survey publications describing the datasets and performance baselines [lead: DFKI, contributor: UoS]
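
As referenced above, a minimal sketch of what such a baseline pipeline could look like on OPPORTUNITY-style data: sliding windows, simple per-channel statistics as features, and an off-the-shelf classifier. Window size, step, and feature choices are illustrative, not the published baseline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def sliding_windows(sensors, labels, size=30, step=15):
    """Cut a recording into fixed-size windows with majority-vote labels.

    Mean and standard deviation per channel serve as simple features;
    size/step are illustrative values, not tuned ones.
    """
    X, y = [], []
    for start in range(0, len(sensors) - size + 1, step):
        win = sensors[start:start + size]
        lab = labels[start:start + size]
        X.append(np.concatenate([win.mean(axis=0), win.std(axis=0)]))
        # majority label in the window (assumes small non-negative label ids)
        y.append(np.bincount(lab).argmax())
    return np.array(X), np.array(y)

# Hypothetical usage with the loader sketched earlier (train/test split omitted):
# X, y = sliding_windows(sensors, labels[:, 0])
# clf = RandomForestClassifier().fit(X, y)
# print(f1_score(y, clf.predict(X), average="weighted"))
```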

Presentations

Project Partners:

  • University of Sussex (UOS), Mathias Ciliberto
  • German Research Centre for Artificial Intelligence (DFKI), Vitor Fortes Rey
  • Vrije Universiteit Brussel (VUB), Arno de Bois

Primary Contact: Mathias Ciliberto, University of Sussex

Main results of micro project:

Collection, curation and publication of five datasets for Multimodal Perception and Modeling (WP2):
– OPPORTUNITY++:
  – activities of daily living
  – sensor-rich
  – new anonymised, annotated video with OpenPose tracks
– Capacitive Gym:
  – 7 popular gym workouts
  – 11 subjects, each recorded on 5 separate days
  – capacitive sensors in 3 positions
  – new dataset
– HCI FreeHand dataset:
  – free-hand synthetic gestures
  – multiple 3D accelerometers
– SkodaMini dataset:
  – car-manufacturing gestures
  – multiple 3D accelerometers and gyroscopes
– Wearlab Beach Volleyball (https://ieee-dataport.org/open-access/wearlab-beach-volleyball-serves-and-games)

Contribution to the objectives of HumaneAI-net WPs

Multimodal perception and modeling need data to progress, but recording a new, rich and accurate dataset that allows comparative evaluations by the scientific community is no easy task. We therefore gathered rich datasets for multimodal perception and modelling of human activities and gestures, and curated them to be easy to use for research, with clear documentation and file formats.
The highlight of this micro-project is the OPPORTUNITY++ dataset of activities of daily living, a multimodal extension of the well-established OPPORTUNITY dataset. We enhanced this dataset of wearable sensor data with previously unreleased data, including video and motion-tracking data, which makes OPPORTUNITY++ a truly multimodal dataset with wider appeal, for instance to the computer vision community.
In addition, we released other well-established activity datasets (HCI FreeHand and SkodaMini) as well as a dataset involving novel sensor modalities (CapacitiveGym) and a skill-assessment dataset (Wearlab BeachVolleyball).

Tangible outputs

Nowadays, ML models are used in real-world decision-making processes by learning a function that maps observed features to decision outcomes. However, these models usually do not convey causal information about the associations in observational data: they are not easily understandable for the average user, their steps cannot be retraced, and their reasoning cannot be relied upon. It is therefore natural to investigate more explainable methodologies, such as causal discovery approaches, since they apply processes that mimic human reasoning. For this reason, we propose using such methodologies to create more explainable models that replicate human thinking and are easier for the average user to understand. More specifically, we suggest applying them to methods such as decision trees and random forests, which are by themselves highly explainable correlation-based methods.
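
As a minimal sketch of the proposed combination, the code below stands in a naive partial-correlation test for a full constraint-based discovery algorithm (such as PC) to select likely causal parents of the outcome, then fits an interpretable decision tree on those features only. The test, the threshold, and the helper names are illustrative assumptions, not the project's method.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def partial_corr(x, y, z):
    """Partial correlation of x and y given covariates z (via linear residuals)."""
    zz = np.column_stack([np.ones(len(z)), z])  # add an intercept column
    def residual(v):
        beta, *_ = np.linalg.lstsq(zz, v, rcond=None)
        return v - zz @ beta
    return np.corrcoef(residual(x), residual(y))[0, 1]

def causal_parents(X, y, threshold=0.1):
    """Naive stand-in for a constraint-based discovery step (e.g. the PC
    algorithm): keep features whose association with y survives
    conditioning on all remaining features."""
    parents = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        if abs(partial_corr(X[:, j], y.astype(float), others)) > threshold:
            parents.append(j)
    return parents

# Hypothetical usage: fit an interpretable tree only on the selected parents
# parents = causal_parents(X_train, y_train)
# tree = DecisionTreeClassifier(max_depth=4).fit(X_train[:, parents], y_train)
```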

Output

1 Conference Paper

1 Prototype

Dataset Repository

Project Partners:

  • INESC TEC, Joao Gama
  • Università di Pisa (UNIPI), Dino Pedreschi
  • Consiglio Nazionale delle Ricerche (CNR), Fosca Giannotti

Primary Contact: Joao Gama, INESC TEC, University of Porto

Main results of micro project:

1) Journal paper submitted to WIREs Data Mining and Knowledge Discovery:
Methods and Tools for Causal Discovery and Causal Inference
Ana Rita Nogueira, Andrea Pugnana, Salvatore Ruggieri, Dino Pedreschi, João Gama
(under evaluation)

2) Github repository of datasets, software, and papers related to causal discovery and causal inference research

https://github.com/AnaRitaNogueira/Methods-and-Tools-for-Causal-Discovery-and-Causal-Inference

Contribution to the objectives of HumaneAI-net WPs

The HumanE-AI project envisions a society of increasing interaction between humans and artificial agents. Across the project, causal models are relevant for plausible models of human behavior, for human-machine explanations, and for upgrading machine-learning algorithms with causal-inference mechanisms.

The output of the micro-project is an in-depth study of causal discovery and causal inference. Moreover, the GitHub repository of datasets, papers, and code will be an excellent source of resources for those who want to study the topic.

Tangible outputs

This proposal sets out to conduct empirical research exploring the social and public attitudes of individuals towards AI and robots.

AI and robots will enter many more aspects of our daily life than the average citizen is aware of, and they are already reshaping specific domains such as work, health, security, politics and manufacturing. Alongside technological research, it is fundamental to grasp and gauge the social implications of these processes and their acceptance by a wider audience.

Some of the research questions are:

Do citizens have a positive or negative attitude towards the impact of AI?

Will they really trust a driverless car, or will they passively accept the denial of a loan or insurance based on an algorithmic decision? Do states alone have the right and expertise to regulate emerging technologies and digital infrastructures? What about technology governance?

What are the dominant AI narratives among the general public?

Output

Two scientific papers in journals such as the following (depending on peer review):
  • AI & Society (https://www.springer.com/journal/146)
  • Journal of Artificial Intelligence Research (https://www.jair.org/index.php/jair)
  • Big Data & Society (https://journals.sagepub.com/home/bds)
Other potential journals will also be considered (Social Science Computer Review, Public Understanding of Science).

Public presentations at scientific conferences (e.g. HICSS 2022, AAAI) and at more general-interest conferences in the second half of 2021 and the first half of 2022.

Presentations

Project Partners:

  • Università di Bologna (UNIBO), Laura Sartori
  • Umeå University (UMU), Andreas Theodorou
  • Consiglio Nazionale delle Ricerche (CNR), Fosca Giannoti

Primary Contact: Laura Sartori, UNIBO – Dept of Political and Social sciences

Main results of micro project:

The Bologna survey collected around 6,000 questionnaires. Data analysis of the Bologna case study revealed a quite articulated picture in which variables such as gender, generation and competence proved crucial to differences in the understanding of and knowledge about AI.
AI narratives vary considerably across social groups, underlining different degrees of awareness and social acceptance.
The UMU and CNR surveys had more problems in the collection phase, although the implementation and launch of the surveys were smooth and on time. A second round will take place in September, which we hope will yield a more fruitful data collection.

2 papers submitted to international journals.

Sartori and Theodorou (under review), A sociotechnical perspective for the future of AI: narratives, inequalities, and human control, in Ethics and Information Technology.

Sartori and Bocca (under review), Minding the gap(s): public perceptions of AI and socio-technical imaginaries, in AI & Society.

Contribution to the objectives of HumaneAI-net WPs

The micro-project addresses issues related to social trust, cohesion, and public perception. It has clarified how and to what degree AI is accepted by the general public and highlighted the different levels of public acceptance of AI across social groups.
The goals of T4.3 are met: the Sartori and Bocca article shows how perceptions and narratives differ along the main sociodemographic variables. Notable is the (expected) gender effect: women report less knowledge of AI and trust AI systems less. Especially when it comes to depicting future and dystopian scenarios, women tend to fear technologies more than men.

The goals of T5.3 are met by the work presented in Sartori and Theodorou.
Focusing on the main challenges associated with the spread of AI and autonomous systems in society, the article points to bias and unfairness as being among the major challenges to be addressed from a sociotechnical perspective.

Tangible outputs

  • Publication: A sociotechnical perspective for the future of AI: narratives, inequalities, and human control, in Ethics and Information technology. – Sartori, Laura
    Theodorou, A.
    Accepted in Ethics and Information Technology https://www.springer.com/journal/10676
  • Publication: Minding the gap(s): public perceptions of AI and socio-technical imaginaries – Sartori, Laura
    Bocca, Giulia
    Submitted to AI&Society, https://www.springer.com/journal/146

Attachments

UNIBO_sartori_What idea of AI_141021_Berlin.pptx

In this project we will investigate whether normative behavior can be detected in Facebook groups. In a first step we will hypothesize about possible norms that could lead to a group becoming more extreme on social media, or about whether groups that become more extreme develop certain norms that distinguish them from other groups and that could be detected. An example of such a norm could be that a (self-proclaimed) leader of a group is massively supported by retweets, likes or affirmative messages, along with evidence of verbal sanctioning of counter-normative replies. Simulations and analyses of historical Facebook data (using manual detection in specific case studies and, more broadly, NLP) will help reveal the existence of normative behavior and its potential change over time.
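
As an illustration, one hypothesized norm marker, concentrated affirmation of a single leader, could be computed from an interaction table as sketched below. The column schema (group, target, kind) is hypothetical, not a real platform export format.

```python
import pandas as pd

def leader_support_score(interactions: pd.DataFrame) -> pd.Series:
    """Per group: share of affirmative interactions that go to the single
    most-supported member. Values near 1.0 would match the hypothesized
    'leader amplification' norm.

    Expects columns group, target (the member whose post is reacted to)
    and kind ('like', 'share', 'affirm', 'sanction'); this schema is
    hypothetical.
    """
    support = interactions[interactions["kind"].isin(["like", "share", "affirm"])]
    per_member = support.groupby(["group", "target"]).size()
    return per_member.groupby("group").max() / per_member.groupby("group").sum()

# Groups scoring near 1.0, combined with frequent 'sanction' reactions to
# dissenting replies, would be candidates for manual case-study inspection.
```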

Output

Report describing guidelines to detect normative behavior on social media platforms

Presentations

Project Partners:

  • Umeå University (UMU), Frank Dignum
  • Consiglio Nazionale delle Ricerche (CNR), Eugenia Polizzi

Primary Contact: Frank Dignum, Umeå University

Main results of micro project:

The project delivered detailed analyses of the tweets around the USA elections and the subsequent riots. Where we thought we might discover patterns in the tweets indicating more extreme behavior, it appears that extremist expressions are quickly banned from Twitter and find a home on more niche social platforms (in this case Parler). The main conclusion of this project is therefore that we need to find the connections between users on different social media platforms in order to track extreme behavior.

Contribution to the objectives of HumaneAI-net WPs

In order to see how individuals might contribute to behavior that is not in the interest of society, we cannot analyze a single social media platform: more extremist expressions in particular quickly move from mainstream social media to niche platforms, which can themselves change quickly over time. The connection between individual and societal goals is thus difficult to observe by analyzing data from a single social media platform. On the other hand, it is very difficult to link users across platforms.

Tangible outputs

  • Other: Identification of radical behavior in Parler groups – Frank Dignum
  • Other: Characterizing the language use of radicalized communities detected on Parler – Frank Dignum

In this activity we want to tackle the fundamental heterogeneity of cultural heritage data by structuring the knowledge available from user experience with methods of machine learning. The overall objective of this micro-project is to design new methodologies to extract and produce new information, to offer scholars and practitioners new, even unexpected and surprising, connections and knowledge, and to make new sense of cultural heritage by connecting items and creating sense and narratives with methods based on network theory and artificial intelligence.
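
A minimal sketch of the network-based approach, assuming a hypothetical schema in which each heritage item carries a set of metadata tags: items sharing tags are linked, and standard graph analysis can then surface unexpected connections between artifacts.

```python
import networkx as nx
from itertools import combinations

def heritage_network(items):
    """Build a network of heritage items linked by shared metadata.

    `items` maps an item id to a set of attribute tags (artist, period,
    material, place...); the schema is illustrative. Edges are weighted
    by the number of shared tags, so community detection or shortest
    paths can surface non-obvious connections.
    """
    g = nx.Graph()
    g.add_nodes_from(items)
    for a, b in combinations(items, 2):
        shared = items[a] & items[b]
        if shared:
            g.add_edge(a, b, weight=len(shared), shared=sorted(shared))
    return g

# Toy example with made-up records:
g = heritage_network({
    "amphora_12": {"terracotta", "Etruscan", "6th c. BC"},
    "fresco_3": {"Etruscan", "Tarquinia"},
    "vase_7": {"terracotta", "Tarquinia"},
})
print(nx.shortest_path(g, "amphora_12", "vase_7"))
```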

Output

Database usable by the people in the Consortium as a pilot case

Paper on the topic

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), Guido Caldarelli
  • Consiglio Nazionale delle Ricerche (CNR), Antonio Scala
  • Consiglio Nazionale delle Ricerche (CNR), Emilia La Nave

Primary Contact: Guido Caldarelli, CNR/ISC

Building AI machines capable of making decisions compliant with ethical principles is a challenge that must be faced in order to improve the reliability and fairness of AI.

This micro-project aims at combining argument mining and argumentation-based reasoning to ensure ethical behavior in the context of chatbot systems. Argumentation is a powerful tool for modeling conversations and disputes. Argument mining is the automatic extraction of arguments from natural language input, which can be applied both in the analysis of user input and in the retrieval of suitable feedback to the user. We aim to augment classical argumentation frameworks with ethical and/or moral constraints and with natural language interaction capabilities, in order to guide the conversation between chatbots and humans in accordance with the ethical constraints.
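
As a minimal sketch of the reasoning core such a chatbot could build on, the following computes the grounded extension of a Dung-style abstract argumentation framework. The ethical constraint is encoded, purely for illustration, as a meta-argument attacking a non-compliant reply; the argument names are hypothetical.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of a Dung abstract argumentation framework.

    `attacks` is a set of (attacker, target) pairs. Iteratively accept
    arguments all of whose attackers are defeated by already-accepted
    arguments; unattacked arguments are accepted immediately.
    """
    accepted, changed = set(), True
    while changed:
        changed = False
        for a in arguments - accepted:
            attackers = {x for (x, t) in attacks if t == a}
            if all(any((d, x) in attacks for d in accepted) for x in attackers):
                accepted.add(a)
                changed = True
    return accepted

# Toy chatbot scenario: ethical meta-argument E attacks an unethical reply U,
# which in turn attacks a safe reply R.
args = {"E", "U", "R"}
atts = {("E", "U"), ("U", "R")}
print(grounded_extension(args, atts))  # {'E', 'R'}: the unethical reply is excluded
```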

Output

conference paper

Presentations

Project Partners:

  • Consiglio Nazionale delle Ricerche (CNR), Bettina Fazzinga
  • Università di Bologna (UNIBO), Paolo Torroni

Primary Contact: Bettina Fazzinga, CNR

Main results of micro project:

We propose a general-purpose dialogue system architecture that leverages computational argumentation and state-of-the-art language technologies to implement ethics by design.

In particular, we propose a chatbot architecture that relies on transparent and verifiable methods and is conceived so as to respect relevant data protection regulations. Importantly, the chatbot is able to explain its outputs or recommendations in a manner adapted to the intended (human) user.

We evaluate our proposal on a COVID-19 vaccine case study.

Contribution to the objectives of HumaneAI-net WPs

In the context of information-providing chatbots and assistive dialogue systems, especially in the public sector, ethics by design requires trustworthiness, transparency, explainability and correctness, and it requires architectural choices that take data access into account from the very beginning.

The main features of our chatbot architecture, with respect to the objectives of HumaneAI-net WP5, are:
– an architecture for AI dialogue systems in which user interaction is carried out in natural language, not only to provide information to the user but also to answer user queries about the reasons leading to the system output (explainability);
– a transparent reasoning module, built on top of a computational argumentation framework with a rigorous, verifiable semantics (transparency, auditability);
– a modular architecture, which enables an important decoupling between the natural language interface, where user data is processed, and the reasoning module, where expert knowledge is used to generate outputs (privacy and data governance).

Tangible outputs