Contact person: Mārcis Pinnis (marcis.pinnis@tilde.lv)

Internal Partners:

  1. Tilde
  2. Charles University

 

Within the scope of the project, we have created evaluation and development data sets for speech translation for meetings (for English->Latvian, Latvian->English, and Lithuanian->English) (http://hdl.handle.net/20.500.12574/74). We specifically contributed speech translation evaluation and development data sets for English->Latvian (4 hours and 40 minutes), Latvian->English (4 hours and 52 minutes), and Lithuanian->English (3 hours and 31 minutes). Additionally, we created an automatic minuting test set for the AutoMin 2023 shared task on automatic creation of meeting summaries (“minutes”) for English and Czech (https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-4692). Specifically, for Task A we created 10 English and 10 Czech meeting transcripts with manually created reference minutes, and for Task D we created manual alignments between AutoMin 2021 system outputs and transcripts. We also contributed to the human evaluation of machine translation systems for the Seventh Conference on Machine Translation (WMT) for the English->Czech translation direction. The microproject produced data that will be beneficial for future development of speech translation and automatic minuting systems. The data will allow us to better understand the capabilities of these systems and to identify potential areas of improvement. The results will contribute towards developing robust and trustworthy AI systems. Furthermore, the project contributed towards mobilization of the research landscape by involving a European research institution (Charles University) and an industry partner (Tilde).

Results Summary

We show that the proposed approach provides high-quality semantic segmentation from the robot’s perspective, with accuracy comparable to the original one. In addition, we exploited the gained information and improved the recognition performance of the deep network for the lower viewpoints and showed that the small robot alone is capable of generating high-quality semantic maps for the human partner. The computations are close to real time, so the approach enables interactive applications.

 

Tangible Outcomes

  1. evaluation and development data sets for speech translation for meetings (for English->Latvian, Latvian->English, and Lithuanian->English) (http://hdl.handle.net/20.500.12574/74 )
  2. ELITR minuting corpus: an automatic minuting test set for the AutoMin 2023 shared task on automatic creation of meeting summaries (“minutes”) for English and Czech (https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-4692 )

Contact person: Francesco Spinnato (francesco.spinnato@sns.it, francesco.spinnato@isti.cnr.it)

Internal Partners:

  1. Università di Pisa
  2. ISTI-CNR Pisa
  3. Generali Italia

 

The increasing availability of real-time sequential data, combined with advanced AI decision-making systems, is transforming the mobility industry. Crash Data Recorders (CDRs) are increasingly being used in cars to monitor safety measures, establish human tolerance limits, and quantify vehicle status. These recorders are usually installed on the airbag control module, collecting data before and after a crash. Recently, with the use of powerful Machine Learning (ML) models, these devices have become a valuable source of data for both academic research and businesses, such as insurance companies, to monitor and improve customer service quality.

In this work, we collaborated with Generali Italia, Italy’s biggest insurance company and part of Assicurazioni Generali, one of the largest global insurance and asset management providers. Generali Italia is developing an automatic classification system to provide first aid to its customers. As part of its insurance products, Generali offers to install a CDR in its customers’ vehicles. This system monitors the vehicle during use and, among other services, tracks speed and acceleration on the three car axes. These data are used to train a deep learning model that enables the AI system to alert a Generali operator of possible car crashes.

By examining the CDR data and model predictions, the operator can make an informed decision and only contact the customer if assistance is really necessary. Two weaknesses are currently present. First, the high sensitivity of the AI system might cause unnecessary and intrusive calls. Second, the AI system is based on a deep learning model that is inherently not interpretable, i.e., it is a black box. This lack of transparency could hinder the operator’s understanding of the model’s outcome, potentially leading to a lack of trust, especially if it produces incorrect classifications. Moreover, the opaque nature of a deep learning model makes it difficult to improve the model’s technical performance once a certain plateau is reached (in the specific case under consideration, the reduction of false positives). In such a critical scenario, eXplainable Artificial Intelligence (XAI) is essential for interpreting these black-box predictions to ensure reliability in decision-making. XAI for time series classification is a rapidly emerging field that presents many challenges due to the nature of time series data, which can be large, multivariate, highly imbalanced, and irregular. These characteristics, as seen in the datasets used by Generali, often cause off-the-shelf XAI approaches to fail due to the limitations of their implementations.

In this work we tackle the challenge of explainability in car crash prediction from different angles, utilizing real-world time series datasets for two distinct tasks:

1) standard time series classification and

2) classification of highly imbalanced time series, which is more akin to anomaly detection.

For the former, we combine existing post-hoc and ante-hoc XAI approaches in a pipeline that provides insights into the logic behind the black-box model used by Generali, enabling the construction of a more transparent predictive model. For the latter, we introduce Multivariate Asynchronous Shapelets, an interpretable-by-design approach based on multivariate shapelets, specifically developed to challenge state-of-the-art classifiers and anomaly detection algorithms, as well as the black-box model currently used by Generali.
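To make the shapelet idea concrete, the sketch below shows the core primitive that shapelet-based classifiers rely on: the distance between a short candidate pattern and a longer series is the minimum Euclidean distance over all sliding windows, and per-channel shapelets can be matched at independent offsets (“asynchronously”). This is a minimal illustration under our own assumptions, not the Multivariate Asynchronous Shapelets implementation from the paper listed below; all names and data are invented.

```python
# Minimal sketch of the core shapelet primitive (illustrative, not the
# paper's implementation).
import numpy as np

def shapelet_distance(series: np.ndarray, shapelet: np.ndarray) -> float:
    """Minimum sliding-window Euclidean distance between shapelet and series."""
    m = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, m)
    return float(np.min(np.linalg.norm(windows - shapelet, axis=1)))

def asynchronous_distances(series_by_channel, shapelets_by_channel):
    """Match each per-channel shapelet independently ("asynchronously"):
    the best offset need not coincide across channels. The resulting
    distance vector can feed any interpretable classifier."""
    return [shapelet_distance(series_by_channel[c], s)
            for c, s in shapelets_by_channel.items()]

# Example: 3-axis acceleration from a hypothetical CDR, with one
# crash-like spike shapelet on the x axis.
rng = np.random.default_rng(0)
ts = {axis: rng.standard_normal(200) for axis in ("x", "y", "z")}
shp = {"x": np.array([0.0, 1.5, 3.0, 1.5, 0.0])}
print(asynchronous_distances(ts, shp))
```

The interpretability comes from the fact that each feature is tied to a concrete, plottable pattern and the location where it best matches the series.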

Results Summary

We developed a pipeline combining post-hoc and ante-hoc XAI for standard time series classification, and introduced Multivariate Asynchronous Shapelets, an interpretable method designed to surpass state-of-the-art classifiers and Generali’s black-box model. The results have been published (see Tangible Outcomes).

In addition to the scientific contribution on XAI, it is important to highlight how the application of AI systems to automate the remote detection of potential car accidents by an insurance company has a positive impact on road safety, improving rescue operations and helping to reduce the potential impacts of an accident on the health of the insured.

Tangible Outcomes

  1. M. Bianchi, F. Spinnato, R. Guidotti, D. Maccagnola, A. Bencini Farina. “Multivariate Asynchronous Shapelets for Imbalanced Car Crash Predictions”. In: Proceedings of the 27th International Conference on Discovery Science (DS 2024). Accepted for publication in 2024; the proceedings were not yet published at the time of writing.

Contact person: Richard Benjamins (richard.benjamins@telefonica.com)

Internal Partners:

  1. Telefónica Investigación y desarrollo S.A. (TID), Richard Benjamins
  2. Volkswagen AG, Richard Niestroj
  3. Università di Bologna (UNIBO), Laura Sartori
  4. Consiglio Nazionale delle Ricerche (CNR), Fosca Giannotti  

External Partners:

  1. City Council, Valladolid, Pedro de Alarcon, pedroantoniode.alarconsanchez@telefonica.com

 

Globally, nine out of ten people breathe polluted air, and it is the direct cause of death of more than seven million people per year. Between 20% and 40% of deaths due to serious diseases are caused by air pollution (source: https://www.stateofglobalair.org/sites/default/files/documents/2020-10/soga-global-profile-factsheet.pdf). In Spain, 10,000 people die every year due to air pollution (almost triple the number of traffic deaths), and in Madrid alone there are 5,000 pollution deaths per year (14 per day). Transportation by combustion engines is responsible for about 30% of air pollution, and in large cities this share is higher. Urban areas and their respective local governments are facing immense challenges with accelerating rates of NO2, ozone, particulate matter, and CO2 emissions, amongst other pollutants. In their mission to ensure cleaner air for their cities, the first and most important step is to collect accurate and consistent data to ensure healthy air quality levels for citizens as well as to identify the major air pollution hotspots. Moreover, cities are increasingly looking at their transit systems to cut those emissions that impact public health and the environment.

Until now, monitoring air quality has involved great effort for cities. For local governments, air quality management can be costly due to the expensive equipment required to monitor the key pollutants that worsen air quality. There are several sources of pollutants: industrial activities, construction, and residential heating, among others, but road traffic of fossil-combustion vehicles is the most prevalent source of dangerous pollutants such as NO2 and ozone (O3). However, the way actual traffic volumes are investigated is still relatively manual, using roadside interview data and manual counters, although IoT sensors are increasingly deployed. Not only is this expensive, but it is often also inaccurate, providing only a small snapshot of how traffic really moves around cities and countries. By using mobility data and IoT, the authorities can shift to Big Data and AI. Rather than relying on small samples, they can now receive more frequent, precise, and granular insights. That is an important complement to inform decisions with respect to air quality, as traffic, along with weather conditions, is closely correlated with air pollution levels.

European regulation requires cities not to exceed thresholds for pollutants. However, measurements often take place at the district level, ignoring the fact that air quality might differ from street to street. Moreover, air quality does not have the same importance in a residential area as in a more industrial area, and the type of use also matters: schools, hospitals, sports facilities, et cetera.

The combination of mobility data (generated from anonymized and aggregated mobile phone data of the telecommunications sector), IoT pollution and climate sensor data from moving vehicles, and Open Data, can provide actionable insights about traffic mobility patterns and pollution such that authorities and policymakers can better measure, predict and manage cities’ mobility and pollution. This micro project is strategically aligned with Europe’s Green Deal and the EU Data Strategy.

Results Summary

Artificial intelligence algorithms help to increase the spatio-temporal accuracy of the monitoring activity and to provide predictions of future (dangerous) pollution levels, so that authorities can take preventive actions. We performed a series of innovation activities, from the development of a prototype in one city (Madrid), which we subsequently validated in a second city (Valladolid), to a social and ethical impact analysis to understand whether air-quality-related decisions affect social groups equally. The prototype we built, in collaboration with the city of Madrid, exploits both privately held data and publicly available (open) data to monitor air quality at street level. Data sources include traffic, vegetation, temperature/wind speed, and demographics. The system allows cities to perform evidence-based policy- and decision-making. An important feature of the system is the collection of heterogeneous data, algorithms, advanced visualization, and filtering control in a single platform. This capacity is key to performing exploratory data analysis and finding insights.
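As an illustration of the modelling step, the sketch below fits a simple regressor that estimates street-level NO2 from heterogeneous features such as traffic, weather, and vegetation. The column names, values, and model choice are assumptions for illustration only; the actual platform built with Madrid is not public code.

```python
# Hedged sketch: predicting street-level NO2 from heterogeneous features.
# All column names and values are invented for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "traffic_volume":   [1200, 300, 950, 80, 620, 410],   # vehicles/hour
    "wind_speed_ms":    [1.2, 4.5, 0.8, 5.0, 2.1, 3.3],
    "temperature_c":    [18, 12, 25, 9, 20, 15],
    "vegetation_index": [0.1, 0.6, 0.2, 0.7, 0.3, 0.5],
    "no2_ugm3":         [62, 18, 55, 10, 38, 27],          # target: NO2 (ug/m3)
})
X, y = df.drop(columns="no2_ugm3"), df["no2_ugm3"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("predicted NO2 (ug/m3):", model.predict(X_te))
```

In a real deployment, predictions like these would be produced per street segment and time window, feeding the visualization and alerting layers described above.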

This project uses industrial data from the telecommunications industry, combined with open data and IoT-generated data, to alleviate an important societal problem, while at the same time showing a way in which the telecom sector can create value using artificial intelligence and data. It is aligned with the European Data Strategy, the Guidelines for Trustworthy AI, and the European Green Deal. This is the first of a series of three micro projects.

Tangible Outcomes

  1. Press release through the participating organizations’ websites raising awareness about the issue: https://unstats.un.org/unsd/undataforum/blog/7-ways-mobile-data-is-being-used-to-change-the-world/ 
  2. video explaining the project Air Quality for All (AQ4A) that could be used for government and business presentations https://www.youtube.com/watch?v=WBNf5F9Kp7c
  3. Source of the presentation slides https://www.humane-ai.eu/_micro-projects/mps/MP-23/MP-6.10-airquality_v2_Berlin.pptx

Contact person: Agnes Grünerbl (agnes.gruenerbl@dfki.de)

Internal Partners:

  1. DFKI, Agnes Gruenerbl, Passant Elagroudy, and Paul Lukowicz  

External Partners:

  1. RPTU Landau, Thomas Lachmann and Jan Spilski
  2. Keio University, Giulia Barbareschi and Kai Kunze  

 

The main goal of the Humane AI Net project is to build up a network of AI research, mainly within Europe. The recent UbiCHAI tutorial on experimental methodologies for cognitive human augmentation, held at the UbiComp conference and co-sponsored by the Humane AI Net project, received great feedback and drew more attendees than initially expected. A follow-up workshop would therefore help to strengthen and extend the international connections Humane AI Net built during this UbiComp tutorial.

A conference that fits nicely with both the scope of the UbiCHAI tutorial and the broad range of Humane AI Net is the Augmented Humans conference.

The Augmented Humans community is a rather young but vibrant community that has been running steadily for 10 years. With its goal of augmenting humans, Augmented Humans has a focus similar to that of the Humane AI Net community. As stated on its website: “The conference focuses on physical, cognitive, and perceptual augmentation of humans through digital technologies. The plural – humans – emphasizes the move towards technologies that enhance human capabilities beyond the individual and will have the potential for impact on a societal scale. The idea of augmenting the human intellect has a long tradition, the term was coined by Douglas Engelbart in 1962. Today, many of the technologies envisioned by Engelbart and others are commonplace, and looking towards the future, many technologies which amplify the human body and mind far beyond the original vision are within reach.”

The joint goals of Humane AI Net and Augmented Humans, namely the social, cognitive, and perceptual augmentation of the human, make the Augmented Humans conference an ideal venue for a Humane AI Net International Workshop on Ubiquitous Technologies for Cognitive enhancement of Human-centred AI (UbiCHAI), connecting both communities.

We aim for a full-day workshop that connects researchers working on the different aspects of hybrid human-AI systems with cognitive and social science to augment the human, and that provides a platform where research can be presented and new ideas developed, spanning the cognitive perception of AI as well as the fields of social behavior, health and mental care, subject didactics, digitalization, the economy, and others.

Results Summary

This workshop was a collaboration with Cognitive and Developmental Psychology at RPTU Kaiserslautern and Media Design at Keio University, Japan. After an initial rejection from the Augmented Humans conference, we submitted the workshop proposal to the MobileHCI conference, which was hosted in Melbourne, Australia, this year. One of the reasons to host this workshop in Australia was to build up connections for the Humane AI Net network to Australia as well (after hosting events in Mexico and Japan). The workshop was quite successful and gained a lot of interest from attendees of the conference, including the local chairs and organizers, who attended the workshop. Our workshop thus turned into by far the largest workshop at the MobileHCI conference (25+ attendees). A highlight of the workshop was that we secured Prof. Thad Starner from Georgia Tech as a keynote speaker and attendee. The workshop theme was to look at methods to sense, simulate, influence, and evaluate cognitive functions using human-centered AI. Cognitive functions refer to perception, attention, memory, language, problem solving, reasoning, and decision making. We had 8 paper submissions to the workshop, and as a follow-up to the work done in the workshop, one of the organizers (Passant Elagroudy) was invited to attend a Dagstuhl seminar in 2025 about cognitive augmentation.

Tangible Outcomes

  1. Passant Elagroudy, Agnes Grünerbl, Giulia Barbareschi, Jan Spilski, Kai Kunze, Thomas Lachmann, Paul Lukowicz: mobiCHAI – 1st International Workshop on Mobile Cognition-Altering Technologies (CAT) using Human-Centered AI. MobileHCI (Companion) 2024: 31:1-31:5 https://dl.acm.org/doi/abs/10.1145/3640471.3680462 
  2. Workshop in MobileHCI’24 in Melbourne, Australia http://ai-enhanced-cognition.com/mobichai/ https://mobilehci.acm.org/2024/acceptedworkshops.php

Contact person: Haris Papageorgiou (haris@athenarc.gr)

Internal Partners:

  1. ATHENA RC, Haris Papageorgiou
  2. German Research Centre for Artificial Intelligence (DFKI), Julián Moreno Schneider
  3. OpenAIRE, Natalia Manola

 

SciNoBo is a microproject focused on enhancing science communication, particularly on health and climate change topics, by integrating AI systems with science journalism. The project aims to assist science communicators, such as journalists and policymakers, by utilizing AI to identify, verify, and simplify complex scientific statements found in mass media. By grounding these statements in scientific evidence, the AI will help ensure accurate dissemination of information to non-expert audiences. This approach builds on prior work involving neuro-symbolic question-answering systems and aims to leverage advanced language models, argumentation mining, and text simplification technologies. Technologically, we build on our previous microproject work on neuro-symbolic Q&A and further exploit and advance recent developments in instruction fine-tuning of large language models, retrieval augmentation, and natural language understanding, specifically the NLP areas of argumentation mining, claim verification, and text (i.e., lexical and syntactic) simplification. The proposed microproject addresses the topic of “Collaborative AI” by developing an AI system equipped with innovative NLP tools that can collaborate with humans (i.e., science communicators, SCs), communicating statements on Health and Climate Change topics, grounding them in scientific evidence (interactive grounding), and providing explanations in simplified language, thus facilitating SCs in science communication. The innovative AI solution will be tested in a real-world scenario in collaboration with OpenAIRE, employing OpenAIRE Research Graph (ORG) services on Open Science publications.

Results Summary

The project was divided into two phases that ran in parallel. The main focus of Phase I was the construction of the data collections and the adaptations and improvements needed in the PDF processing tools. Phase II dealt with the development of the two subsystems, claim analysis and text simplification, as well as their evaluation.

  • Phase I: Two collections of news stories and scientific publications were compiled in the areas of Health and Climate. The news collection was built from an existing dataset of news stories, using the ARC automated classification system in the areas of interest. The second collection, of scientific publications, was provided by the OpenAIRE ORG service and further processed, managed, and properly indexed by the ARC SciNoBo toolkit. A small-scale annotation was carried out by DFKI in support of the simplification subsystem.
  • Phase II: We developed, fine-tuned, and evaluated the two subsystems. Concretely, the “claim analysis” subsystem encompasses (i) ARC’s previous work on claim identification, (ii) a retrieval engine fetching relevant scientific publications (based on our previous miniProject), and (iii) an evidence-synthesis module indicating whether the fetched publications, and the scientists’ claims therein, support or refute the news claim under examination (a schematic sketch of such a pipeline follows below).
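The following schematic sketch illustrates the three-stage structure of such a claim-analysis pipeline. Every component here is a toy stand-in: in the real system, claim identification uses fine-tuned LLMs, retrieval uses a production engine, and evidence synthesis uses a trained model (the actual services are linked under Tangible Outcomes).

```python
# Schematic sketch of a claim-analysis pipeline: (i) claim identification,
# (ii) retrieval of related publications, (iii) evidence synthesis.
# All components are toy stand-ins for the real SciNoBo services.
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    label: str            # "SUPPORTS", "REFUTES", or "NOT ENOUGH INFO"
    evidence: list        # snippets from retrieved publications

def identify_claim(news_text):
    # Stand-in: a fine-tuned LLM extracts the major claim in the real system.
    return news_text.split(".")[0]

def retrieve(claim, corpus, k=3):
    # Stand-in: simple token overlap instead of a dense/BM25 retrieval engine.
    overlap = lambda doc: len(set(claim.lower().split()) & set(doc.lower().split()))
    return [d for d in sorted(corpus, key=overlap, reverse=True)[:k] if overlap(d) > 0]

def synthesize(claim, snippets):
    # Stand-in: an evidence-synthesis (NLI-style) model classifies in reality.
    label = "SUPPORTS" if snippets else "NOT ENOUGH INFO"
    return Verdict(claim, label, snippets)

corpus = ["Vaccination reduces severe outcomes.", "Sea levels are rising."]
claim = identify_claim("Vaccines reduce severe outcomes. More tonight at 10.")
print(synthesize(claim, retrieve(claim, corpus)))
```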

 

Tangible Outcomes

  1. Kotitsas, S., Kounoudis, P., Koutli, E., & Papageorgiou, H. (2024, March). Leveraging fine-tuned Large Language Models with LoRA for Effective Claim, Claimer, and Claim Object Detection. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 2540-2554).  https://aclanthology.org/2024.eacl-long.156/ 
  2. HCN dataset: news articles in the domain of Health and Climate Change. The dataset contains news articles, annotated with the major claim, claimer(s) and claim object(s). https://github.com/iNoBo/news_claim_analysis 
  3. Website demo: http://scinobo.ilsp.gr:1997/services 
  4. Services for claim identification and the retrieval engine http://scinobo.ilsp.gr:1997/live-demo?HFSpace=inobo-scinobo-claim-verification.hf.space 
  5. Service for the text simplification http://scinobo.ilsp.gr:1997/text-simplification 

Contact person: Fernando Martin Maroto (Algebraic AI) (martin.maroto@algebraic.ai)

Internal Partners:

  1. Algebraic, Fernando Martin
  2. Christian Weis (Technische Universität Kaiserslautern)  

 

Algebraic Machine Learning (AML) offers new opportunities in terms of transparency and control. However, this comes with many challenges regarding software and hardware implementations. To understand the hardware needs of this new method, it is essential to analyze the algorithm and its computational complexity. With this understanding, the final goal of this microproject is to investigate the feasibility of various hardware options, particularly in-memory processing hardware acceleration for AML.

Results Summary

Sparse Crossing is a machine learning algorithm based on algebraic semantic embeddings. The goal of the collaboration was to first understand the needs and computational complexity of Sparse Crossing and then perform a feasibility analysis of various hardware options for an efficient implementation of the algorithm. In particular, in-memory processing hardware acceleration and FPGA-based implementations were considered. A report and an FPGA-based prototype have been developed (the prototype is currently under patent).

Contact person: Richard Benjamins (richard.benjamins@telefonica.com)

Internal Partners:

  1. TID

External Partners:

  1. City Council of Valladolid
  2. City of Madrid
  3. National statistics office of Spain

 

This third micro project of WP6.10 focuses on assessing the ethical and social impact of the air quality system, which supports city governments in taking data-driven decisions to better manage challenges related to air quality. We want to make sure, however, that those decisions are fair and do not have undesired negative consequences, such as increasing inequality or negatively affecting vulnerable groups. That is the objective of this last micro project in a series of three: the first micro project developed the prototype, and the second validated it in a real city. We do the assessment for the city of Madrid because this city has more relevant data available. This is a collaboration with WP5. For assessing the ethical impact, we use open data from the city as well as from the Spanish national statistics office: demographic data from census information such as gender, foreign-born population, age range, socioeconomic level, et cetera.

Contact person: Richard Benjamins (richard.benjamins@telefonica.com)

Internal Partners:

  1. TID

External Partners:

  1. City Council of Valladolid

 

This second micro project of WP6.10 validates the air quality prototype developed in the first micro project with a second, real city: Valladolid in Spain. In principle, there are no new developments except for feedback about the system from the city. This project is also in line with the objective of Humane AI: to shape the AI revolution in a direction that is beneficial to humans both individually and societally, and that adheres to European ethical values and social, cultural, legal, and political norms. The focus of the project is on insights generated from data collected with mobile air quality measurement stations placed on top of vehicles, and on testing how these insights help local governments better manage the challenges around air quality. The data sources are: data from mobile air quality measurement stations that are placed on top of vehicles and drive through all the streets of the city; open data published by the city of Valladolid; and aggregated and anonymized mobility data generated from a telecommunications network. The city of Valladolid has agreed to run a pilot for 6 months to evaluate the air quality platform developed in the first micro project with the City of Madrid. The interest of the city council is to monitor the air quality in the low-emission area and beyond, to understand whether they can take additional measures to improve air quality beyond what they are currently planning.

Results Summary

– A report with the results of the evaluation of the prototype.

– Adaptations to the system based on feedback from the local city.

– Dissemination activities jointly by Telefonica and the local city.

– Potential policy measures to improve the air quality in Valladolid.

Contact person: Dirk Helbing (dirk.helbing@gess.ethz.ch)

Internal Partners:

  1. ETHZ, Elisabeth Stockinger, estockinger@ethz.ch
  2. FBK, Riccardo Gallotti, rgallotti@fbk.eu  

 

Despite all efforts to mitigate mis- and disinformation, they continue to be a substantial problem. This project contributed to the literature on mis- and disinformation on social media with an analysis of the interaction effects between temporal rhythms of disinformation and social media usage in the context of the COVID-19 pandemic. Specifically, we consider how the spread of mis- and disinformation on Twitter varies throughout the day, and whether there are individual differences in users’ propensity to spread mis- and disinformation on Twitter based on their activity patterns.

Results Summary

We analysed a comprehensive dataset, examining the reliability of information relating to the COVID-19 pandemic shared on Twitter. We clustered users into pseudo-chronotypes based on their activity patterns on Twitter throughout the day, and identified times of waking and prolonged waking states per cluster, as well as times of increased susceptibility. We aggregated our results into a paper and submitted an extended abstract to the International Conference on Computational Social Science (https://www.ic2s2.org/ , accepted), and we are preparing a paper for submission to a reputable journal. The project is related to the call on “Measuring, modeling, predicting the individual and collective effects of different forms of AI influence in socio-technical systems at scale,” addressing the human dimension of circadian and diurnal rhythms within social networks. Elisabeth Stockinger from ETHZ spent a 3-week mobility period at FBK in Trento, Italy, to work with the partner directly. She has continued the collaboration as a virtual visiting student.
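For illustration, the sketch below clusters synthetic hourly activity profiles into pseudo-chronotypes with k-means. The 24-bin profiles, the synthetic data, and the number of clusters are assumptions for this example, not the paper’s exact procedure (the actual analysis code is linked under Tangible Outcomes).

```python
# Hedged sketch: clustering users into pseudo-chronotypes from hourly
# activity profiles. Data and parameters are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_users, hours = 500, 24
grid = np.arange(hours)

# Synthetic hourly activity profiles, each peaking at a user-specific hour.
peaks = rng.choice([8, 13, 19, 1], size=n_users)
offset = (grid[None, :] - peaks[:, None] + 12) % 24 - 12   # circular hour offset
profiles = np.exp(-0.5 * offset**2 / 9.0)
profiles /= profiles.sum(axis=1, keepdims=True)            # shares of daily activity

# Cluster users into pseudo-chronotypes by the shape of their daily rhythm.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(profiles)
for c in range(4):
    print(f"pseudo-chronotype {c}: peak at hour {np.argmax(km.cluster_centers_[c])}")
```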

Tangible Outcomes

  1. Stockinger, E., Gallotti, R. & Hausladen, C.I. Early morning hour and evening usage habits increase misinformation-spread. Sci Rep 14, 20233 (2024). https://doi.org/10.1038/s41598-024-69447-8 
  2. The code associated to the article can be found on Github: https://github.com/ethz-coss/diurnal-misinformation 
  3. The project was presented at the 9th International Conference on Computational Social Science (2023, http://www.ic2s2-2023.org/ ) and at the 34th Annual Meeting of the Society for Light Treatment and Biological Rhythms (2023, https://sltbr.org/ ).

Contact person: Elisabeth Stockinger (elisabeth.stockinger@gess.ethz.ch )

Internal Partners:

  1. ETHZ, Elisabeth Stockinger, elisabeth.stockinger@gess.ethz.ch
  2. UMU, Virginia Dignum, virginia@cs.umu.se
  3. TU Delft, Jonne Maas, J.J.C.Maas@tudelft.nl

External Partners:

  1. University of Amsterdam Christopher Talvitie, christalvitie@gmail.com

 

Voting Advice Applications (VAAs) are increasingly popular throughout Europe. While commonly portrayed as impartial tools to measure issue agreement, their developers must make several design decisions at each step of the design process. Such decisions may include the selection of issues to incorporate into a questionnaire, the placement of candidates or parties on a political spectrum, or the algorithm measuring the distance between user and candidate. These decisions have to be made with great care, as they can cause substantial differences in the resulting list of recommendations.
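To illustrate how much the matching algorithm alone can matter, the sketch below ranks the same hypothetical parties for the same user under two distance metrics that are both common in VAA matching, and the resulting rankings differ. All positions are invented for this example and do not correspond to any real VAA or party.

```python
# Illustrative sketch: the choice of matching metric alone changes a
# VAA's recommendation ranking. All positions are invented.
import numpy as np

# Answers on a 5-point scale to four statements.
user = np.array([1, 1, 1, 1])
parties = {
    "Party A": np.array([3, 1, 1, 1]),   # one large disagreement
    "Party B": np.array([2, 2, 2, 1]),   # several small disagreements
    "Party C": np.array([5, 5, 5, 5]),   # disagrees across the board
}

def ranking(metric):
    return sorted(parties, key=lambda p: metric(user, parties[p]))

city_block = lambda u, v: np.abs(u - v).sum()     # a common matching metric
euclidean = lambda u, v: np.linalg.norm(u - v)    # another common choice

print("city-block ranking:", ranking(city_block))  # ['Party A', 'Party B', 'Party C']
print("euclidean ranking: ", ranking(euclidean))   # ['Party B', 'Party A', 'Party C']
```

City-block distance penalizes several small disagreements more than one large one, while Euclidean distance does the opposite, so the top recommendation flips between Party A and Party B for identical inputs.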

As there is no known ground truth by which to measure different VAA designs, it is imperative that their design follows guidelines and best practices of pro-ethical design. Similarly, as VAAs aim to directly inform voter decisions in a democratic election, users must be able to trust the fulfillment of these guidelines based on the information available to them.

Results Summary

Firstly, we conduct an ethics assessment of several VAAs used in European countries, representing different design strategies. This assessment centers on trustworthiness in the eyes of the electorate, with an emphasis on user-centric documentation. By using the Ethics Guidelines for Trustworthy AI (EGTAI), we refer to a framework that is acknowledged by the democratic institutions of the countries hosting the VAAs and the respective elections, contributing to the democratic validity of a normative analysis of tools embedded in electoral processes.

Secondly, we identify the abstract criteria that a trustworthy VAA must fulfil according to the EGTAI, and accordingly evaluate a representative set of VAAs within Europe (StemWijzer, Kieskompas, What2Vote, Smartvote, Wahl-O-Mat, Aftonbladets valkompass, HS Vaalikone, and SVT Nyheters valkompass). None of the VAAs under investigation scored highly on the adapted EGTAI assessment list. For several requirements, many sub-requirements are not fulfilled by any VAA in this study. In particular, scores on societal and environmental well-being (R6) and accountability (R7) are low, without significant differences between VAAs.

Thirdly, we present a list of recommendations based on these issues to contribute to future VAA development efforts. Across VAAs, we identify the need for improvement in (i) transparency regarding the subjectivity of recommendations, (ii) diversity of stakeholder participation, (iii) user-centric documentation of algorithm, and (iv) disclosure of the underlying values and assumptions.

Tangible Outcomes

  1. Stockinger, E., Maas, J., Talvitie, C. et al. Trustworthiness of voting advice applications in Europe. Ethics Inf Technol 26, 55 (2024). https://doi.org/10.1007/s10676-024-09790-6 
  2. Dataset showing the evaluated VAAs and the frameworks used to evaluate them https://static-content.springer.com/esm/art%3A10.1007%2Fs10676-024-09790-6/MediaObjects/10676_2024_9790_MOESM1_ESM.pdf 
  3. The code used for the analysis: https://github.com/ethz-coss/vaa-egtai-compliance 
  4. A video explaining Robust and Value-Based Political Guidance to the general public https://www.youtube.com/watch?v=5riTfDuRTlk&ab_channel=ComputationalSocialScienceETH
  5. The project was presented at:
    1. the Digital Democracy Workshop (2023, http://digdemlab.io/event/wk2023 )
    2. the Workshop on Co-Creating the Future: Participatory Cities and Digital Governance (2023, http://www.participatorycities.net )
    3. the 1st Twin Workshop on Ethics of Smart Cities and Smart Societies (2023, http://coss.ethz.ch/research/CoCi/TwinWorkshop )
    4. the HumanE AI Conference (2022, http://www.humane-ai.eu/event/humane-ai-conference ).

 

Contact person: Richard Niestroj, VW Data:Lab Munich; Yuanting Liu (liu@fortiss.org; yuanting.liu@fortiss.org)

Internal Partners:

  1. Volkswagen AG, Richard Niestroj
  2. Consiglio Nazionale delle Ricerche (CNR), Mirco Nanni
  3. fortiss GmbH, Yuanting Liu  

 

The goal is to build a simulation environment to test connected-car-data-based applications. AI-based car data applications save people’s time by guiding drivers and vehicles intelligently. This leads to a reduction of the environmental footprint of the transportation sector by reducing local and global emissions. The development and use of a simulation environment enables data privacy compliance in the development of AI-based applications.

Tangible Outcomes

  1. Video presentation summarizing the project

Contact person: Haris Papageorgiou (Athena RC) (haris@athenarc.gr)

Internal Partners:

  1. ATHENA RC / ILSP, Haris Papageorgiou, haris@athenarc.gr
  2. DFKI, Georg Rehm, georg.rehm@dfki.de

 

Knowledge discovery offers numerous challenges and opportunities. In the last decade, a significant number of applications have emerged that rely on evidence from the scientific literature. AI methods offer innovative ways of applying knowledge discovery methods to the scientific literature, facilitating automated reasoning, discovery, and decision making on data. This micro-project focuses on the task of question answering (QA) for the biomedical domain. Our starting point is a neural QA engine developed by ILSP that addresses experts’ natural language questions by jointly applying document retrieval and snippet extraction on a large collection of PubMed articles, thus facilitating medical experts in their work. DFKI will augment this system with a knowledge graph integrating the output of document analysis and segmentation modules. The knowledge graph will be incorporated into the QA system and used for exact answers and more efficient human-AI interactions. We primarily focus on scientific articles on COVID-19 and SARS-CoV-2.
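To make the intended architecture concrete, here is a minimal, hedged sketch of such a pipeline: document retrieval, snippet extraction, and a knowledge-graph lookup for exact answers. Every component below is a toy stand-in for the ILSP neural engine and the DFKI knowledge graph; all names and data are invented for illustration.

```python
# Schematic sketch of a biomedical QA pipeline with a knowledge-graph
# component for exact answers. All parts are toy stand-ins.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Answer:
    snippets: list = field(default_factory=list)
    exact: Optional[str] = None

def retrieve(question, corpus, k=2):
    # Stand-in: token overlap instead of the neural document retriever.
    overlap = lambda d: len(set(question.lower().split()) & set(d.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def extract_snippets(question, docs):
    # Stand-in: a neural model extracts answer-bearing snippets in reality.
    return [s.strip() for d in docs for s in d.split(".") if s.strip()][:3]

def kg_lookup(question, kg):
    # Stand-in: match a (subject, relation) pair against the knowledge
    # graph to produce an exact answer.
    q = question.lower()
    for (subject, relation), obj in kg.items():
        if subject in q and relation in q:
            return obj
    return None

corpus = ["SARS-CoV-2 binds the ACE2 receptor. It causes COVID-19."]
kg = {("sars-cov-2", "receptor"): "ACE2"}
question = "Which receptor does SARS-CoV-2 use?"
print(Answer(extract_snippets(question, retrieve(question, corpus)),
             kg_lookup(question, kg)))
```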

Tangible Outcomes

  1. Video presentation summarizing the project