Reaching movements towards targets located in 3-dimensional space are fast and accurate. Although they may seem simple and natural, they require the integration of different sensory information processed in real time by the brain. We will apply machine learning techniques to address the following questions: i) at which point of the movement can the final target be accurately predicted, in both static and dynamic conditions? ii) given the behavioural-level hypothesis that direction and depth do not rely on shared brain networks during movement execution but are processed separately, can targets located along the horizontal and sagittal dimensions be predicted with the same accuracy? Finally, we will frame our results in the context of improving user-agent interaction, moving from a description of human movement to a possible implementation in social/collaborative AI.

Output

A descriptive model of reaching movements in static and dynamic conditions

A research paper submitted to a relevant journal in the field

Presentations

Project Partners:

  • Università di Bologna (UNIBO), Patrizia Fattori
  • German Research Centre for Artificial Intelligence (DFKI), Elsa Kirchner

 

Primary Contact: Patrizia Fattori, University of Bologna

Main results of micro project:

We measured the kinematics of reaching movements towards visual targets located in 3D space in 12 participants. The targets could remain static or be perturbed at movement onset. Experiment 1: using a supervised recurrent neural network, we tested at what point during the movement the reaching endpoint could be accurately detected given the instantaneous x, y, z coordinates of the index finger and wrist. The classifier successfully predicted static and perturbed reaching endpoints with progressively increasing accuracy across movement execution (mean accuracy = 0.56 ± 0.19, chance level = 0.16). Experiment 2: using the same network architecture, we trained a regressor to predict the future x, y, z positions of the index finger and wrist given the current ones. The x, y and z components of both index finger and wrist showed an average R-squared higher than 0.9, suggesting an excellent reconstruction of the future trajectory from the current one.
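
For illustration, a minimal sketch (not the project's actual code) of the two networks described above, assuming a PyTorch LSTM over the per-frame x, y, z coordinates of the index finger and wrist; the six-class output is an assumption derived from the reported chance level of 0.16.

import torch
import torch.nn as nn

class ReachRNN(nn.Module):
    """Recurrent backbone; the head is either a target classifier (Experiment 1)
    or a regressor of future index/wrist coordinates (Experiment 2)."""
    def __init__(self, n_features=6, hidden=64, n_outputs=6):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):                       # x: (batch, time, 6)
        out, _ = self.rnn(x)
        return self.head(out)                   # per-time-step prediction

# Experiment 1: classify the reaching endpoint at every frame of the movement.
classifier = ReachRNN(n_outputs=6)              # assumed 6 targets (chance = 1/6 ~ 0.16)
cls_loss = nn.CrossEntropyLoss()

# Experiment 2: regress future (x, y, z) of index and wrist from the current ones.
regressor = ReachRNN(n_outputs=6)
reg_loss = nn.MSELoss()

# Hypothetical batch: 32 trials, 100 frames, 6 kinematic coordinates per frame.
frames = torch.randn(32, 100, 6)
targets = torch.randint(0, 6, (32, 100))        # endpoint label repeated per frame
future = torch.randn(32, 100, 6)                # coordinates some frames ahead

loss1 = cls_loss(classifier(frames).reshape(-1, 6), targets.reshape(-1))
loss2 = reg_loss(regressor(frames), future)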

Contribution to the objectives of HumaneAI-net WPs

In this microproject, the action goal could change its spatial position during action execution, in an unpredictable way. Using a double neural network approach, the present results contribute to the objectives of Task 2.2 of WP2 at two levels. At the first level, we described the temporal structure of action goal recognition in static and perturbed reaching conditions from the movement kinematics of the index finger and wrist in 3-dimensional space. This first achievement contributes to the recognition of the action goal in a context that is either known a priori (static targets) or not (perturbed targets). At the second level, we predicted the future trajectory of the movement given the previous action path. This second achievement lays the groundwork for the design of a system able to monitor activity in a natural human workspace and predict future actions in situations that may require human-AI interaction.

Tangible outputs

In this micro-project, we propose investigating human recollection of team meetings and how conversational AI could use this information to create better team cohesion in virtual settings.

Specifically, we would like to investigate how a person's emotion, personality, relationship to fellow teammates, goal and position in the meeting influence how they remember the meeting. We want to use this information to create memory-aware conversational AI that could leverage such data to increase team cohesion in future meetings.

To achieve this goal, we plan first to record a multi-modal dataset of team meetings in a virtual setting. Second, to administer questionnaires to participants at different time intervals after each session. Third, to annotate the corpus. Fourth, to carry out an initial corpus analysis to inform the design of memory-aware conversational AI.

This micro-project will contribute to a longer-term effort in building a computational memory model for human-agent interaction.

Output

A corpus of repeated virtual team meetings (6 sessions, spaced 1 week apart)

Manual annotations (people's recollection of the team meeting, etc.)

Automatic annotations (e.g. eye-gaze, affect, body posture, etc.)

A paper describing the corpus and the insights on the design of memory-aware agents gained from initial analysis

Project Partners:

  • TU Delft, Catholijn Jonker
  • Eötvös Loránd University (ELTE), Andras Lorincz

 

Primary Contact: Catharine Oertel, TU Delft

Main results of micro project:

1) A corpus of repeated virtual team meetings (4 sessions, spaced 4 days apart).
2) Manual annotations (people's recollection of the team meeting, etc.)
3) Automatic annotations (e.g. eye-gaze, affect, body posture, etc.)
4) A preliminary paper describing the corpus and the insights on the design of memory-aware agents gained from initial analysis

Contribution to the objectives of HumaneAI-net WPs

In this micro-project, we propose investigating human recollection of team meetings and how conversational AI could use this information to create better team cohesion in virtual settings.
Specifically, we would like to investigate how a person's emotion, personality, relationship to fellow teammates, goal and position in the meeting influence how they remember the meeting. We want to use this information to create memory-aware conversational AI that could leverage such data to increase team cohesion in future meetings.
To achieve this goal, we plan first to record a multi-modal dataset of team meetings in a virtual setting. Second, to administer questionnaires to participants at different time intervals after each session. Third, to annotate the corpus. Fourth, to carry out an initial corpus analysis to inform the design of memory-aware conversational AI.
This micro-project will contribute to a longer-term effort in building a computational memory model for human-agent interaction.

Tangible outputs

  • Dataset: MEMO – Catharine Oertel
  • Publication: MEMO dataset paper – Catharine Oertel
  • Program/code: Memo feature extraction code – Andras Lorincz

Transformers and self-attention (Vaswani et al., 2017) have become the dominant approach for natural language processing (NLP), with systems such as BERT (Devlin et al., 2019) and GPT-3 (Brown et al., 2020) rapidly displacing more established RNN and CNN structures with an architecture composed of stacked encoder-decoder modules using self-attention.

This micro-project will assess tools and data sets for experiments and provide a first demonstration of the potential of transformers for multimodal perception and multimodal interaction. We explore research challenges, benchmark data sets and performance metrics for multimodal perception and modeling tasks such as (1) audio-visual narration of scenes, actions and activities, (2) audio-video recordings of lectures and TV programs, and (3) perception and evocation of engagement, attention, and emotion.

(The full description and bibliography exceed 200 words; available on request.)
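
As a purely illustrative sketch of the core mechanism discussed above, the following code implements scaled dot-product self-attention (Vaswani et al., 2017) over a concatenation of hypothetical audio and video token sequences; the dimensions and projections are assumptions, not part of the micro-project's deliverables.

import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (batch, tokens, d_model); w_*: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)   # (batch, tokens, tokens)
    return F.softmax(scores, dim=-1) @ v

# For multimodal perception, tokens from different modalities (here hypothetical
# audio frames and video patches) can be concatenated along the token dimension,
# so that every token attends across modalities.
audio = torch.randn(2, 50, 128)
video = torch.randn(2, 20, 128)
tokens = torch.cat([audio, video], dim=1)
w_q, w_k, w_v = (torch.randn(128, 64) for _ in range(3))
out = self_attention(tokens, w_q, w_k, w_v)     # shape: (2, 70, 64)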

Presentations

Project Partners:

  • Institut national de recherche en sciences et technologies du numérique (INRIA), James Crowley
  • Eötvös Loránd University (ELTE), Andras Lorincz
  • Université Grenoble Alpes (UGA), Fabien Ringeval
  • Centre national de la recherche scientifique (CNRS), François Yvon
  • Institut “Jožef Stefan” (JSI), Marko Grobelnik

Primary Contact: James Crowley, INRIA

Main results of micro project:

This micro-project has explored the potential of transformers for multimodal perception and interaction to support Humane AI, providing:
1) A tutorial on the use of transformers for multimodal interaction,
2) A report on available tools for experiments, and
3) A survey of data sets and research challenges for experiments.
These results open a new approach to building practical tools for interaction and collaboration between people and intelligent systems.

Contribution to the objectives of HumaneAI-net WPs

This microproject has promoted the use of transformers and self-attention for multimodal interaction by Humane AI Net researchers, by identifying relevant tools and benchmark data sets, by providing tutorials and training materials for education, and by identifying research challenges for multimodal perception and interaction with transformers.

Tangible outputs


Methods for injecting constraints in Machine Learning (ML) can help bridge the gap between symbolic and subsymbolic models, and address fairness and safety issues in data-driven AI systems. The recently proposed Moving Targets approach achieves this via a decomposition, where a classical ML model deals with the data and a separate constraint solver deals with the constraints.

Different applications call for different constraints, solvers, and ML models: this flexibility is a strength of the approach, but it also makes it difficult to set up and analyze.

Therefore, this project will rely on the AI Domain Definition Language (AIDDL) framework to obtain a flexible implementation of the approach, making it simpler to use and allowing the exploration of more case studies, different constraint solvers, and algorithmic variants. We will use this implementation to investigate new constraint types and solver technologies (e.g. SMT, MINLP, CP) integrated with the Moving Targets approach.
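
A schematic sketch of the Moving Targets decomposition described above, with an ordinary scikit-learn learner and a placeholder "master step" standing in for a real constraint solver (SMT, MINLP, or CP); the toy non-negativity constraint and the iteration count are illustrative assumptions, not the published algorithm's exact formulation.

import numpy as np
from sklearn.linear_model import LinearRegression

def master_step(y_pred, y_true):
    """Placeholder constraint solver: pull predictions towards the labels and
    project onto a toy constraint set (non-negative targets). A real solver
    (SMT, MINLP, CP) would replace this function."""
    return np.maximum(0.0, 0.5 * y_pred + 0.5 * y_true)

def moving_targets(X, y, n_iters=5):
    model = LinearRegression()
    z = y.copy()                                  # adjusted targets, start from labels
    for _ in range(n_iters):
        model.fit(X, z)                           # learner step: ordinary ML training
        z = master_step(model.predict(X), y)      # master step: enforce constraints
    return model

# Synthetic data just to make the sketch runnable.
X = np.random.randn(200, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.randn(200)
constrained_model = moving_targets(X, y)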

Output

Stand-alone moving targets system distributed via the AI4EU platform

Interactive tutorial to be available on the AI4EU platform

Scientific paper discussing the outcome of our evaluation and the resulting system

Presentations

Project Partners:

  • Örebro University (ORU), Uwe Köckemann
  • Università di Bologna (UNIBO), Michele Lombardi

 

Primary Contact: Uwe Köckemann, Örebro University

Main results of micro project:

The moving targets method integrates machine learning and constraint optimization to enforce constraints on a machine learning model. The AI Domain Definition Language (AIDDL) provides a modeling language and framework for integrative AI.

We have implemented the Moving Targets algorithm in the AIDDL framework for integrative AI. This has benefits for modeling, experimentation, and usability. On the modeling side, it enables us to express Moving Targets applications as regular machine learning problems extended with constraints and a loss function. On the experimentation side, we can now easily switch the learning and constraint solvers used by the Moving Targets algorithm, and we have added support for multiple constraint types. Finally, we made the Moving Targets method easier to use, since it can now be controlled through a small model written in the AIDDL language.

Our tangible outcomes are listed below.

Contribution to the objectives of HumaneAI-net WPs

T1.1 (Linking Symbolic and Subsymbolic Learning)

Moving targets provides a convenient approach to enforce constraint satisfaction in subsymbolic ML methods, within the limits of model bias. Our AIDDL integration pulls this idea all the way to the modeling level where, e.g., a fairness constraint can be added with a single line.

T1.4 (Compositionality and Auto ML)

The Moving Targets method, combined with an easy way of modeling constraints via AIDDL, may increase trust in fully automated machine learning pipelines.

T2.6 (Dealing with Lack of Training Data)

Training data may be biased in a variety of ways depending on how it was collected. We provide a convenient way to experiment with constraining such data sets and possibly overcome unwanted bias due to lack of data.

Tangible outputs

Understanding the neural correlates of human physical activities, such as brain activity while lifting a weight, is important for providing safety in industrial factory environments. Moreover, different responses to the same task can be observed due to physiological and neurological differences among individuals. In this project, the patterns of change in EEG during the lifting of a weight will be investigated, and the EEG features that discriminate lifting will be analyzed. Classification between lifting and no-lifting cases will be performed using deep-learning-based methods. The outcomes of the project can be applied in industrial exoskeleton applications as well as in the physical rehabilitation of stroke patients.
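
As an illustration of the intended classification step, the sketch below shows a small 1-D convolutional network distinguishing lifting from no-lifting EEG windows; the channel count, window length, and architecture are assumptions, not the project's actual pipeline.

import torch
import torch.nn as nn

class LiftingCNN(nn.Module):
    """Small 1-D CNN over multi-channel EEG windows (channels x samples)."""
    def __init__(self, n_channels=32, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=64, padding=32),
            nn.BatchNorm1d(16), nn.ELU(), nn.AvgPool1d(4),
            nn.Conv1d(16, 32, kernel_size=16, padding=8),
            nn.BatchNorm1d(32), nn.ELU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                          # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

# Hypothetical 2-second windows at 250 Hz from 32 electrodes.
model = LiftingCNN()
windows = torch.randn(8, 32, 500)
logits = model(windows)                            # (8, 2): lifting vs. no-lifting scores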

Output

Dataset Repository (Share on AI4EU)

Conference Paper / Journal Article

Presentations

Project Partners:

  • Türkiye Bilimsel ve Teknolojik Araştırma Kurumu (TUBITAK), Sencer Melih Deniz
  • German Research Centre for Artificial Intelligence (DFKI), Paul Lukowicz

 

Primary Contact: Sencer Melih Deniz, TUBITAK BILGEM

Main results of micro project:

The project has run for almost 50% of its allocated time and is not yet complete. Within this period, the following steps were completed:
1. The experimental paradigm was designed to achieve the project goals.
2. Study preparation, including hardware and software development, was completed.
3. Data recording sessions have started and are in progress. Data from a total of 10 participants have been obtained so far. More participants will be included in data acquisition to achieve the desired result.

The dataset and results will be evaluated once the data acquisition is completed.

Contribution to the objectives of HumaneAI-net WPs

This project is also part of WP2 with task numbers T2.2, T2.3.

This project aims to contribute to WP2 and WP6 by investigating the use of EEG signals and AI models in detecting various aspects of physical activity during weightlifting. Investigating the patterns of change in EEG during weightlifting is intended to provide more information for predicting intended and actual human actions during sensorimotor tasks. In doing so, a common research question is applied to more industrial use cases such as the control of exoskeletons. Moreover, the outcomes of the project can contribute to increasing the mobility of stroke patients and people with disabilities, in line with the themes of healthy living and mobility.

Tangible outputs

  • Program/code: Data Acquisition Software Code – Juan Felipe Vargas Colorado

Attachments

PresentMovie_NeuralMech_WLlifting_TUBITAK_DFK_Berlin.m4v

HumanE-AI research needs data to advance. Often, researchers struggle to make progress due to a lack of data. At the same time, collecting a rich and accurate dataset is no easy task. Therefore, we propose to share, through the AI4EU platform, the datasets already collected by different research groups. The datasets will be curated to be ready to use for researchers.

Possible extensions and variations of these datasets will also be generated using artificial techniques and published on the platform.

A performance baseline will be provided for each dataset, in the form of a publication reference, a developed model, or written documentation.

The relevant legal framework will be investigated, with specific attention to privacy and data protection, so as to highlight limitations and challenges in the use and extension of existing datasets, as well as in future multimodal data collection for perception modelling.
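
To illustrate the kind of ready-to-use baseline envisaged for each curated dataset, the sketch below segments a wearable-sensor recording into sliding windows and trains a simple reference classifier; the sensor layout, window size, and classifier choice are assumptions rather than the published baselines.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def sliding_windows(signal, labels, win=150, step=75):
    """Cut a (samples, channels) recording into fixed windows with majority labels."""
    X, y = [], []
    for start in range(0, len(signal) - win, step):
        seg = signal[start:start + win]
        X.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))  # simple features
        y.append(np.bincount(labels[start:start + win]).argmax())
    return np.array(X), np.array(y)

# Hypothetical recording: 10 minutes at 30 Hz, 9 inertial channels, 5 activity classes.
signal = np.random.randn(18000, 9)
labels = np.random.randint(0, 5, 18000)
X, y = sliding_windows(signal, labels)
split = int(0.8 * len(X))
clf = RandomForestClassifier(n_estimators=100).fit(X[:split], y[:split])
print("baseline macro F1:", f1_score(y[split:], clf.predict(X[split:]), average="macro"))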

Output

Publication of OPPORTUNITY dataset (and other datasets if time available) on the AI4EU platform. [lead: UoS, contributor: DFKI]

Publication of baseline performance pipeline for OPPORTUNITY dataset (and other datasets if time available) on AI4EU platform. [lead: UoS, contributor: DFKI]

Investigation of data loader and pipeline integration on AI4EU Experiments to load HAR datasets and pre-existing pipelines, with a focus on the OPPORTUNITY dataset (and other datasets if time available) [lead: UoS, contributor: DFKI]

Generation of dataset variations [lead: DFKI]

Survey publications describing the datasets and performance baselines [lead: DFKI, contributor: UoS]

Presentations

Project Partners:

  • University of Sussex (UOS), Mathias Ciliberto
  • German Research Centre for Artificial Intelligence (DFKI), Vitor Fortes Rey
  • Vrije Universiteit Brussel (VUB), Arno de Bois

 

Primary Contact: Mathias Ciliberto, University of Sussex

Main results of micro project:

Collection, curation and publication of the following datasets for Multi-Modal Perception and Modeling (WP2):
– OPPORTUNITY++:
  – Activities of daily living
  – Sensor-rich
  – New additional anonymised, annotated video with OpenPose tracks
– Capacitive Gym:
  – 7 popular gym workouts
  – 11 subjects, each recorded on 5 separate days
  – Capacitive sensors in 3 positions
  – New dataset
– HCI FreeHand dataset:
  – Freehand synthetic gestures
  – Multiple 3D accelerometers
– SkodaMini dataset:
  – Car manufacturing gestures
  – Multiple 3D accelerometers and gyroscopes
– Beach volleyball (https://ieee-dataport.org/open-access/wearlab-beach-volleyball-serves-and-games)

Contribution to the objectives of HumaneAI-net WPs

Multi-modal perception and modeling need data to progress, but recording a new, rich, and accurate dataset that allows comparative evaluations by the scientific community is no easy task. Therefore, we gathered rich datasets for the multimodal perception and modelling of human activities and gestures. We curated the datasets to make them easy to use for research, with clear documentation and file formats.
The highlight of this microproject is the OPPORTUNITY++ dataset of activities of daily living, a multi-modal extension of the well-established OPPORTUNITY dataset. We enhanced this dataset, which contains wearable sensor data, with previously unreleased data, including video and motion-tracking data, which makes OPPORTUNITY++ a truly multi-modal dataset with wider appeal, for example to the computer vision community.
In addition, we released other well-established activity datasets (HCI FreeHand and SkodaMini) as well as a dataset involving novel sensor modalities (CapacitiveGym) and a skill-assessment dataset (Wearlab BeachVolleyball).

Tangible outputs

Nowadays, ML models are used in decision-making processes for real-world problems by learning a function that maps observed features to decision outcomes. However, these models usually do not convey causal information about the associations in observational data; as a result, they are not easily understandable for the average user, and it is not possible to retrace the model's steps or rely on its reasoning. Hence, it is natural to investigate more explainable methodologies, such as causal discovery approaches, since they apply processes that mimic human reasoning. For this reason, we propose using such methodologies to create more explicable models that replicate human thinking and are easier for the average user to understand. More specifically, we suggest applying them to methods such as decision trees and random forests, which are by themselves highly explainable, correlation-based methods.
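
As an illustration of the causal-discovery reasoning referred to above, the sketch below contrasts plain correlation with a conditional-independence (partial-correlation) test on synthetic data with a common cause; it is a didactic example, not the micro-project's method.

import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    """Correlation between x and y after regressing out a single conditioning variable z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return stats.pearsonr(rx, ry)

# Toy common-cause structure Z -> X, Z -> Y: X and Y are correlated,
# but conditionally independent once Z is accounted for.
rng = np.random.default_rng(0)
z = rng.normal(size=2000)
x = z + 0.5 * rng.normal(size=2000)
y = z + 0.5 * rng.normal(size=2000)

print(stats.pearsonr(x, y))      # strong marginal correlation
print(partial_corr(x, y, z))     # near-zero: no direct causal edge between X and Y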

Output

1 Conference Paper

1 Prototype

Dataset Repository

Project Partners:

  • INESC TEC, Joao Gama
  • Università di Pisa (UNIPI), Dino Pedreschi
  • Consiglio Nazionale delle Ricerche (CNR), Fosca Giannotti

 

Primary Contact: Joao Gama, INESC TEC, University of Porto

Main results of micro project:

1) Journal paper submitted to WIREs Data Mining and Knowledge Discovery:
Methods and Tools for Causal Discovery and Causal Inference
Ana Rita Nogueira, Andrea Pugnana, Salvatore Ruggieri, Dino Pedreschi, João Gama
(under evaluation)

2) GitHub repository of datasets, software, and papers related to causal discovery and causal inference research

https://github.com/AnaRitaNogueira/Methods-and-Tools-for-Causal-Discovery-and-Causal-Inference

Contribution to the objectives of HumaneAI-net WPs

The HumanE-AI project envisions a society of increasing interaction between humans and artificial agents. Throughout the project, causal models are relevant for plausible models of human behavior, for human-machine explanations, and for upgrading machine-learning algorithms with causal-inference mechanisms.

The output of the micro-project presents an in-depth study of causal discovery and causal inference. Moreover, the GitHub repository of datasets, papers, and code will be an excellent resource for those who want to study the topic.

Tangible outputs