Reaching movements towards targets located in three-dimensional space are fast and accurate. Although they may seem simple and natural, they require the real-time integration of multiple sources of sensory information by the brain. We will apply machine learning techniques to address the following questions: i) at which point of the movement can the final target goal be accurately predicted, in static and dynamic conditions? ii) given the behavioural-level hypothesis that direction and depth do not rely on shared networks in the brain during movement execution but are processed separately, can targets located along the horizontal or sagittal dimension be predicted with the same accuracy? Finally, we will frame our results in the context of improving user-agent interactions, moving from a description of human movement to a possible implementation in social/collaborative AI.

Output

a descriptive model of reaching movements in static and dynamic conditions

a research paper submitted to a relevant journal in the field

Presentations

Project Partners:

  • Università di Bologna (UNIBO), Patrizia Fattori
  • German Research Centre for Artificial Intelligence (DFKI), Elsa Kirchner

Primary Contact: Patrizia Fattori, University of Bologna

Main results of micro project:

We measured the kinematics of reaching movements in 12 participants towards visual targets located in 3D space. The targets could remain static or be perturbed at movement onset. Experiment 1: using a supervised recurrent neural network, we tested at what point during the movement it was possible to accurately detect the reaching endpoint given the instantaneous x, y, z coordinates of the index finger and wrist. The classifier successfully predicted static and perturbed reaching endpoints with progressively increasing accuracy across movement execution (mean accuracy = 0.56 ± 0.19, chance level = 0.16). Experiment 2: using the same network architecture, we trained a regressor to predict the future x, y, z positions of the index finger and wrist given the current x, y, z positions. The x, y and z components of the index and wrist showed an average R-squared higher than 0.9, indicating an accurate reconstruction of the future trajectory from the current one.
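The classification setup of Experiment 1 can be illustrated with a minimal sketch. This is not the authors' actual architecture or trained model; it is a toy Elman-style recurrent network with randomly initialised weights, assuming 6 input features (x, y, z of index finger and wrist) and 6 hypothetical candidate targets (consistent with the reported chance level of 1/6 ≈ 0.16). It shows how a per-timestep target distribution can be read out at every point of the movement, so accuracy can be assessed as the movement unfolds.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 6   # x, y, z for index finger + x, y, z for wrist
N_HIDDEN = 16    # hidden-state size (illustrative choice)
N_TARGETS = 6    # hypothetical number of reaching targets (chance = 1/6)

# Randomly initialised weights stand in for trained parameters.
W_in = rng.standard_normal((N_HIDDEN, N_FEATURES)) * 0.1
W_rec = rng.standard_normal((N_HIDDEN, N_HIDDEN)) * 0.1
W_out = rng.standard_normal((N_TARGETS, N_HIDDEN)) * 0.1
b_h = np.zeros(N_HIDDEN)
b_out = np.zeros(N_TARGETS)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_trajectory(traj):
    """Run the recurrent network over a (T, 6) kinematic trajectory and
    return a (T, 6) array of per-timestep target probabilities, so the
    predicted endpoint can be evaluated at every point of the movement."""
    h = np.zeros(N_HIDDEN)
    probs = []
    for x_t in traj:
        h = np.tanh(W_in @ x_t + W_rec @ h + b_h)   # recurrent update
        probs.append(softmax(W_out @ h + b_out))     # target read-out
    return np.array(probs)

# Example: a synthetic 50-sample trajectory of index/wrist coordinates.
trajectory = rng.standard_normal((50, N_FEATURES))
p = classify_trajectory(trajectory)
print(p.shape)  # (50, 6): one distribution over targets per timestep
```

The regressor of Experiment 2 would follow the same recurrent structure, replacing the softmax read-out with a linear output layer that predicts the future x, y, z positions instead of a target class.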

Contribution to the objectives of HumaneAI-net WPs

In this microproject, the action goal could change its spatial position during action execution in an unpredictable way. Using a double neural network approach, the present results contribute to the objectives of Task 2.2 of WP2 at two levels. At the first level, we described the temporal structure of action goal recognition in static and perturbed reaching conditions from the movement kinematics of the index finger and wrist in three-dimensional space. This first achievement contributes to the recognition of the action goal in a context that is either known a priori (static targets) or not (perturbed targets). At the second level, we predicted the future trajectory of the movement given the previous action path. This second achievement lays the groundwork for the design of a system able to monitor activity in a natural human workspace and predict future actions in situations that may require human-AI interaction.

Tangible outputs