LLOD (Linguistic Linked Open Data) is a generic name for a set of mutually interconnected language resources linked through ontological relations. The connections between concepts, and between concepts and their expression in natural language, make these resources suitable for both research and industrial applications in the areas of content analysis, natural language understanding, (language- and knowledge-based) inferencing, and other tasks. In this microproject, the concrete work consists of converting the SynSemClass project dataset (in part a result of a previous HumanE AI Net microproject called META-O-NLU) into LLOD, connecting it to the huge amount of interlinked data already available. One partner is involved in the Prêt-à-LLOD H2020 project, making this microproject synergistic in nature and multiplying the results of previous projects. Partners are also involved in the COST Action “European network for Web-centered linguistic data science” (NexusLinguarum).
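
As a rough illustration of the kind of conversion involved, the sketch below maps a hypothetical SynSemClass-style XML class entry to OntoLex-Lemon RDF with rdflib. The element and attribute names, the class identifier, and the SynSemClass namespace URI are illustrative assumptions, not the project's actual schema or namespaces.

```python
# Hedged sketch: converting a SynSemClass-style XML class entry to
# OntoLex-Lemon RDF with rdflib. Element/attribute names and the
# SynSemClass namespace are illustrative assumptions.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
SSC = Namespace("https://example.org/synsemclass/")  # placeholder namespace

XML_SNIPPET = """
<veclass id="vec00042">
  <classmember lemma="answer" lang="en"/>
  <classmember lemma="antworten" lang="de"/>
</veclass>
"""

g = Graph()
g.bind("ontolex", ONTOLEX)
g.bind("ssc", SSC)

root = ET.fromstring(XML_SNIPPET)
cls = URIRef(SSC[root.get("id")])
g.add((cls, RDF.type, ONTOLEX.LexicalConcept))

for member in root.findall("classmember"):
    lemma, lang = member.get("lemma"), member.get("lang")
    entry = URIRef(SSC[f"{lang}-{lemma}"])
    form = URIRef(SSC[f"{lang}-{lemma}-form"])
    g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
    g.add((entry, ONTOLEX.canonicalForm, form))
    g.add((form, RDF.type, ONTOLEX.Form))
    g.add((form, ONTOLEX.writtenRep, Literal(lemma, lang=lang)))
    g.add((entry, ONTOLEX.evokes, cls))  # link lexical entry to the class

print(g.serialize(format="turtle"))
```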

Output

– SynSemClass (min. 1,000 classes) in the LLOD / PreMOn / OntoLex-Lemon ontological model, to be integrated into the LLOD cloud, covering 4 languages (CZ, EN, DE and ES)

– Tools for conversion from the original XML format to RDF and OWL, as well as for editing and checking

– Paper at a major 2023 conference (*ACL, *AI, LREC, ISWC, LDK…) or dedicated workshop (LAW, *SEM, Linked Data in Linguistics (LDL)), or in the LRE Journal or Semantic Web Journal

Project Partners:

  • Charles University Prague, Jan Hajic
  • German Research Centre for Artificial Intelligence (DFKI), Thierry Declerck

 

Primary Contact: Jan Hajič, Charles University

While Natural Language Processing (NLP) is already a well-developed field, the problem of using NLP methods for narrative analysis has not yet been satisfactorily solved. In the spirit of the HumanE-AI project, in the current microproject we lay the groundwork for a new approach to narrative analysis, providing a gray-box (at least partially explainable) NLP model tailored to facilitating the work of qualitative text/narrative analysts. We conduct a proof-of-concept study combining existing standard NLP methods (e.g. topic modeling, entity recognition) with a qualitative analysis of narratives about smart cities and related technologies, and use this experience to conceptualize our approach to narrative analysis, in particular with respect to problems that are not easily solved with existing tools. Crucially, this initial research will be followed up by a subsequent microproject dedicated to formalizing our approach to narrative analysis and developing its open-source implementation (in Python).
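
For concreteness, a minimal sketch of the kind of off-the-shelf building blocks mentioned above (topic modeling plus named entity recognition) is shown below. The library choices (scikit-learn's LDA, spaCy's small English model) and the tiny example corpus are assumptions for illustration, not the project's actual pipeline.

```python
# Hedged proof-of-concept sketch: topic modeling + entity recognition over a
# tiny, made-up corpus of smart-city articles.
import spacy
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

articles = [
    "The city council deployed smart sensors to monitor traffic flow.",
    "Residents raised privacy concerns about data collected by smart meters.",
    "A new platform aggregates open data from urban mobility services.",
]

# Topic modeling: a small LDA over a bag-of-words representation.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(articles)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:]]
    print(f"topic {i}: {top_terms}")

# Entity recognition: spaCy's small English model (assumed to be installed).
nlp = spacy.load("en_core_web_sm")
for doc in nlp.pipe(articles):
    print([(ent.text, ent.label_) for ent in doc.ents])
```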

Output

Submitted paper on narrative analysis of articles on smart cities and related technologies

Project Partners:

  • Institut national de recherche en sciences et technologies du numérique (INRIA), James Crowley

 

Primary Contact: Andrzej Nowak, University of Warsaw

Reaching movements towards targets located in 3-dimensional space are fast and accurate. Although they may seem simple and natural, these movements require the integration of different sources of sensory information processed in real time by the brain. We will apply machine learning techniques to address the following questions: i) at which point of the movement is it possible to accurately predict the final target goal, in static and dynamic conditions? ii) given the behavioural-level hypothesis that direction and depth do not rely on shared brain networks during movement execution but are processed separately, can targets located along the horizontal or sagittal dimension be predicted with the same accuracy? Finally, we will frame our results in the context of improving user-agent interactions, moving from a description of human movement to a possible implementation in social/collaborative AI.

Output

A descriptive model of reaching movements in static and dynamic conditions

A research paper submitted to a relevant journal in the field

Presentations

Project Partners:

  • Università di Bologna (UNIBO), Patrizia Fattori
  • German Research Centre for Artificial Intelligence (DFKI), Elsa Kirchner

Primary Contact: Patrizia Fattori, University of Bologna

Main results of micro project:

We measured the kinematics of reaching movements towards visual targets located in 3D space in 12 participants. The targets could remain static or be perturbed at movement onset. Experiment 1: using a supervised recurrent neural network, we tested at what point during the movement it was possible to accurately detect the reaching endpoint given the instantaneous x, y, z coordinates of the index finger and wrist. The classifier successfully predicted static and perturbed reaching endpoints with progressively increasing accuracy across movement execution (mean accuracy = 0.56 ± 0.19, chance level = 0.16). Experiment 2: using the same network architecture, we trained a regressor to predict the future x, y, z positions of the index finger and wrist given the actual x, y, z positions. The x, y and z components of the index finger and wrist showed an average R-squared higher than 0.9, suggesting an optimal reconstruction of the future trajectory given the actual one.
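
A minimal sketch of the kind of recurrent classifier used in Experiment 1 is given below. The architecture, layer sizes, and the assumption of six candidate targets (inferred from the reported chance level of 0.16) are illustrative, not the exact model used in the study.

```python
# Hedged sketch of a recurrent endpoint classifier over index/wrist kinematics.
import torch
import torch.nn as nn

class EndpointClassifier(nn.Module):
    def __init__(self, n_features=6, hidden=64, n_targets=6):
        super().__init__()
        # Input: sequences of (x, y, z) coordinates of index finger and wrist.
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_targets)

    def forward(self, x):              # x: (batch, time, 6)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])   # classify from the last observed time step

model = EndpointClassifier()
batch = torch.randn(8, 50, 6)          # 8 trials, 50 time samples, 6 coordinates
logits = model(batch)                  # (8, 6) scores over candidate endpoints
# A regressor for Experiment 2 would replace the classification head with a
# linear layer predicting the future (x, y, z) positions instead.
```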

Contribution to the objectives of HumaneAI-net WPs

In this microproject, the action goal could change its spatial position during action execution in an unpredictable way. Using a double neural network approach, the present results contribute to the objectives of Task 2.2 of WP2 at two levels. At the first level, we described the temporal structure of action goal recognition in static and perturbed reaching conditions from the movement kinematics of the index finger and wrist in 3-dimensional space. This first achievement contributes to the recognition of the action goal in contexts that are either known a priori (static targets) or not (perturbed targets). At the second level, we predicted the future trajectory of the movement given the previous action path. This second achievement lays the basis for the design of a system able to monitor activity in a natural human workspace and predict future actions in situations that could require human-AI interaction.

Tangible outputs


Methods for injecting constraints into Machine Learning (ML) models can help bridge the gap between symbolic and subsymbolic models, and address fairness and safety issues in data-driven AI systems. The recently proposed Moving Targets approach achieves this via a decomposition in which a classical ML model deals with the data and a separate constraint solver handles the constraints.

Different applications call for different constraints, solvers, and ML models: this flexibility is a strength of the approach, but it also makes it difficult to set up and analyze.

Therefore, this project will rely on the AI Domain Definition Language (AIDDL) framework to obtain a flexible implementation of the approach, making it simpler to use and allowing the exploration of more case studies, different constraint solvers, and algorithmic variants. We will use this implementation to investigate new constraint types integrated with the Moving Targets approach, backed by different solver technologies (e.g. SMT, MINLP, CP).
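
To make the decomposition concrete, the toy sketch below alternates a learner step and a "master" step. The simple projection used here stands in for a real constraint solver and is only an illustration of the general scheme under simplified assumptions, not the authors' exact algorithm or loss formulation.

```python
# Toy sketch of the alternating learner/master scheme behind Moving Targets.
# The "master" here is a simple projection (enforce y <= upper_bound) rather
# than a real constraint solver.
import numpy as np
from sklearn.linear_model import Ridge

def master_step(y_pred, y_true, upper_bound=1.0, alpha=0.5):
    # Adjust the targets toward feasibility while staying close to both the
    # model predictions and the original labels.
    z = alpha * y_true + (1 - alpha) * y_pred
    return np.minimum(z, upper_bound)

def moving_targets(X, y, iterations=5):
    model = Ridge()
    z = y.copy()
    for _ in range(iterations):
        model.fit(X, z)                        # learner step: fit adjusted targets
        z = master_step(model.predict(X), y)   # master step: restore feasibility
    return model

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, -0.2, 1.0]) + rng.normal(scale=0.1, size=200)
model = moving_targets(X, y)
print(model.predict(X[:5]))
```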

Output

Stand-alone moving targets system distributed via the AI4EU platform

Interactive tutorial to be available on the AI4EU platform

Scientific paper discussing the outcome of our evaluation and the resulting system

Presentations

Project Partners:

  • Örebro University (ORU), Uwe Köckemann
  • Università di Bologna (UNIBO), Michele Lombardi

 

Primary Contact: Uwe Köckemann, Örebro University

Main results of micro project:

The moving targets method integrates machine learning and constraint optimization to enforce constraints on a machine learning model. The AI Domain Definition Language (AIDDL) provides a modeling language and framework for integrative AI.

We have implemented the moving targets algorithm in the AIDDL framework for integrative AI. This has benefits for modeling, experimentation, and usability. On the modeling side, it enables us to express “moving targets” applications as regular machine learning problems extended with constraints and a loss function. On the experimentation side, we can now easily switch the learning and constraint solvers used by the “moving targets” algorithm, and we have added support for multiple constraint types. Finally, we made the “moving targets” method easier to use, since it can now be controlled through a small model written in the AIDDL language.

Our tangible outcomes are listed below.

Contribution to the objectives of HumaneAI-net WPs

T1.1 (Linking Symbolic and Subsymbolic Learning)

Moving targets provides a convenient approach to enforce constraint satisfaction in subsymbolic ML methods, within the limits of model bias. Our AIDDL integration pulls this idea all the way to the modeling level where, e.g., a fairness constraint can be added with a single line.

T1.4 (Compositionality and Auto ML)

The moving targets method, combined with an easy way of modeling constraints via AIDDL, may increase trust in fully automated machine learning pipelines.

T2.6 (Dealing with Lack of Training Data)

Training data may be biased in a variety of ways depending on how it was collected. We provide a convenient way to experiment with constraining such data sets and possibly overcome unwanted bias due to lack of data.

Tangible outputs