Project Description (150 words)

Methods for injecting constraints into Machine Learning (ML) can help bridge the gap between symbolic and subsymbolic models, and address fairness and safety issues in data-driven AI systems. The recently proposed Moving Targets approach achieves this via a decomposition in which a classical ML model handles the data and a separate constraint solver handles the constraints.

Different applications call for different constraints, solvers, and ML models: this flexibility is a strength of the approach, but it also makes the approach difficult to set up and analyze.

Therefore, this project will rely on the AI Domain Definition Language (AIDDL) framework to obtain a flexible implementation of the approach, making it simpler to use and allowing the exploration of more case studies, different constraint solvers, and algorithmic variants. We will use this implementation to investigate the integration of new constraint types (e.g. SMT, MINLP, CP) with the Moving Targets approach.

Output

  • Stand-alone Moving Targets system distributed via the AI4EU platform
  • Interactive tutorial to be available on the AI4EU platform
  • Scientific paper discussing the outcome of our evaluation and the resulting system
  • Presentations

Project Partners:

  • Örebro University (ORU), Uwe Köckemann
  • Università di Bologna (UNIBO), Michele Lombardi

 

Primary Contact: Uwe Köckemann, Örebro University

Main results of micro project:

The Moving Targets method integrates machine learning and constraint optimization to enforce constraints on a machine learning model. The AI Domain Definition Language (AIDDL) provides a modeling language and framework for integrative AI.

We have implemented the Moving Targets algorithm in the AIDDL framework for integrative AI. This has benefits for modeling, experimentation, and usability. On the modeling side, it enables us to express Moving Targets applications as regular machine learning problems extended with constraints and a loss function. On the experimentation side, we can now easily switch the learner and the constraint solver used by the Moving Targets algorithm, and we have added support for multiple constraint types. Finally, we have made the Moving Targets method easier to use, since it can now be controlled through a small model written in the AIDDL language.
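For illustration, the sketch below shows the alternating structure of this decomposition in plain Python: a learner step fits the current targets, and a master step adjusts them to satisfy the constraints. The learner (scikit-learn's Ridge) and the projection function clip_to_bound are illustrative placeholders, not the actual AIDDL-based implementation; a real master step would call an external solver (e.g. CP, MINLP, SMT).

import numpy as np
from sklearn.linear_model import Ridge

def moving_targets(X, y, learner, project, n_iters=5):
    """Minimal sketch of the alternating scheme (illustrative placeholder)."""
    z = y.copy()                    # adjusted targets, initialized with the true labels
    for _ in range(n_iters):
        learner.fit(X, z)           # learner step: ordinary supervised learning
        y_hat = learner.predict(X)  # current predictions
        z = project(y_hat, y)       # master step: a constraint solver would return
                                    # feasible targets close to the predictions
    return learner

# Toy stand-in for the constraint solver: predictions must not exceed a bound.
def clip_to_bound(y_hat, y, upper=1.0):
    return np.minimum(y_hat, upper)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([0.5, -0.2, 0.8]) + rng.normal(scale=0.1, size=200)
    model = moving_targets(X, y, Ridge(), clip_to_bound)
    print(model.predict(X[:5]))

Swapping Ridge for another scikit-learn-style estimator, or clip_to_bound for a call to an actual solver, changes nothing else in the loop; this interchangeability is what the AIDDL integration exposes at the modeling level.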

Our tangible outcomes are listed below.

Contribution to the objectives of HumaneAI-net WPs

T1.1 (Linking Symbolic and Subsymbolic Learning)

Moving Targets provides a convenient approach for enforcing constraint satisfaction in subsymbolic ML methods, within the limits of model bias. Our AIDDL integration carries this idea all the way to the modeling level where, e.g., a fairness constraint can be added with a single line, as illustrated in the sketch below.
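As a purely illustrative, Python-flavoured sketch (not actual AIDDL syntax), such an extension of a standard learning problem might look as follows; the dataset name, feature names, and the fairness-constraint parameters are hypothetical placeholders.

# Hypothetical problem specification (illustrative only, not AIDDL syntax):
# a plain supervised-learning task extended with one declarative constraint
# that the Moving Targets master step is asked to enforce.
problem = {
    "data": "census.csv",                                 # hypothetical dataset reference
    "features": ["age", "education", "hours_per_week"],   # hypothetical feature names
    "target": "income",
    "loss": "hamming",
    # the single extra line: a disparate-impact style fairness constraint
    "constraints": [("fairness", {"protected": "gender", "threshold": 0.2})],
}

In the actual system the same information is expressed as a small AIDDL model, which is what makes this single-line extension possible.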

T1.4 (Compositionality and Auto ML)

The Moving Targets method, combined with an easy way of modeling constraints via AIDDL, may increase trust in fully automated machine learning pipelines.

T2.6 (Dealing with Lack of Training Data)

Training data may be biased in a variety of ways, depending on how it was collected. We provide a convenient way to experiment with constraining such data sets, which may help overcome unwanted bias due to a lack of data.

Tangible outputs