This project builds on earlier work by FBK in Trento on KENN <https://arxiv.org/pdf/2009.06087.pdf> and by VUA in Amsterdam <https://arxiv.org/abs/2006.03472>, and aims to combine the insights of both. The project has three aims; depending on difficulty, we may achieve one, two, or all three.

1. The current version of KENN uses the Gödel t-conorm. We will develop versions of KENN based on other t-conorms (such as the product and Łukasiewicz t-conorms), whose properties were investigated in the earlier work by VUA; a sketch of the three operators follows this list. This should improve the performance of KENN.

2. We will try to extend the expressivity of the logical constraints in KENN from sets of clauses to implications, again using the earlier theoretical work by VUA. This should increase the reasoning capabilities of KENN.

3. It should be possible to check the exact contribution of each clause to the final predictions of KENN. This will increase the explainability of KENN.
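
As a small illustration of aim 1, the following sketch (plain NumPy; the function names are illustrative, not part of KENN's API) shows the three t-conorms in question, each combining truth values in [0, 1]:

    import numpy as np

    def goedel_tconorm(a, b):
        # Goedel: max(a, b) -- the t-conorm the current KENN uses.
        return np.maximum(a, b)

    def product_tconorm(a, b):
        # Product (probabilistic sum): a + b - a*b.
        return a + b - a * b

    def lukasiewicz_tconorm(a, b):
        # Lukasiewicz: min(1, a + b), saturating at fully true.
        return np.minimum(1.0, a + b)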

Output

  • Paper describing improvements to KENN, published at a workshop or conference
  • Software: a new version of KENN
  • Presentations

Project Partners:

  • Stichting VU, Frank.van.Harmelen@vu.nl
  • Fondazione Bruno Kessler (FBK), serafini@fbk.eu

Primary Contact: Frank van Harmelen, Vrije Universiteit Amsterdam

Main results of micro project:

Project has run for less than 50% of its allocated time.

Contribution to the objectives of HumaneAI-net WPs

Project has run for less than 50% of its allocated time.

Tangible outputs

  • Other: –

Results Description

KENN is a neuro-symbolic architecture developed in Trento. It allows a knowledge base to be injected into the training of a neural network. Theoretical work from Amsterdam has been used to improve KENN. As a result of using background knowledge from a knowledge base, the neural network can be trained with many fewer training examples. Since KENN is based on fuzzy logic, a major bottleneck was choosing the appropriate configuration of the logic (the choice of norms and co-norms): earlier work from Amsterdam had shown that some of the classical fuzzy logic configurations perform very poorly in a machine learning setting, with large areas of their value space having a zero gradient, or a zero gradient for one of their input values.
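
The zero-gradient problem can be seen directly in a few lines of PyTorch (a hypothetical snippet, not taken from the KENN codebase): with the Gödel t-conorm max(a, b), the smaller input never receives a gradient, so no learning signal reaches it.

    import torch

    a = torch.tensor(0.3, requires_grad=True)
    b = torch.tensor(0.8, requires_grad=True)

    out = torch.maximum(a, b)  # Goedel t-conorm
    out.backward()

    print(a.grad)  # tensor(0.) -- the smaller input gets no gradient
    print(b.grad)  # tensor(1.)
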
As a result of the collaboration (visits from Amsterdam staff to Trento and vice versa), we have developed so-called fuzzy refinement functions. Such refinement functions change the truth value computed by a fuzzy logic operator in order to improve its gradient behaviour, while still maintaining the desired logical behaviour. We have implemented these refinement functions in an algorithm called Iterative Local Refinement (ILR). Our experiments show that ILR finds refinements of complex SAT formulas in significantly fewer iterations than gradient descent, and frequently finds solutions where gradient descent cannot. Finally, ILR produces competitive results on the MNIST addition task.
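
To give a flavour of the approach (a heavily simplified sketch under our own naming, not the actual ILR implementation, which is available in the repository linked below): for a Gödel disjunction, the minimal refinement that lifts the formula's truth value to a target only needs to raise the currently largest literal, and ILR applies such per-formula refinements repeatedly until a fixed point is reached.

    import numpy as np

    def refine_disjunction_goedel(t, target):
        # Minimal refinement for a Goedel disjunction max(t): if the clause
        # is not yet satisfied to degree `target`, lift its largest literal.
        t = t.copy()
        if t.max() < target:
            t[np.argmax(t)] = target
        return t

    def iterative_local_refinement(clauses, t, target=1.0, steps=10):
        # Refine each clause (a list of variable indices) in turn, and stop
        # when the truth values no longer change or the budget runs out.
        for _ in range(steps):
            prev = t.copy()
            for clause in clauses:
                t[clause] = refine_disjunction_goedel(t[clause], target)
            if np.allclose(t, prev):
                break
        return t

    # Two clauses over three variables, starting from low truth values:
    t = iterative_local_refinement([[0, 1], [1, 2]], np.array([0.2, 0.4, 0.1]))
    print(t)  # [0.2 1.  0.1] -- only the shared literal is raised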

Publications

Refining neural network predictions using background knowledge,
Alessandro Daniele, Emile van Krieken, Luciano Serafini & Frank van Harmelen
Machine Learning (2023)

https://link.springer.com/article/10.1007/s10994-023-06310-3

Links to Tangible results

  • Publication: https://link.springer.com/article/10.1007/s10994-023-06310-3
  • Code and data: https://github.com/DanieleAlessandro/IterativeLocalRefinement