This project builds on earlier work by FBK in Trento on KENN <> and by VUA in Amsterdam <>, and aims to combine the insights of both. The project has three aims; depending on difficulty, we may achieve one, two, or all three.

1. The current version of KENN uses the Gödel t-conorm. We will develop versions of KENN based on other t-conorms (such as the product and Łukasiewicz t-conorms), whose properties have been investigated in the earlier work by VUA. This should improve the performance of KENN.

2. We will try to extend the expressivity of the logical constraints in KENN from sets of clauses to implications, again using the earlier theoretical work by VUA. This should increase the reasoning capabilities of KENN.

3. It should be possible to check the exact contribution of each clause to the final predictions of KENN. This will increase the explainability of KENN.
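For aim 1, the three t-conorms in question can be written down directly. The sketch below is purely illustrative (the function names are ours, and this is not the KENN implementation); a t-conorm combines the fuzzy truth values of the disjuncts of a clause:

```python
# Illustrative definitions of the three t-conorms mentioned above.
# Inputs a and b are fuzzy truth values in [0, 1].

def godel_tconorm(a: float, b: float) -> float:
    """Gödel (maximum) t-conorm, used by the current version of KENN."""
    return max(a, b)

def product_tconorm(a: float, b: float) -> float:
    """Product (probabilistic sum) t-conorm."""
    return a + b - a * b

def lukasiewicz_tconorm(a: float, b: float) -> float:
    """Łukasiewicz (bounded sum) t-conorm."""
    return min(1.0, a + b)
```

Note that the Gödel t-conorm only depends on the larger of its two arguments, which is relevant to the gradient issues discussed in the results section below.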
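For aim 2, extending the constraints from clauses to implications requires choosing a fuzzy semantics for a → b. Two common options are sketched below; this is illustrative only, and which operator (if either) the extended KENN will adopt is exactly the design question the aim addresses:

```python
# Hedged sketch of two standard fuzzy implication operators.
# Inputs a (antecedent) and b (consequent) are truth values in [0, 1].

def s_implication(a: float, b: float) -> float:
    """S-implication: read a -> b as (not a) or b, here combined with
    the product t-conorm; this yields the Reichenbach implication
    1 - a + a*b."""
    na = 1.0 - a
    return na + b - na * b

def godel_r_implication(a: float, b: float) -> float:
    """Gödel R-implication (residuum of the minimum t-norm): fully
    true when the consequent is at least as true as the antecedent."""
    return 1.0 if a <= b else b
```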


  • Paper describing improvements to KENN, published at a workshop or conference
  • Software: a new version of KENN


Project Partners:

  • Stichting VU
  • Fondazione Bruno Kessler (FBK)

Primary Contact: Frank van Harmelen, Vrije Universiteit Amsterdam

Main results of micro project:

Project has run for less than 50% of its allocated time.

Contribution to the objectives of HumaneAI-net WPs

Project has run for less than 50% of its allocated time.

Tangible outputs

  • Other: –

Results Description

KENN is a neuro-symbolic architecture developed in Trento. It allows a knowledge base to be injected when training a neural network. Theoretical work from Amsterdam has been used to improve KENN: by using background knowledge from a knowledge base, we can train the neural network with many fewer training examples. Since KENN is based on fuzzy logic, a major bottleneck was the choice of the appropriate configuration of the logic (the choice of norms and co-norms), since earlier work from Amsterdam had shown that some of the classical fuzzy logic configurations perform very poorly in a machine learning setting (with large areas of their value space having a zero gradient, or a zero gradient with respect to one of their inputs).
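The gradient problem can be seen on a small hand-worked example (an illustration of ours, not taken from the paper). For the Gödel t-conorm S(a, b) = max(a, b), the partial derivative with respect to the smaller input is 0, so that input receives no learning signal; for the product t-conorm S(a, b) = a + b - a·b, the partial derivatives are 1 - b and 1 - a, which are non-zero whenever the other input is below 1:

```python
# Illustrative only: gradients of two t-conorms, computed by hand.

def godel_grads(a: float, b: float) -> tuple:
    """(dS/da, dS/db) for S = max(a, b), away from the tie a == b.
    The smaller argument always gets a zero gradient."""
    return (1.0, 0.0) if a > b else (0.0, 1.0)

def product_grads(a: float, b: float) -> tuple:
    """(dS/da, dS/db) for S = a + b - a*b.
    Both arguments get a non-zero gradient whenever the other is < 1."""
    return (1.0 - b, 1.0 - a)
```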
As a result of the collaboration (visits from Amsterdam staff to Trento and vice versa), we have developed so-called fuzzy refinement functions. Such refinement functions change the truth value computed by a fuzzy logic operator in order to improve its gradient behaviour, while still maintaining the desired logical semantics. We have implemented these refinement functions in an algorithm called Iterative Local Refinement (ILR). Our experiments show that ILR finds refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot. Finally, ILR produces competitive results on the MNIST addition task.
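As a toy illustration of the refinement idea (a hypothetical sketch of ours, not the ILR implementation from the paper): for a disjunctive clause l1 ∨ … ∨ ln evaluated with the Gödel t-conorm max(l1, …, ln), the smallest change that makes the clause at least as true as a target value t is to raise only the currently largest literal to t, leaving all others untouched:

```python
# Hypothetical sketch of a refinement function for a single clause
# evaluated with the Gödel t-conorm. If the clause already reaches the
# target truth value, nothing changes; otherwise only the largest
# literal is raised, which is the minimal change under max-semantics.

def refine_godel_clause(truths: list, target: float) -> list:
    """Return refined truth values so that max(refined) >= target."""
    if max(truths) >= target:
        return list(truths)  # clause already true enough
    refined = list(truths)
    refined[truths.index(max(truths))] = target
    return refined
```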


Alessandro Daniele, Emile van Krieken, Luciano Serafini & Frank van Harmelen. Refining neural network predictions using background knowledge. Machine Learning (2023).

Links to Tangible results

Publication at
Code and data at