ML models are now widely used in real-world decision-making, learning a function that maps observed features to decision outcomes. However, these models usually capture correlations in observational data rather than causal relationships, so they are not easily understandable to the average user: it is not possible to retrace a model's steps or to rely on its reasoning. It is therefore natural to investigate more explainable methodologies, such as causal discovery approaches, since they apply processes that mimic human reasoning. For this reason, we propose using such methodologies to build more explainable models that replicate human thinking and are easier for the average user to understand. More specifically, we suggest applying them to methods such as decision trees and random forests, which are by themselves highly explainable, correlation-based methods.
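To make the proposal concrete, the sketch below shows one possible shape of such a pipeline: a constraint-based causal discovery step (the PC algorithm from the open-source causal-learn package) estimates the direct causes of the outcome, and a standard scikit-learn decision tree is then trained only on those features. The synthetic data, variable names, and the choice of PC are illustrative assumptions, not the project's specified method.

    # Minimal sketch: causal discovery (PC) as a feature-selection step
    # before a decision tree. Assumes `pip install causal-learn scikit-learn`;
    # all data and names below are illustrative.
    import numpy as np
    from causallearn.search.ConstraintBased.PC import pc
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # Synthetic observational data: x0 is a true cause of the outcome y,
    # x1 is correlated with y but is an effect of it, x2 is pure noise.
    n = 2000
    x0 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    y = (x0 + 0.5 * rng.normal(size=n) > 0).astype(float)
    x1 = y + rng.normal(scale=0.8, size=n)

    data = np.column_stack([x0, x1, x2, y])  # outcome in the last column
    target = data.shape[1] - 1

    # Run PC; treating the binary outcome as continuous for the default
    # Fisher-z independence test is a simplification of this sketch.
    cg = pc(data, alpha=0.05)

    # In causal-learn's adjacency convention, graph[i, j] == -1 together
    # with graph[j, i] == 1 encodes a directed edge i -> j.
    adj = cg.G.graph
    parents = [i for i in range(target)
               if adj[i, target] == -1 and adj[target, i] == 1]
    print("estimated direct causes of the outcome:", parents)

    # Train the explainable, correlation-based model on the causally
    # relevant features only (fall back to all features if none found).
    X = data[:, parents] if parents else data[:, :target]
    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print("training accuracy:", tree.score(X, y))

Restricting the tree to the estimated causal parents keeps the final model both easy to inspect and anchored to causal rather than merely correlational structure, which is the explainability gain the project aims at.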

Output

  • 1 Conference Paper
  • 1 Prototype
  • Dataset Repository

Project Partners:

  • INESC TEC, University of Porto, Joao Gama
  • University of Pisa, Dino Pedreschi
  • CNR Pisa, Fosca Giannotti

Primary Contact: Joao Gama, INESC TEC, University of Porto