Machine learning models are now widely used in real-world decision-making, learning a function that maps observed features to decision outcomes. However, these models usually do not convey causal information about the associations present in observational data; as a result, they are not easily understandable for the average user, and it is possible neither to retrace the model's steps nor to rely on its reasoning. It is therefore natural to investigate more explainable methodologies, such as causal discovery approaches, since they apply processes that mimic human reasoning. For this reason, we propose using such methodologies to create more explicable models that replicate human thinking and are easier for the average user to understand. More specifically, we suggest applying them to methods such as decision trees and random forests, which are by themselves highly explainable, correlation-based methods.
1 Conference Paper
- INESC TEC, João Gama
- Università di Pisa (UNIPI), Dino Pedreschi
- Consiglio Nazionale delle Ricerche (CNR), Fosca Giannotti
Primary Contact: João Gama, INESC TEC, University of Porto