Contact person: Dilhan Thilakarathne (dilhan.thilakarathne@ing.com)

Internal Partners:

  1. ING Groep NV, Dilhan Thilakarathne
  2. Umeå University (UMU), Andrea Aler Tubella  


After choosing a formal definition of fairness (we limit ourselves to definitions based on group fairness through equal resources or equal opportunities), one can attain fairness on the basis of this definition in two ways: directly incorporating the chosen definition into the algorithm through in-processing (e.g. as an additional constraint besides the usual error minimization, or via adversarial learning), or introducing an additional layer to the pipeline through post-processing (treating the model as a black box and altering the decision boundary, based only on its inputs and predictions, to approximate the ideal fair outcomes, e.g. using a Glass-Box methodology).
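To make the contrast concrete, the following is a minimal, hypothetical sketch of the post-processing route: a trained model is treated as a black box emitting scores, and one decision threshold per group is chosen so that both groups receive the same approval rate (demographic parity). The data, group labels, and `target` rate are synthetic illustrations, not the project's actual setup; an in-processing approach would instead add a fairness penalty or constraint to the training objective itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box credit scores for two groups, with group 1
# receiving systematically higher scores (a biased model).
n = 2000
group = rng.integers(0, 2, size=n)
scores = np.clip(rng.normal(loc=0.35 + 0.15 * group, scale=0.15, size=n), 0, 1)

def group_thresholds(scores, group, target_rate):
    """One threshold per group so each group's approval rate equals target_rate."""
    return {g: np.quantile(scores[group == g], 1.0 - target_rate)
            for g in np.unique(group)}

# Single global threshold: approval rates differ across groups.
global_rate = [(scores[group == g] >= 0.5).mean() for g in (0, 1)]

# Post-processed, group-specific thresholds: approval rates are equalized.
target = 0.4
th = group_thresholds(scores, group, target)
decisions = scores >= np.where(group == 1, th[1], th[0])
fair_rate = [decisions[group == g].mean() for g in (0, 1)]

print("global threshold rates:", global_rate)
print("per-group threshold rates:", fair_rate)
```

Equalizing approval rates implements demographic parity; an equal-opportunity definition would instead equalize true-positive rates across groups, which changes who is affected by the intervention.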

We aim to compare both approaches, providing guidance on how best to incorporate fairness definitions into the design pipeline, focusing on the following research questions: Is there any qualitative difference between fairness acquired through in-processing and fairness attained by post-processing? What are the advantages of each method (e.g. performance, amenability to different fairness definitions)?

Results Summary

The work shows that the choice between in-processing and post-processing is not value-free: it has serious implications for who will be affected by a fairness intervention. It also suggests how translating technical engineering questions into ethical decisions can concretely contribute to the design of fair models and to the societal discussion around them.

The experimental study provides evidence that is robust with respect to different implementations, discussed for the case of a credit risk application. At the same time, assessing the impacts of the resulting classification can have implications for the specific context of the original problem. We summarize our results in a paper addressing the differences between in-processing and post-processing methods for ML models, focusing on fairness vs. performance trade-offs.

Tangible Outcomes

  1. Ethical implications of fairness interventions: what might be hidden behind engineering choices? (Andrea Aler Tubella, Flavia Barsotti, Ruya Gokhan Kocer, Julian Alfredo Mendez)
    https://doi.org/10.1007/s10676-022-09636-z
  2. Video presentation summarizing the project