After choosing a formal definition of fairness (we limit ourselves to definitions based on group fairness through equal resources or equal opportunities), one can attain fairness on the basis of this definition in two ways: directly incorporating the chosen definition into the algorithm through in-processing (as an additional constraint besides the usual error minimization, or via adversarial learning, etc.), or introducing an additional layer to the pipeline through post-processing (treating the model as a black box and focusing on its inputs and predictions to alter the decision boundary towards the ideal fair outcomes, e.g. using a Glass-Box methodology).
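To make the two intervention points concrete, below is a minimal, self-contained Python sketch on synthetic data with a binary sensitive attribute. It is purely illustrative and not the method from the paper: the demographic-parity penalty, the random-search optimizer, and the per-group threshold rule are all simplifying assumptions chosen to keep the example dependency-free.

```python
import numpy as np

# Hypothetical setup: binary classification with a binary sensitive attribute.
rng = np.random.default_rng(0)
n = 2000
sensitive = rng.integers(0, 2, size=n)            # group membership in {0, 1}
X = rng.normal(size=(n, 3)) + sensitive[:, None]  # features correlated with the group
y = (X @ np.array([1.0, -0.5, 0.3]) + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# In-processing (illustrative): logistic loss plus a demographic-parity penalty,
# i.e. the fairness definition enters the training objective itself.
def loss(w, lam=1.0):
    p = sigmoid(X @ w)
    log_loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    parity_gap = abs(p[sensitive == 1].mean() - p[sensitive == 0].mean())
    return log_loss + lam * parity_gap

# Crude random-search optimization, only to keep the sketch free of dependencies.
w_best, best = np.zeros(3), loss(np.zeros(3))
for _ in range(3000):
    w_try = w_best + rng.normal(scale=0.1, size=3)
    trial = loss(w_try)
    if trial < best:
        w_best, best = w_try, trial

# Post-processing (illustrative): treat the trained model as a black box and
# adjust only the decision thresholds, one per group, after training.
scores = sigmoid(X @ w_best)

def pick_threshold(group):
    # Pick the threshold whose positive rate is closest to the overall base
    # rate -- a simple demographic-parity-style criterion, one of many options.
    target = y.mean()
    grid = np.linspace(0.05, 0.95, 19)
    rates = [(scores[sensitive == group] >= t).mean() for t in grid]
    return grid[int(np.argmin([abs(r - target) for r in rates]))]

thresholds = {g: pick_threshold(g) for g in (0, 1)}
thr = np.where(sensitive == 1, thresholds[1], thresholds[0])
y_hat = (scores >= thr).astype(int)
print("per-group positive rates:",
      {g: round(float(y_hat[sensitive == g].mean()), 3) for g in (0, 1)})
```

Note how the two sketches differ in exactly the way discussed above: the in-processing variant changes which model is learned, while the post-processing variant leaves the model untouched and reshapes only its decisions.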

We aim to compare both approaches, providing guidance on how best to incorporate fairness definitions into the design pipeline. We focus on the following research questions: Is there any qualitative difference between fairness acquired through in-processing and fairness attained through post-processing? What are the advantages of each method (e.g. performance, amenability to different fairness definitions)?

Output

Paper: addresses the differences between in-processing and post-processing methods for ML models, focusing on fairness vs. performance trade-offs.

Presentations

Project Partners:

  • ING Groep NV, Dilhan Thilakarathne
  • Umeå University (UMU), Andrea Aler Tubella

Primary Contact: Dilhan Thilakarathne, ING Bank, NL

Main results of micro project:

The work focuses on the choice between in-processing and post-processing, showing that this choice is not value-free: it has serious implications in terms of who is affected by a fairness intervention. The work suggests how translating technical engineering questions into ethical decisions can concretely contribute to the design of fair models and to the societal discussions around them.

The experimental study provides evidence that is robust w.r.t. different implementations, discussed for the case of a credit risk application. At the same time, assessing the impacts of the resulting classification can have implications for the specific context of the original problem.

Contribution to the objectives of HumaneAI-net WPs

T6.7. A finance-domain industrial use case with clear benefits for ML-based applications where fairness is important.

T5.4. Promotes the importance of ethics in design and leads towards future methods and tools for the value-based design and development of AI systems.

T5.5. Compatible with our vision of responsible AI by design.

Tangible outputs

  • Publication: Bias mitigation: in-processing or post-processing? Ethical decisions hidden behind engineering choices – Andrea Aler Tubella, Flavia Barsotti, Ruya Gokhan Kocer, Julian Alfredo Mendez