After choosing a formal definition of fairness (we limit ourselves to definitions based on group fairness through equal resources or equal opportunities), one can attain fairness on the basis of this definition in two ways: directly incorporating the chosen definition into the algorithm through in-processing (e.g. as an additional constraint besides the usual error minimization, or via adversarial learning), or introducing an additional layer to the pipeline through post-processing (treating the model as a black box and using only its inputs and predictions to alter the decision boundary so that it approximates the ideal fair outcomes, e.g. using a Glass-Box methodology).
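As a minimal sketch of the post-processing route, the example below keeps a trained model as a black box and re-thresholds its scores per group to enforce demographic parity (equal positive-prediction rates). Everything here is an illustrative assumption: the synthetic data, the plain logistic model, and the quantile-based threshold rule are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic data: a binary sensitive attribute `a` shifts the feature
# distribution, so an unconstrained model treats the groups differently.
a = rng.integers(0, 2, n)
x = rng.normal(0.0, 1.0, n) + a
y = (x + rng.normal(0.0, 1.0, n) > 0.5).astype(float)

# Train an ordinary logistic regression by gradient descent
# (no fairness constraint anywhere in the objective).
X = np.column_stack([x, np.ones(n)])
w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

scores = 1.0 / (1.0 + np.exp(-X @ w))

def positive_rate(pred, group):
    """Fraction of positive predictions within one sensitive group."""
    return pred[a == group].mean()

# Black-box predictions at the usual 0.5 threshold: the groups
# receive positive decisions at visibly different rates.
base = (scores >= 0.5).astype(float)
gap_before = abs(positive_rate(base, 0) - positive_rate(base, 1))

# Post-processing: pick a per-group threshold at the (1 - target)
# quantile of that group's scores, so each group's positive rate
# matches the overall rate `target` (demographic parity).
target = base.mean()
thresholds = {g: np.quantile(scores[a == g], 1.0 - target) for g in (0, 1)}
fair = np.where(a == 0,
                scores >= thresholds[0],
                scores >= thresholds[1]).astype(float)
gap_after = abs(positive_rate(fair, 0) - positive_rate(fair, 1))

print(f"parity gap before: {gap_before:.3f}, after: {gap_after:.3f}")
```

Note that the model itself is never retrained; only the decision boundary is moved per group, which is what distinguishes this approach from in-processing, where the same parity requirement would be added to the training objective.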
We aim to compare both approaches, providing guidance on how best to incorporate fairness definitions into the design pipeline, focusing on the following research questions: Is there any qualitative difference between fairness acquired through in-processing and fairness attained by post-processing? What are the advantages of each method (e.g. performance, amenability to different fairness definitions)?
Paper: Addresses the difference between in-processing and post-processing methods on ML models, focusing on fairness vs. performance trade-offs.
- ING Bank, NL, Dilhan Thilakarathne
- Umeå, Andrea Aler Tubella
Primary Contact: Dilhan Thilakarathne, ING Bank, NL