Contact person: Andreas Theodorou (andreas.theodorou@upc.edu)
Internal Partners:
- Umeå University (UmU), Andreas Theodorou, andreas.theodorou@umu.se
External Partners:
- University of Bergen (UiB), Marija Slavkovik, marija.slavkovik@uib.no
- Open University of Cyprus (OUC), Loizos Michael, loizos@ouc.ac.cy
The right to contest a decision that has consequences for individuals or society is a well-established democratic right. In the European Union, the General Data Protection Regulation (GDPR) explicitly requires means of contesting decisions made by algorithmic systems. Contesting a decision is not a matter of simply providing an explanation, but of assessing whether the decision and its explanation are permissible against an externally provided policy. Despite its importance, little fundamental work has been done on developing the means for effectively contesting decisions.

In this micro-project, we develop the foundations needed to integrate the contestability of decisions based on socio-ethical policy (e.g. the Ethics Guidelines for Trustworthy Artificial Intelligence (AI)) into the decision-making system. The micro-project will lay the basis for a line of research on the contestability of algorithmic decision making by considering the ethical and socio-legal aspects discussed in WP5 of the HumanE-AI-Net project. Over the course of the micro-project, we will pursue three objectives: 1) extend our work on a formal language for socio-ethical values, concretised as norms and requirements; 2) conceptualise a feedback architecture that monitors the predictions and decisions made by an AI system and checks them against a policy; and 3) develop a logic to evaluate black-box predictions against formal socio-technical requirements, extending our previous work on monitoring and assessing decisions made by autonomous agents.

The end result is an agent architecture with four core components: i) a predictor component, e.g. a neural network, able to produce recommendations for a course of action; ii) a decision-making component, which decides if and which action the agent should take; iii) a utility component, which influences the decision-making component by ascribing a utility value to each potential action; and iv) a 'governor' component, able to reason about and suggest the acceptance or rejection of recommendations made by the predictor component. During the micro-project, we focus on compliance checking, but we ensure the architecture is flexible and modular enough to facilitate extensions, such as the governor component providing feedback for 'retraining' the predictor component.
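The four-component architecture could be sketched as follows. This is a minimal illustrative sketch only, not an implementation from the project: all names (Recommendation, Norm, Governor, Agent) are hypothetical, and the governor here performs only simple per-norm compliance checking.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Recommendation:
    """A predictor's suggested course of action (hypothetical structure)."""
    action: str
    confidence: float

@dataclass
class Norm:
    """A socio-ethical requirement, concretised as a predicate over actions."""
    name: str
    permits: Callable[[str], bool]

class Governor:
    """Governor component: accepts or rejects recommendations against a policy."""
    def __init__(self, policy: List[Norm]):
        self.policy = policy

    def review(self, rec: Recommendation) -> bool:
        # Accept only if every norm in the policy permits the action.
        return all(norm.permits(rec.action) for norm in self.policy)

class Agent:
    """Wires together the four components described in the text."""
    def __init__(self, predictor, utility, governor: Governor):
        self.predictor = predictor   # e.g. a neural network producing recommendations
        self.utility = utility       # ascribes a utility value to a candidate action
        self.governor = governor

    def decide(self, observation) -> Optional[str]:
        # Decision-making component: take the highest-utility recommendation
        # that the governor accepts; otherwise take no action.
        recs = self.predictor(observation)
        ranked = sorted(recs, key=lambda r: self.utility(r.action), reverse=True)
        for rec in ranked:
            if self.governor.review(rec):
                return rec.action
        return None

# Usage: a policy norm vetoes fully automated denials, so the agent falls
# back to the next-best permitted action.
policy = [Norm("no_automated_denial", lambda a: a != "auto_deny")]
agent = Agent(
    predictor=lambda obs: [Recommendation("auto_deny", 0.9),
                           Recommendation("refer_to_human", 0.6)],
    utility=lambda a: {"auto_deny": 1.0, "refer_to_human": 0.5}[a],
    governor=Governor(policy),
)
print(agent.decide(None))  # -> refer_to_human
```

The modularity noted above shows up here as the governor being a separate object: its `review` method could be replaced by richer reasoning, or extended to return feedback for retraining the predictor, without changing the other components.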
Results Summary
We have developed a framework aimed at facilitating appeals against the opaque operations of AI models, drawing on foundational work in contestable AI and adhering to regulatory mandates such as the General Data Protection Regulation (GDPR), which grants individuals the right to contest solely automated decisions. The aim is to extend the discourse on socio-ethical values in AI by conceptualising a feedback architecture that monitors AI decisions and evaluates them against formal socio-technical requirements. Our results include a proposal for an appeal process and an argumentation model that supports reasoning with justifications and explanations, thereby enhancing the contestability of AI systems. Our work not only advances the theoretical foundations of contestable AI but also proposes practical steps towards implementing systems that respect individuals' rights to challenge and understand AI decisions. We have written a draft paper, which we aim to submit to the AAMAS Blue Sky Ideas track.
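The idea of an appeal as an argumentation exchange can be illustrated with a toy model. This is a hypothetical sketch, not the project's argumentation model: a decision's justification is overturned only if some appeal ground attacking it is itself undefeated, a naive recursive evaluation loosely in the spirit of grounded semantics.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    """An argument with the arguments that attack it (illustrative only)."""
    claim: str
    attackers: List["Argument"] = field(default_factory=list)

def defeated(arg: Argument) -> bool:
    # An argument is defeated if at least one of its attackers is undefeated.
    return any(not defeated(a) for a in arg.attackers)

def appeal(justification: Argument, grounds: List[Argument]) -> str:
    """Contest a decision: the appellant's grounds attack the justification."""
    justification.attackers.extend(grounds)
    return "overturned" if defeated(justification) else "upheld"

# Usage: an unanswered ground overturns the decision; a ground that is
# itself counter-argued leaves the decision upheld.
just = Argument("loan denied due to low income")
ground = Argument("income data was outdated")
print(appeal(just, [ground]))  # -> overturned

counter = Argument("data was re-verified on the appeal date")
rebutted_ground = Argument("income data was outdated", attackers=[counter])
just2 = Argument("loan denied due to low income")
print(appeal(just2, [rebutted_ground]))  # -> upheld
```

The point of the sketch is that contesting is more than requesting an explanation: the justification itself becomes an object that can be attacked and defended within an explicit reasoning process.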