This project takes seriously the fact that the development and deployment of AI systems are not above the law as enacted in constitutional democracies. This feeds into the task of incorporating fundamental rights protection into the architecture of AI systems, including (1) the checks and balances of the Rule of Law and (2) the requirements imposed by positive law that elaborates fundamental rights protection.
A key result of this task will be a report setting out a coherent set of design principles firmly grounded in relevant positive law, with a clear emphasis on European law (both EU and Council of Europe). To help developers understand the core tenets of the EU legal framework, we have developed two tutorials: one in 2020 on Legal Protection by Design in relation to EU data protection law [hyperlink to Tutorial 2020], and one in 2021 on the European Commission's proposal for an EU AI Act [hyperlink to Tutorial 2021]. In the fall of 2022 we will follow up with a tutorial on the proposed EU AI Liability Directive.
Our findings will entail:
- A sufficiently detailed overview of legally relevant roles, such as end-users, targeted persons, software developers, hardware manufacturers, those who put AI applications on the market, platforms that integrate service provision both vertically and horizontally, and providers of infrastructure (telecom providers, cloud providers, providers of cyber-physical infrastructure, smart grid providers, etc.);
- A sufficiently detailed legal vocabulary, explained at the level of AI applications, covering notions such as legal subjects, legal objects, legal rights and obligations, private law liability, and fundamental rights protection;
- High-level principles that anchor the Rule of Law: transparency (e.g. explainability, preregistration of research design), accountability (e.g. clear attribution of tort liability, fines imposed by relevant supervisory authorities, criminal law liability), and contestability (e.g. the repertoire of legal remedies, the adversarial structure of legal procedure).