Reports on European Human-Centered AI

Report 1: Research Roadmap for European Human-Centered AI

This report, delivered by the HumaneAI partners, summarizes the ideas and considerations that emerged while exploring ways to sustain and develop a large-scale Humane AI community in the absence of a European Flagship programme.

These are intended as recommendations to the funding bodies of the European Union on the efficient implementation of large-scale, long-term research initiatives in AI and related fields. The report covers:

  • Micro-projects: funding a large community with limited funds
  • Case-by-case involvement of outside players
  • Cooperation with Industry
  • Industrial Co-Sponsorship of Internal Calls
  • Industrial Participation in Micro-projects


Report 2: Policy Recommendations for Research Methods on European Human-Centered AI

This report, delivered by the HumaneAI partners, describes the steps needed to organize a community of researchers and innovators around a research program that seeks to create AI technologies that empower humans and human society to vastly improve quality of life for all. It follows five major streams:

  • Human-in-the-Loop Machine Learning, Reasoning, and Planning
  • Multimodal Perception and Modelling
  • Human AI Interaction and Collaboration
  • Societal AI
  • AI Ethics, Law and Responsible AI


Report 3: Policy Recommendations for Research Methods on European Human-Centered AI

In this report, the HumaneAI partners present the grounding principles of Responsible AI, namely Accountability, Responsibility, and Transparency. They then introduce the Design for Values methodology to guide the development of Responsible AI systems, discuss how these principles can be integrated into a system development life-cycle framework, and finally focus on legal issues, in particular legal protection by design (LPbD). The chapters are structured as follows:

  • Accountability
  • Responsibility
  • Transparency
  • Design for Values
  • Towards Responsible AI Development Life-Cycle
  • Legal Aspects of Responsible AI


Report 4: The Legal Protection Debt of Training Datasets

This report, delivered by LSTS, centres on the practices of ML dataset creation, curation, and dissemination. It argues that, in the absence of appropriate safeguards, a “Legal Protection Debt” can accumulate incrementally across the stages of ML pipelines. The report stresses the need for actors involved in ML pipelines to adopt a forward-looking perspective on legal compliance. This requires overcoming a siloed and segmented approach to legal liability, and paying keen attention to the future dissemination and potential use cases of training datasets.
