Research Roadmap for European Human-Centered AI

A report delivered by the HumaneAI partners describing the necessary steps to organize a community of researchers and innovators around a research program that seeks to create AI technologies that empower humans and human society to vastly improve quality of life for all. The report is organized around five major research streams:

  • Human-in-the-Loop Machine Learning, Reasoning, and Planning
  • Multimodal Perception and Modelling
  • Human AI Interaction and Collaboration
  • Societal AI
  • AI Ethics, Law and Responsible AI


Research Challenges for Humane AI

At the core of our concept is the development of foundations for intelligent systems that interact and collaborate with people to enhance human abilities and empower both individuals and society. Collaboration will require that humans and AI systems work together as partners to achieve a common goal, sharing a mutual understanding of each other’s abilities and respective roles. Human-level performance in collaboration will require the integration of learning, reasoning, perception, and interaction.

Humane AI must go beyond HCI challenges to ensure that the human maintains control. This includes enabling users to understand how interactions are driven (transparency) and to retain final control over their interaction with AI systems. It also means addressing compliance with European ethical and social values as a core research problem, not simply as a boundary condition. We must seek new methods that ensure compliance with European ethical, legal, and cultural values by design. This will require multidisciplinary collaboration across AI, philosophy, the social sciences, and complex systems research.

A number of fundamental gaps in knowledge and technology must be addressed in three closely related areas. The first is learning, reasoning, and planning methods that allow for a large degree of interactivity. To enable humans and AI systems to collaborate on the basis of trust and to enhance each other’s capabilities, intelligent systems must not only be able to provide explanations at the end of a learning or reasoning task; they must also provide feedback on progress and be able to incorporate high-level human input. We refer to such novel methods as “human-in-the-loop learning, reasoning and planning”. Second, human-aware interaction and collaboration will require multimodal perception of dynamic real-world environments and social settings, including the ability to build and maintain comprehensive models of environments and of the humans interacting within them. Intelligent systems must share an understanding of a problem’s larger context to properly cooperate in developing a solution. Third, appropriate interaction and collaboration mechanisms must be developed at both the individual and the collective level.

Human-in-the-Loop Machine Learning, Reasoning, and Planning

Allowing humans not only to understand and follow the learning, reasoning, and planning processes of AI systems (making them explainable and accountable), but also to seamlessly interact with them, guide them, and enrich them with uniquely human capabilities, knowledge about the world, and the specific user’s personal perspective.
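
To make the idea more concrete, the minimal sketch below shows one common form of human-in-the-loop machine learning, uncertainty-based active learning, in which a model repeatedly asks a person to label the example it is least certain about. It is an illustration only; the classifier choice, the console-based `ask_human_label` function, and the requirement that the initial labelled seed covers every class are assumptions, not part of the roadmap.

```python
# Minimal human-in-the-loop learning sketch: uncertainty-sampling active learning.
# Illustrative assumptions: a scikit-learn classifier, numeric feature vectors,
# and a console prompt standing in for the human labeller.
import numpy as np
from sklearn.linear_model import LogisticRegression


def ask_human_label(x: np.ndarray) -> int:
    """Stand-in for the human in the loop: show the example and ask for a label."""
    print("Please label this example:", x)
    return int(input("label (integer class id): "))


def active_learning_loop(X_seed, y_seed, X_pool, rounds: int = 10):
    """Train, query the human on the most uncertain pool example, and repeat.

    Assumes the labelled seed already contains at least one example per class.
    """
    X_labelled, y_labelled = list(X_seed), list(y_seed)
    pool = list(X_pool)
    model = LogisticRegression(max_iter=1000)

    for _ in range(rounds):
        if not pool:
            break
        model.fit(np.array(X_labelled), np.array(y_labelled))

        # Uncertainty: how far the top class probability is from full confidence.
        probs = model.predict_proba(np.array(pool))
        most_uncertain = int(np.argmax(1.0 - probs.max(axis=1)))

        # High-level human input: a label for the example the model is least sure about.
        x = pool.pop(most_uncertain)
        X_labelled.append(x)
        y_labelled.append(ask_human_label(x))

    model.fit(np.array(X_labelled), np.array(y_labelled))
    return model
```

The point of the sketch is the interaction pattern rather than the particular model: the human contributes knowledge exactly where the system is uncertain, and the system exposes its uncertainty during learning rather than only a final answer.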

Multimodal Perception and Modelling

Enabling AI systems to perceive and interpret complex real-world environments, human actions, and interactions situated in such environments, as well as the related emotions, motivations, and social structures. This requires enabling AI systems to build up and maintain comprehensive models that, in their scope and level of sophistication, should strive for a more human-like world understanding and include common-sense knowledge that captures causality and is grounded in physical reality.
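
As a very small illustration of one ingredient of this challenge, the sketch below shows late fusion of two modalities: feature vectors from an audio stream and a video stream are concatenated and fed to a single classifier. The placeholder data, embedding sizes, and classifier are assumptions made for illustration; the roadmap's challenge of human-like, causally grounded world models goes far beyond such fusion.

```python
# Minimal multimodal late-fusion sketch (illustrative only).
# Assumptions: precomputed per-clip audio and video embeddings; here they are
# random placeholders, so the reported accuracy is only that of chance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

n_clips = 500
audio_features = rng.normal(size=(n_clips, 32))   # e.g. audio embeddings per clip
video_features = rng.normal(size=(n_clips, 64))   # e.g. video embeddings per clip
labels = rng.integers(0, 3, size=n_clips)         # e.g. three activity classes

# Late fusion: concatenate modality features and train one joint classifier.
fused = np.concatenate([audio_features, video_features], axis=1)

split = int(0.8 * n_clips)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(fused[:split], labels[:split])
print("held-out accuracy:", clf.score(fused[split:], labels[split:]))
```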

Human AI Interaction and Collaboration

Developing paradigms that allow humans and AI systems, including service robots and smart environments, to interact and collaborate in ways that enhance human abilities and empower people.

Societal AI

Being able to model and understand the consequences of complex network effects in large-scale mixed communities of humans and AI systems interacting over various temporal and spatial scales. This includes the ability to balance requirements related to individual users with the common good and broader societal concerns.
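
A toy simulation can hint at what modelling network effects in mixed human-AI communities means in practice. In the sketch below, human agents average their opinions with their neighbours while a small fraction of AI agents amplify the local majority, a crude stand-in for recommender-style feedback; the network model, update rules, and parameter values are all illustrative assumptions rather than anything prescribed by the roadmap.

```python
# Toy agent-based model of opinion dynamics in a mixed human-AI network.
# All modelling choices here (random graph, update rules, parameters) are
# illustrative assumptions, not part of the HumaneAI roadmap.
import numpy as np

rng = np.random.default_rng(0)

N = 200                 # total number of agents
AI_FRACTION = 0.1       # fraction of agents that are AI systems
P_EDGE = 0.05           # edge probability of the Erdos-Renyi random graph

upper = np.triu(rng.random((N, N)) < P_EDGE, k=1)
adjacency = upper | upper.T                  # undirected graph, no self-loops
is_ai = rng.random(N) < AI_FRACTION
opinions = rng.uniform(-1.0, 1.0, N)         # continuous opinions in [-1, 1]

for step in range(101):
    new_opinions = opinions.copy()
    for i in range(N):
        neighbours = np.flatnonzero(adjacency[i])
        if neighbours.size == 0:
            continue
        local_mean = opinions[neighbours].mean()
        if is_ai[i]:
            # AI agent: amplifies whatever is already popular in its neighbourhood.
            new_opinions[i] = np.clip(1.5 * local_mean, -1.0, 1.0)
        else:
            # Human agent: slowly averages its view with the neighbourhood.
            new_opinions[i] = 0.8 * opinions[i] + 0.2 * local_mean
    opinions = new_opinions
    if step % 20 == 0:
        print(f"step {step:3d}: mean opinion {opinions.mean():+.2f}, spread {opinions.std():.2f}")
```

Varying AI_FRACTION or the amplification factor then lets one ask the kinds of questions this stream raises, for instance how quickly the spread of opinions collapses as algorithmic amplification grows.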

AI Ethics, Law and Responsible AI

Ensuring that the design and use of AI are aligned with ethical principles and human values, taking into account cultural and societal context, while enabling human users to act ethically and respecting their autonomy and self-determination. This also implies that AI systems must be “under the Rule of Law”: their research, design, operations, and output should be contestable by those affected by their decisions, and should give rise to liability for those who put them on the market.