Humane AI Ethical Framework

In this report, the HumaneAI partners present the grounding principles of Responsible AI, namely Accountability, Responsibility and Transparency. We then introduce the Design for Values methodology to guide the development of Responsible AI systems. We discuss how these principles can be integrated into a system development life-cycle framework, and finally we focus on legal issues, in particular legal protection by design (LPbD). The chapters are structured as follows:

  • Accountability
  • Responsibility
  • Transparency
  • Design for Values
  • Towards Responsible AI Development Life-Cycle
  • Legal Aspects of Responsible AI


Introduction

AI has huge potential to bring accuracy, efficiency, cost savings and speed to a whole range of human activities and to provide entirely new insights into behaviour and cognition. However, the way AI is developed and deployed largely determines how it will impact our lives and societies. AI, both embedded in software systems and embodied in artefacts (e.g. robots), is everywhere.

It affects everyone, and has the capability to transform public and private organisations, and the services and products they offer. AI’s impact concerns not only the research and development directions of AI, but also how these systems are introduced into society.

There is debate concerning how the use of AI will influence labour, well-being, social interactions, healthcare, income distribution and other areas of social relevance. Dealing with these issues requires that ethical, legal, societal and economic implications are taken into account.

Accountability

Accountability refers to the requirement for the system to be able to explain and justify its decisions to users and other relevant actors. To ensure accountability, decisions should be derivable from, and explained by, the decision-making mechanisms used. It also requires that the moral values and societal norms that inform the purpose of the system, as well as their operational interpretations, have been elicited in an open way involving all stakeholders.
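
To illustrate this requirement, the sketch below shows one possible way a system could record each decision together with the inputs it was derived from, the mechanism used and a human-readable justification. The class and field names are hypothetical, not part of the framework.

```python
# Hypothetical sketch (not part of the framework): one way to make decisions
# derivable and explainable, by recording each decision together with its
# inputs, the mechanism that produced it, and a justification for users.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    inputs: dict          # the data the decision was derived from
    mechanism: str        # identifier of the decision-making mechanism (model, rule set)
    outcome: str          # the decision itself
    justification: str    # human-readable explanation for users and other relevant actors
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: a screening system logs why an application was referred to a person.
record = DecisionRecord(
    inputs={"income": 32000, "requested_amount": 250000},
    mechanism="affordability-rule-v2",
    outcome="refer_to_human_reviewer",
    justification="Requested amount exceeds the affordability threshold for the stated income.",
)
print(record.justification)
```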

Responsibility

Responsibility refers to the role of people themselves in their relation to AI systems. As the chain of responsibility grows, means are needed to link the AI systems’ decisions to their input data and to the actions of stakeholders involved in the system’s decision. Responsibility is not just about making rules to govern intelligent machines; it is about the whole socio-technical system in which the system operates, and which encompasses people, machines and institutions.

Transparency

Transparency indicates the capability to describe, inspect and reproduce the mechanisms through which AI systems make decisions and learn to adapt to their environment, as well as the provenance and dynamics of the data that the system uses and creates. Moreover, trust in the system will improve if we can ensure openness in everything related to it. As such, transparency is also about being explicit and open about choices and decisions concerning data sources, development processes and stakeholders. Stakeholders should also be involved in decisions about all models that use human data, affect human beings or can have other morally significant impact.

Design for Values

Design for Values is a methodological design approach that aims at making moral values part of technological design, research and development (van den Hoven, 2005). Values are typically high-level abstract concepts that are difficult to incorporate in software design. In order to design systems that are able to deal with moral values, values need to be translated into concrete operational rules. However, given their abstract nature, values can be interpreted in different ways. The Design for Values process ensures that the link between values and their concrete interpretations in the design and engineering of systems can be traced and evaluated.
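
As a concrete illustration, the sketch below records the chain from an abstract value to the norms that interpret it and to the requirements that implement those norms, so that the interpretation choices remain traceable and open to evaluation. The example value and all names are illustrative assumptions, not taken from the Design for Values literature.

```python
# Hypothetical sketch: a minimal traceability structure linking an abstract
# value to the norms that interpret it and to the concrete requirements that
# implement those norms. All names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Requirement:
    description: str      # concrete, implementable design requirement

@dataclass
class Norm:
    statement: str                    # operational interpretation of the value
    requirements: list[Requirement]   # how the interpretation is realised in the design
    rationale: str                    # why this interpretation was chosen, and by whom

@dataclass
class Value:
    name: str             # abstract value, e.g. "privacy"
    norms: list[Norm]     # the interpretations that were elicited with stakeholders

privacy = Value(
    name="privacy",
    norms=[
        Norm(
            statement="Personal data is processed only for the stated purpose.",
            requirements=[Requirement("A purpose field is mandatory for every data-access request.")],
            rationale="Interpretation agreed with stakeholders; alternatives are documented.",
        )
    ],
)
```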

Towards Responsible AI Development Life-Cycle

By structuring the design of an AI system in terms of high-level motives and roles, specific goals, and concrete plans and actions, it becomes possible to align with both the Design for Values and Software Engineering approaches. As such, at the top level, values and non-functional requirements will inform the specification of the motives and roles of the system by making clear what the intention and scope of the system are.

Norms will provide the (ethical-societal) boundaries for the goals of the system, which at the same time need to guarantee that functional requirements are met. Finally, the implementation of plans and actions follows a concrete platform/language instantiation of the functionalities identified by the Design for Values process, while ensuring that operational and physical domain requirements are satisfied.

These decisions are grounded on both domain characteristics and the values of the designers and other stakeholders involved in the development process.
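
The sketch below mirrors these three levels as a simple data structure, annotating each level with what informs or constrains it. It is only one possible rendering of the life-cycle described here, and all names are assumptions.

```python
# Hypothetical sketch: the three levels described above, each annotated with
# what informs or constrains it, so design decisions remain traceable to
# values, norms and requirements. All field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MotiveLevel:
    motives_and_roles: list[str]     # intention and scope of the system
    informed_by_values: list[str]    # values and non-functional requirements

@dataclass
class GoalLevel:
    goals: list[str]                 # what the system should achieve
    bounded_by_norms: list[str]      # ethical-societal boundaries on those goals
    functional_requirements: list[str]

@dataclass
class PlanLevel:
    plans_and_actions: list[str]     # concrete platform/language instantiation
    domain_requirements: list[str]   # operational and physical domain constraints

@dataclass
class SystemDesign:
    motive_level: MotiveLevel
    goal_level: GoalLevel
    plan_level: PlanLevel
```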

Legal Aspects of Responsible AI

Every AI system should operate within an ethical and social framework in understandable, verifiable and justifiable ways. Such systems must in any case operate within the bounds of the rule of law, incorporating fundamental rights protection into the AI infrastructure.

That is, given that AI systems are artefacts built for a given purpose, it is necessary to demand that these artefacts stay within the realm of what is both legal and ethical, and that they do not learn other options by themselves.

In other words, AI systems should be seen as incorporating soft ethics, i.e. ethics as post-compliant to an existing regulatory system, used to decide what ought and ought not to be done over and above the existing regulation (Floridi, 2018).