Formalisation of value preferences for AI agents to address the multiple, simultaneous, layered contexts in which they are situated.

This project aims to develop an explicit representation and interpretation of stakeholders' socio-ethical values, conventions, and norms, and to incorporate them into the reasoning and decision-making processes of AIS components. By doing so, we can enable ethics-by-design approaches in which agents take into consideration the wider socio-ethical values that they are meant to fulfil as part of the socio-ethical systems to which they belong.

There is extensive literature on the formalisation of value systems, norms, and conventions, but most works cover only one level of abstraction at one end of the spectrum – either abstract and universal, or concrete and specific to a particular scenario – and a single interaction context. The real social world is far more complex: multiple overlapping contexts comprise extended sequences of events, and individual events can have effects across several contexts at once. There are multiple – not fully compatible – value theories, such as Self-Determination Theory or the Schwartz Value System. These are also abstract in nature and not directly applicable to an agent's behaviour. A key factor in understanding how values affect actions is that preferences over values are context-dependent: certain values are more important than others according to the circumstances. Matters become more complicated when we consider that an agent can be in more than one context at the same time, and must therefore handle different, possibly conflicting, preference orderings. Lastly, there is the mutual interaction of values and context to address: the context affects the value preferences, but value preferences also affect the context. Consequently, to formalise value preferences for agents, we need to identify and define a suitable approximation of a context for the purposes of this microproject.
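To make the conflict concrete, the situation can be sketched as follows. This is a minimal illustration only: the context names, value names, and the majority-vote resolution rule are hypothetical and not part of the architecture proposed here.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """A social context with its own preference ordering over values."""
    name: str
    preference_order: list[str] = field(default_factory=list)  # most to least important

    def rank(self, value: str) -> int:
        return self.preference_order.index(value)

def preferred(contexts, value_a, value_b):
    """Majority vote across the agent's active contexts; None signals a
    genuine conflict that the agent must resolve by other means."""
    votes = sum(1 if c.rank(value_a) < c.rank(value_b) else -1 for c in contexts)
    if votes > 0:
        return value_a
    if votes < 0:
        return value_b
    return None  # conflicting orderings, no majority

# An agent simultaneously situated in two contexts with different orderings:
work = Context("workplace", ["achievement", "conformity", "benevolence"])
family = Context("family", ["benevolence", "achievement", "conformity"])

print(preferred([work, family], "achievement", "benevolence"))  # None: conflict
print(preferred([work, family], "achievement", "conformity"))   # achievement
```

The point of the sketch is the final case: once the agent inhabits both contexts, some pairwise comparisons no longer have a well-defined answer, which is precisely the situation the proposed architecture must handle.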

In this microproject, we will develop: 1) A novel agent architecture that makes agents aware of the values, norms, and conventions of each social context relevant to their interactions, by creating a separate explicit representation for each context and then using these representations in reasoning and decision making to align the resulting behaviour with the social values of the contexts in which the agent is situated. 2) A methodology for building a multi-level, multi-contextual model that formalises the connection from abstract, universal social values to the concrete behaviour of agents in a particular social context. More concretely, we aim to create a computational representation of nested, overlapping (and possibly contradictory) social contexts, in which the set of values of a given context, the preference function over them, and their respective norms and conventions are properly derived from the abstract values of higher-level, more general (super) contexts, up to universal, abstract values.
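The derivation idea behind the second contribution can be sketched as follows, under assumed names: each context inherits value weights from its super-context and may locally refine them, so its preference ordering is derived from more abstract levels up to the universal one. The classes, weights, and value names are illustrative only, not the methodology itself.

```python
class Context:
    """A context that derives its value preferences from a super-context
    and may locally refine (override) some of them."""
    def __init__(self, name, parent=None, refinements=None):
        self.name = name
        self.parent = parent
        self.refinements = refinements or {}  # value -> importance weight

    def weight(self, value):
        if value in self.refinements:   # local refinement wins
            return self.refinements[value]
        if self.parent is not None:     # otherwise defer to the super-context
            return self.parent.weight(value)
        return 0.0                      # value unranked at every level

    def preference_order(self, values):
        return sorted(values, key=self.weight, reverse=True)

# Universal, abstract values at the top of the hierarchy...
universal = Context("universal", refinements={"benevolence": 0.8, "achievement": 0.5})
# ...refined by a more specific (sub-)context:
workplace = Context("workplace", parent=universal, refinements={"achievement": 0.9})

values = ["benevolence", "achievement"]
print(universal.preference_order(values))   # ['benevolence', 'achievement']
print(workplace.preference_order(values))   # ['achievement', 'benevolence']
```

Values the sub-context does not mention (here, benevolence) keep the weight derived from the super-context, which is the sense in which concrete preferences are "properly derived" from abstract ones.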
We will demonstrate the practical feasibility of these two contributions by developing a proof-of-concept demonstrator in the second of the two planned papers (the first will focus on the conceptualisation).

Output

The foundation for addressing the problems outlined above is a novel agent architecture that we sketch in the project description. However, this needs to be tied to an agent-based simulation platform, within which we will apply the methodology and test the agent architecture.

Evaluation of the methodology and the architecture will form the primary technical outputs and provide the core content for the publications discussed below. Our evaluation will rely upon artificial scenarios that show how the methodology delivers a value preference model reflecting stakeholder requirements, and then how that model functions in the agent architecture. Our aim in using artificial testing scenarios rather than simplified real-world scenarios is to: (i) de-risk the project by focusing on function rather than scenario modelling; (ii) concentrate on correct coverage of value-preference-determined behaviour; (iii) provide confidence in the overall capability of the model and its implementation; and (iv) facilitate reproducibility of the methodology and architecture.

We will implement our agents in a simulation of an artificial population as a proof-of-concept demonstrator.

We plan two publications: one focusing on the principles and conceptualisation, targeted at AAMAS 2024, and one on the demonstrator, targeted at ECAI 2024. The following venues are identified as back-ups: AAAI, AAMAS, IJCAI, KR, ECAI, AIES, HHAI, and workshops such as MABS and COINE.

Project Partners

  • Instituto Superior Técnico (IST), Rui Prada
  • Umeå University (UMU), Juan Carlos Nieves
  • Umeå University (UMU), Andreas Theodorou
  • Umeå University (UMU), Virginia Dignum
  • Universitat Politècnica de Catalunya (UPC), Javier Vázquez-Salceda
  • Universitat Politècnica de Catalunya (UPC), Sergio Álvarez-Napagao
  • University of Bath, Marina De Vos
  • University of Bath, Julian Padget

Primary Contact

Rui Prada, Instituto Superior Técnico (IST)