VAE: Value-awareness engineering

A project coordinated by IIIA.

Principal investigator: Nardine Osman (IIIA-CSIC)

Collaborating organisations:

Instituto de Filosofía (IFS-CSIC)

Universitat Politècnica de València (UPV)

Universidad Rey Juan Carlos (URJC)

Funding entity:

MCIN/AEI/10.13039/501100011033 and the European Union Next Generation EU (PRTR)

Funding call:

2021 Call for Strategic projects aimed at the ecological and digital transition

Project #:

TED2021-131295B-C31

Total funding amount:

1.967.650,00€

IIIA funding amount:

667.460,00€

Duration:

01/Dec/2022 to 30/Nov/2024

Artificial Intelligence (AI) and digital technologies have become such an integral part of our lives that interacting with semi-autonomous agents is already a reality. One of the main challenges of today’s digital transition, however, is putting human needs and values at the centre of the digital revolution. Ensuring AI is trustworthy and reliable (that it fulfils human needs and respects human values) is the core focus of Value-Awareness Engineering (VAE). VAE addresses the digital transition topic of the Agencia Estatal de Investigación call (Proyectos Estratégicos Orientados a la Transición Ecológica y a la Transición Digital 2021) by aiming to develop innovative and groundbreaking AI research for engineering value-aware socio-technical systems, in response to the specific request of “putting people and their digital rights at the centre of the [digital transition] process”.

VAE builds on agreement technologies to develop cutting-edge value-aware AI systems; that is, software systems that reason about human values in order to align with them. We argue that just as values guide our morality, values can guide the morality of software agents and systems, making machine morality a reality. However, while humans are equipped for and accustomed to moral thinking, AI has not yet reached that state of maturity: current AI systems are not value-aware. Recently, OpenAI’s GPT-3 encouraged a person to kill himself, thus violating the moral obligation not to promote harm, and Amazon’s Alexa advised a 10-year-old girl to touch a penny to a live plug socket despite the risk. We argue that AI systems should have moral capabilities, including the capacity to be self-aware, i.e. to not only react but also justify their behaviour in moral terms.

Drawing insights from moral philosophy, the discipline concerned with what is morally good and bad and morally right and wrong, this project will advance the design of AI systems in accordance with European fundamental rights and principles. As outlined in the EU’s proposal for a regulation laying down harmonised rules on Artificial Intelligence, building trustworthy AI is a strategic priority. By enabling the engineering of values within AI systems, VAE will increase AI trustworthiness. Software systems that can explain how their behaviour is ethically grounded will be perceived as more acceptable by people and will promote the development of next-generation AI-assisted services and products.

Main goal. Engineering value-aware AI systems that understand human values, abide by them, and explain their own behaviour or understand the behaviour of others in terms of those values.

To achieve its goal, VAE is designed around a number of objectives, which we present below.

Objective 1. Understanding value-awareness in AI and developing the foundational framework for modelling value systems that enable value-awareness in AI. This objective develops a computational framework for value specification that takes into account considerations from “value theory” in philosophy and the social sciences. Such a formal approach to values is necessary for building mechanisms that reason about values.
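
To make the notion of a value system concrete, here is a minimal illustrative sketch in Python. The names and the ranking-based structure are our own assumptions for exposition, not the project’s actual formalism, which the foundational framework of Objective 1 is meant to define.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Value:
        """A named human value, e.g. 'privacy' or 'efficiency'."""
        name: str

    @dataclass
    class ValueSystem:
        """A set of values plus a preference ranking over them.

        ranking[v] is the rank of value v (0 = most preferred), so a
        value system encodes which values take priority when they conflict.
        """
        ranking: dict = field(default_factory=dict)  # Value -> rank (int)

        def prefers(self, a: Value, b: Value) -> bool:
            """True if this value system ranks a strictly above b."""
            return self.ranking[a] < self.ranking[b]

    # Example: a value system that prioritises privacy over efficiency.
    privacy, efficiency = Value("privacy"), Value("efficiency")
    vs = ValueSystem(ranking={privacy: 0, efficiency: 1})
    assert vs.prefers(privacy, efficiency)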

Objective 2. Developing tools that enable value-awareness in human and software agents. This allows these agents to be aware of their values and the values of others, and to make decisions that ensure their behaviour is aligned with individual and/or collective values.
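
As a hypothetical illustration of such value-aware decision making, the sketch below scores candidate actions by how strongly they promote or demote each value, weighted by the agent’s value priorities. The action names, effect scores in [-1, 1], and weights are invented for the example; the project’s actual decision-making tools may differ substantially.

    def alignment(effects, weights):
        """Weighted sum of an action's effects on the agent's values."""
        return sum(weights[v] * e for v, e in effects.items())

    def choose(actions, weights):
        """Pick the action whose effects best align with the value weights."""
        return max(actions, key=lambda a: alignment(actions[a], weights))

    # Example: weights derived from a value ranking (higher = more preferred).
    weights = {"privacy": 2.0, "efficiency": 1.0}
    actions = {
        "share_data": {"privacy": -0.8, "efficiency": +0.9},
        "anonymise":  {"privacy": +0.6, "efficiency": +0.2},
    }
    print(choose(actions, weights))  # -> anonymise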

Objective 3. Developing tools that enable value-awareness in systems of interacting human and software agents, i.e. socio-technical systems. This enables self-governance mechanisms that maximise the value alignment of system behaviour.
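
A toy sketch of what such value-aligned self-governance could look like: the system picks, from a set of candidate norms, the one whose induced behaviour maximises aggregate alignment with the value systems of all participants. The norm names and effect scores are assumptions made for illustration only, not the project’s governance mechanism.

    def collective_alignment(norm_effects, agent_weights):
        """Sum each agent's alignment with the behaviour a norm induces."""
        return sum(
            sum(w.get(v, 0.0) * e for v, e in norm_effects.items())
            for w in agent_weights
        )

    def select_norm(norms, agent_weights):
        """Choose the candidate norm maximising collective value alignment."""
        return max(norms, key=lambda n: collective_alignment(norms[n], agent_weights))

    # Two agents with opposite value priorities.
    agents = [{"privacy": 2.0, "efficiency": 1.0},
              {"privacy": 1.0, "efficiency": 2.0}]
    norms = {
        "opt_in_sharing":    {"privacy": +0.7, "efficiency": -0.1},
        "mandatory_sharing": {"privacy": -0.9, "efficiency": +0.8},
    }
    print(select_norm(norms, agents))  # -> opt_in_sharing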

Objective 4. Validating VAE’s hypothesis and demonstrating the adequacy of the developed VAE tools in a variety of real-life application domains, from agricultural cooperatives and mutual aid networks to conflict resolution. This strengthens VAE’s impact through software demonstrators that address real-world problems of social and economic relevance.

Three institutions will join forces, each bringing its specific knowledge and experience through three complementary subprojects, so as to address the aforementioned challenges. The first develops foundational models, following an interdisciplinary approach that integrates AI and ethics, as well as computational models of value-driven self-governance of normative systems. The second focuses on value-based normative reasoning and agreement processes at the agent level, while the third concerns value-aligned coordination, adaptation, and explainability, mainly at the multiagent (i.e. system) level.

While each subproject is led by a different partner, all three research teams contribute to each subproject according to their expertise.

The subprojects and their PIs are presented next.

Subproject 1: Foundations for value awareness
Project #: TED2021-131295B-C31
PI 1: Nardine Osman (IIIA-CSIC)
PI 2: Sara Degli-Esposti (IFS-CSIC)

Subproject 2: Value-aware decision making
Project #: TED2021-131295B-C32
PI 1: Vicent Botti Navarro (UPV)
PI 2: Natalia Criado Pacheco (UPV)

Subproject 3: Value-aware systems
Project #: TED2021-131295B-C33
PI 1: Sascha Ossowski (URJC)
PI 2: Holger Billhardt (URJC)

Publications

2024
Roger Xavier Lera-Leri, Enrico Liscio, Filippo Bistaffa, Catholijn M. Jonker, Maite López-Sánchez, Pradeep K. Murukannaiah, Juan A. Rodríguez-Aguilar, & Francisco Salas-Molina (2024). Aggregating value systems for decision support. Knowledge-Based Systems, 287, 111453. https://doi.org/10.1016/j.knosys.2024.111453.
Thiago Freitas dos Santos, Nardine Osman, & Marco Schorlemmer (2024). Is This a Violation? Learning and Understanding Norm Violations in Online Communities. Artificial Intelligence, 327, 104058. https://doi.org/10.1016/j.artint.2023.104058.
Marc Serramia, Maite López-Sánchez, Juan A. Rodríguez-Aguilar, & Stefano Moretti (2024). Value alignment in participatory budgeting. Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
2023
Jordi Ganzer-Ripoll, Natalia Criado, Maite López-Sánchez, Simon Parsons, & Juan A. Rodríguez-Aguilar (2023). A model to support collective reasoning: Formalization, analysis and computational assessment. Journal of Artificial Intelligence Research.
Thiago Freitas dos Santos, Nardine Osman, & Marco Schorlemmer (2023). A multi-scenario approach to continuously learn and understand norm violations. Autonomous Agents and Multi-Agent Systems, 37, 38. https://doi.org/10.1007/s10458-023-09619-4.
Dave de Jonge (2023). A New Bargaining Solution for Finite Offer Spaces. Applied Intelligence, 53, 28310-28332. https://doi.org/10.1007/s10489-023-05009-1.
Marc Serramia, Maite López-Sánchez, Stefano Moretti, & Juan A. Rodríguez-Aguilar (2023). Building rankings encompassing multiple criteria to support qualitative decision-making. Information Sciences, 631, 288-304.
Thiago Freitas dos Santos, Stephen Cranefield, Bastin Tony Roy Savarimuthu, Nardine Osman, & Marco Schorlemmer (2023). Cross-community Adapter Learning (CAL) to Understand the Evolving Meanings of Norm Violation. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023), 19th-25th August 2023, Macao, SAR, China (pp. 109-117). ijcai.org. https://doi.org/10.24963/IJCAI.2023/13.
Marc Serramia, Manel Rodríguez-Soto, Maite López-Sánchez, Juan A. Rodríguez-Aguilar, Filippo Bistaffa, Paula Boddington, Michael Wooldridge, & Carlos Ansótegui (2023). Encoding Ethics to Compute Value-Aligned Norms. Minds and Machines, 1-30.
Enrico Liscio, Roger Lera-Leri, Filippo Bistaffa, Roel I. J. Dobbe, Catholijn M. Jonker, Maite López-Sánchez, Juan A. Rodríguez-Aguilar, & Pradeep K. Murukannaiah (2023). Inferring Values via Hybrid Intelligence. Proceedings of the 2nd International Conference on Hybrid Human Artificial Intelligence (HHAI) (in press).
Athina Georgara, Raman Kazhamiakin, Ornella Mich, Alessio Palmero Aprosio, Jean-Christophe Pazzaglia, Juan A. Rodríguez-Aguilar, & Carles Sierra (2023). The AI4Citizen pilot: Pipelining AI-based technologies to support school-work alternation programmes. Applied Intelligence. https://doi.org/10.1007/s10489-023-04758-3.
Enrico Liscio, Roger Xavier Lera-Leri, Filippo Bistaffa, Roel I. J. Dobbe, Catholijn M. Jonker, Maite López-Sánchez, Juan A. Rodríguez-Aguilar, & Pradeep K. Murukannaiah (2023). Value Inference in Sociotechnical Systems. Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS) (pp. 1774-1780).
2022
Pol Alexandra Popartan, Toni Perello-Moragues, Pablo Noriega, David Sauri, Manel Poch, & Maria Molinos-Senante (2022). Agent-based modelling to simulate the socio-economic effects of implementing time-of-use tariffs for domestic water. Sustainable Cities and Society, 86, 104118. https://doi.org/10.1016/j.scs.2022.104118.
Pompeu Casanovas, & Pablo Noriega (2022). Dilemmas in Legal Governance. JOAL, 10, 1-20. https://ojs.law.cornell.edu/index.php/joal/article/view/122.
Pablo Noriega, & Pompeu Casanovas (2022). La Gobernanza de los Sistemas Artificiales Inteligentes [The Governance of Intelligent Artificial Systems]. In Olivia Velarde Hermida, & Manuel Martín Serrano (Eds.), Mirando hacia el futuro. Cambios sociohistóricos vinculados a la virtualización (pp. 115-143). Centro de Investigaciones Sociológicas.

Team

Dave de Jonge
Contract Researcher
Thiago Freitas Dos Santos
PhD Student
Joan Jené
Engineer
Phone Ext. 431837

Lissette Lemus del Cueto
Contract Engineer
Alejandra López de Aberasturi Gómez
PhD Student
Ramon Lopez de Mantaras
Adjunct Professor Ad Honorem
Maite López-Sánchez
Tenured University Lecturer
Phone Ext. 431821

Nieves Montes
PhD Student
Pablo Noriega
Scientist Ad Honorem
Nardine Osman
Tenured Scientist
Phone Ext. 431826

Juan A. Rodríguez-Aguilar
Research Professor
Phone Ext. 431861

Bruno Rosell
Contract Engineer
Carles Sierra
Research Professor
Phone Ext. 431801