A Project coordinated by IIIA.
Principal investigator:
Collaborating organisations:
Instituto de Filosofía (IFS-CSIC)
Universitat Politècnica de València (UPV)
Universidad Rey Juan Carlos (URJC)
Funding entity:
Funding call:
Project #:
Total funding amount:
IIIA funding amount:
Duration:
Extension date:
Artificial Intelligence (AI) and digital technologies have become such an integral part of our lives that interacting with semi-autonomous agents is already a reality. One of the main challenges of today's digital transition, however, is putting human needs and values at the centre of the digital revolution. Ensuring AI is trustworthy and reliable (that it fulfils human needs and respects human values) is the core focus of Value-Awareness Engineering (VAE). VAE addresses the digital transition topic of the Agencia Estatal de Investigación call (Proyectos Estratégicos Orientados a la Transición Ecológica y a la Transición Digital 2021) by aiming to develop innovative and groundbreaking AI research for engineering value-aware socio-technical systems, in response to the specific request of "putting people and their digital rights at the centre of the [digital transition] process".
VAE builds on agreement technologies to develop cutting-edge value-aware AI systems; that is, software systems that reason about human values in order to align with them. We argue that just as values guide our morality, values can guide the morality of software agents and systems, making machine morality a reality. However, while humans are equipped for and accustomed to moral reasoning, AI has not yet reached that level of maturity. Unfortunately, current AI systems are not value-aware. Recently, OpenAI's GPT-3 encouraged a person to kill himself, thus violating the moral obligation not to promote harm, and Amazon's Alexa advised a 10-year-old girl to touch a penny to a live plug socket despite the risk. We argue that AI systems should have moral capabilities, including the capacity to be self-aware, i.e. to not only react but also justify their behaviour in moral terms.
Drawing insights from moral philosophy, the discipline concerned with what is morally good and bad and morally right and wrong, this project will advance the design of AI systems in accordance with European fundamental rights and principles. As outlined in the EU's proposal for a regulation laying down harmonised rules on Artificial Intelligence, building trustworthy AI is a strategic priority. By enabling the engineering of values within AI systems, VAE will increase AI trustworthiness. Software systems that can explain how their behaviour is ethically grounded will be perceived as more acceptable by people and will promote the development of the next generation of AI-assisted services and products.
Main goal. Engineering value-aware AI systems that understand human values, abide by them, and explain their own behaviour, or understand the behaviour of others, in terms of those values.
To achieve its goal, VAE is designed around a number of objectives, which we present below.
Objective 1. Understanding value-awareness in AI and developing the foundational framework for modelling value systems that enable value awareness in AI. This objective develops a computational framework for value specification that takes into account considerations from "value theory" in philosophy and social science. Such a formal approach to values is necessary for building mechanisms that reason about values.
Objective 2. Developing tools that enable value-awareness in human and software agents. This allows these agents to be aware of their values and the values of others, and to make decisions that ensure their behaviour is aligned with individual and/or collective values.
Objective 3. Developing tools that enable value-awareness in systems of interacting human and software agents, i.e. socio-technical systems. This enables self-governance that maximises the value alignment of system behaviour.
Objective 4. Validating VAE's hypothesis and demonstrating the adequacy of the developed VAE tools in a variety of real-life application domains, ranging from agricultural cooperatives and mutual aid networks to conflict resolution. This strengthens VAE's impact through software demonstrators that address real-world problems of social and economic relevance.
Three institutions will join forces, bringing their specific knowledge and experience to three complementary subprojects that together address the aforementioned major challenges. The first develops foundational models, following an interdisciplinary approach that integrates AI and ethics, as well as computational models of value-driven self-governance of normative systems. The second focuses on value-based normative reasoning and agreement processes at the agent level, while the third concerns value-aligned coordination, adaptation, and explainability mainly at the multiagent (i.e. system) level.
While each subproject is led by a different partner, all three research teams contribute to each subproject according to their expertise.
The subprojects and their PIs are presented next.
Subproject 1: Foundations for value awareness
Project #: TED2021-131295B-C31
PI 1: Nardine Osman (IIIA-CSIC)
PI 2: Sara Degli-Esposti (IFS-CSIC)
Subproject 2: Value-aware decision making
Project #: TED2021-131295B-C32
PI 1: Vicent Botti Navarro (UPV)
PI 2: Natalia Criado Pacheco (UPV)
Subproject 3: Value-aware systems
Project #: TED2021-131295B-C33
PI 1: Sascha Ossowski (URJC)
PI 2: Holger Billhardt (URJC)