VALAWAI: Value-Aware Artificial Intelligence

A project coordinated by IIIA.

Web page:

Principal investigator: 

Collaborating organisations:

FUNDACIO INSTITUT HOSPITAL DEL MAR D INVESTIGACIONS MEDIQUES

FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA

UNIVERSITEIT GENT

SONY EUROPE BV

STUDIO STELLUTI

Funding entity:

European Commission

Funding call:

HORIZON-EIC-2021-PATHFINDERCHALLENGES-01-01 - Awareness Inside

Funding call URL:

Project #:

101070930

Total funding amount:

3.926.807,26€

IIIA funding amount:

975.669,09€

Duration:

01/Oct/2022 - 30/Sep/2026

Extension date:

By Value-Aware AI we mean AI that includes a component performing the same function as human moral consciousness: the capacity to acquire and maintain a value system, to use that value system to decide whether particular actions are morally acceptable, and to be aware of the value systems of its users so as to understand the intent and motivation behind their actions and engage with them appropriately.
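
The three capacities above can be read as the interface of a single component. The sketch below is a minimal illustration of that reading, not the VALAWAI toolbox; all names (ValueSystem, MoralConsciousness) and the numeric encoding of values are hypothetical, chosen only for the example.

```python
# Illustrative sketch only: a hypothetical interface exposing the three
# capacities named above. Not VALAWAI code.
from dataclasses import dataclass, field


@dataclass
class ValueSystem:
    """A value system as a weighting over named values (hypothetical encoding)."""
    weights: dict[str, float] = field(default_factory=dict)

    def alignment(self, action_effects: dict[str, float]) -> float:
        # Agreement between an action's effect on each value and the weights.
        return sum(self.weights.get(v, 0.0) * e for v, e in action_effects.items())


class MoralConsciousness:
    # Capacity 1: acquire and maintain a value system.
    def __init__(self) -> None:
        self.values = ValueSystem()

    def update_values(self, observed_weights: dict[str, float]) -> None:
        self.values.weights.update(observed_weights)

    # Capacity 2: decide whether an action is morally acceptable.
    def is_acceptable(self, action_effects: dict[str, float], threshold: float = 0.0) -> bool:
        return self.values.alignment(action_effects) >= threshold

    # Capacity 3: estimate a user's value system from their actions,
    # in order to interpret their intent and engage with them appropriately.
    def infer_user_values(self, user_actions: list[dict[str, float]]) -> ValueSystem:
        totals: dict[str, float] = {}
        for effects in user_actions:
            for value, effect in effects.items():
                totals[value] = totals.get(value, 0.0) + effect
        n = max(len(user_actions), 1)
        return ValueSystem({v: t / n for v, t in totals.items()})
```

Under this toy encoding, an agent holding weights {"privacy": 1.0} would judge an action with effects {"privacy": -0.5} unacceptable at a zero threshold, since the alignment score is negative.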

The VALAWAI project will develop a toolbox for building Value-Aware AI that rests on two pillars, both grounded in science: an architecture for consciousness inspired by the Global Neuronal Workspace model, which was developed on the basis of neurophysiological evidence and psychological data, and a foundational framework for moral decision-making based on psychology, social cognition and social brain science.
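
As a rough intuition for the first pillar: in a Global Neuronal Workspace-style architecture, specialised modules compete for access to a shared workspace, and the content that wins is broadcast back to all modules on the next cycle. The toy loop below illustrates only that compete-and-broadcast cycle; it is not the VALAWAI architecture, and every name in it is invented for the example.

```python
# Toy compete-and-broadcast cycle in the spirit of a Global Neuronal
# Workspace: modules propose content with a salience score, the most
# salient proposal enters the workspace, and the winner is broadcast
# to all modules on the next step. Purely illustrative.
from typing import Callable, Optional

Proposal = tuple[float, str]                 # (salience, content)
Module = Callable[[Optional[str]], Optional[Proposal]]


def workspace_cycle(modules: list[Module], steps: int = 3) -> list[str]:
    broadcast: Optional[str] = None          # content currently in the workspace
    history: list[str] = []
    for _ in range(steps):
        # Each module sees the last broadcast and may propose new content.
        proposals = [p for m in modules if (p := m(broadcast)) is not None]
        if not proposals:
            break
        # The most salient proposal gains access to the workspace...
        _, broadcast = max(proposals, key=lambda p: p[0])
        # ...and becomes the broadcast every module receives next.
        history.append(broadcast)
    return history


# Two toy modules: perception and a value monitor that reacts to it.
def perception(_: Optional[str]) -> Proposal:
    return (0.6, "user asks to share a post")


def value_monitor(last: Optional[str]) -> Optional[Proposal]:
    if last is not None and "share a post" in last:
        return (0.9, "check the post against the user's value system")
    return None


print(workspace_cycle([perception, value_monitor]))
```

In this toy run, the perceptual content wins the first cycle, the value monitor reacts to the broadcast and wins the second, and perception wins again on the third; the point is only the shared broadcast mediating between otherwise independent modules.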

The project will demonstrate the utility of Value-Aware AI in three application areas where a moral dimension urgently needs to be included: (i) social media, which exhibits many negative side effects such as disinformation, polarisation and the instigation of asocial and immoral behaviour; (ii) social robots, which are designed to be helpful or to influence human behaviour in positive ways but could also enable manipulation, deceit and harmful behaviour; and (iii) medical protocols, where VALAWAI aims to ensure that medical decision-making is value-aligned.

The project contributes to the general goal of making EU-based AI more competitive by making it more reliable, robust, ethically guided, explainable and hence trustworthy. It does not make new proposals for guidelines and regulations (an area where there is already considerable effort) but advances the state of the art in core AI technology so that ethics is embedded inside applications, grounding them in universal, European and personal values.

2024
Roger Xavier Lera Leri,  Enrico Liscio,  Filippo Bistaffa,  Catholijn M. Jonker,  Maite Lopez-Sanchez,  Pradeep K. Murukannaiah,  Juan A. Rodríguez-Aguilar,  & Francisco Salas-Molina (2024). Aggregating value systems for decision support. Knowledge-Based Systems, 287, 111453. https://doi.org/10.1016/j.knosys.2024.111453. [BibTeX]  [PDF]
Manel Rodríguez Soto,  Juan A. Rodríguez-Aguilar,  & Maite López-Sánchez (2024). An Analytical Study of Utility Functions in Multi-Objective Reinforcement Learning. The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024). [BibTeX]  [PDF]
Thiago Freitas Santos,  Nardine Osman,  & Marco Schorlemmer (2024). Can Interpretability Layouts Influence Human Perception of Offensive Sentences?. Davide Calvaresi, Amro Najjar, Andrea Omicini, Reyhan Aydogan, Rachele Carli, Giovanni Ciatto, Joris Hulstijn, & Kary Främling (Eds.), Explainable and Transparent AI and Multi-Agent Systems - 6th International Workshop, EXTRAAMAS 2024, Auckland, New Zealand, May 6-10, 2024, Revised Selected Papers (pp. 39--57). Springer. https://doi.org/10.1007/978-3-031-70074-3_3. [BibTeX]
Errikos Streviniotis,  Athina Georgara,  Filippo Bistaffa,  & Georgios Chalkiadakis (2024). FairPlay: A Multi-Sided Fair Dynamic Pricing Policy for Hotels. Proceedings of the AAAI Conference on Artificial Intelligence, 38, 22368-22376. https://doi.org/10.1609/aaai.v38i20.30243. [BibTeX]  [PDF]
Thiago Freitas Santos,  Nardine Osman,  & Marco Schorlemmer (2024). Is This a Violation? Learning and Understanding Norm Violations in Online Communities. Artificial Intelligence, 327. https://doi.org/10.1016/j.artint.2023.104058. [BibTeX]
Manel Rodríguez Soto,  Nardine Osman,  Carles Sierra,  Nieves Montes,  Jordi Martínez Roldán,  Paula Sánchez Veja,  Rocío Cintas García,  Cristina Farriols Danes,  Montserrat García Retortillo,  & Sílvia Mínguez Maso (2024). User Study Design for Identifying the Semantics of Bioethical Principles. (pp. 16). Second International Workshop on Value Engineering in Artificial Intelligence (VALE2024) at the European Conference on Artificial Intelligence (ECAI), Santiago de Compostela. [BibTeX]  [PDF]
Marc Serramia Amoros,  Maite Lopez-Sanchez,  Juan A. Rodríguez-Aguilar,  & Stefano Moretti (2024). Value alignment in participatory budgeting. Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems. [BibTeX]  [PDF]
2023
Jordi Ganzer-Ripoll,  Natalia Criado,  Maite Lopez-Sanchez,  Simon Parsons,  & Juan A. Rodríguez-Aguilar (2023). A model to support collective reasoning: Formalization, analysis and computational assessment. Journal of Artificial Intelligence Research. [BibTeX]  [PDF]
Thiago Freitas Santos,  Nardine Osman,  & Marco Schorlemmer (2023). A multi-scenario approach to continuously learn and understand norm violations. Autonomous Agents and Multi-Agent Systems, 37, 38. https://doi.org/10.1007/s10458-023-09619-4. [BibTeX]
Marc Serramia,  Maite López-Sánchez,  Stefano Moretti,  & Juan A. Rodríguez-Aguilar (2023). Building rankings encompassing multiple criteria to support qualitative decision-making. Information Sciences, 631, 288-304. [BibTeX]  [PDF]
Thiago Freitas Santos,  Stephen Cranefield,  Bastin Tony Roy Savarimuthu,  Nardine Osman,  & Marco Schorlemmer (2023). Cross-community Adapter Learning (CAL) to Understand the Evolving Meanings of Norm Violation. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023, 19th-25th August 2023, Macao, SAR, China (pp. 109-117). ijcai.org. https://doi.org/10.24963/IJCAI.2023/13. [BibTeX]
Marc Serramia,  Manel Rodriguez-Soto,  Maite Lopez-Sanchez,  Juan A. Rodríguez-Aguilar,  Filippo Bistaffa,  Paula Boddington,  Michael Wooldridge,  & Carlos Ansotegui (2023). Encoding Ethics to Compute Value-Aligned Norms. Minds and Machines, 1-30. [BibTeX]  [PDF]
Enrico Liscio,  Roger Lera-Leri,  Filippo Bistaffa,  Roel I. J. Dobbe,  Catholijn M. Jonker,  Maite López-Sánchez,  Juan A. Rodríguez-Aguilar,  & Pradeep K. Murukannaiah (2023). Inferring Values via Hybrid Intelligence. Proceedings of the 2nd International Conference on Hybrid Human Artificial Intelligence (HHAI) (in press). [BibTeX]  [PDF]
Manel Rodríguez Soto,  Maite López-Sánchez,  & Juan A. Rodríguez-Aguilar (2023). Multi-objective reinforcement learning for designing ethical multi-agent environments. Neural Computing and Applications. https://doi.org/10.1007/s00521-023-08898-y. [BibTeX]  [PDF]
Manel Rodríguez Soto,  Roxana Radulescu,  Juan A. Rodríguez-Aguilar,  Maite López-Sánchez,  & Ann Nowé (2023). Multi-objective reinforcement learning for guaranteeing alignment with multiple values. Adaptive and Learning Agents Workshop (AAMAS 2023). [BibTeX]  [PDF]
Athina Georgara,  Raman Kazhamiakin,  Ornella Mich,  Alessio Palmero Approsio,  Jean-Christoph Pazzaglia,  Juan A. Rodríguez-Aguilar,  & Carles Sierra (2023). The AI4Citizen pilot: Pipelining AI-based technologies to support school-work alternation programmes. Applied Intelligence. https://doi.org/10.1007/s10489-023-04758-3. [BibTeX]  [PDF]
Enrico Liscio,  Roger Xavier Lera Leri,  Filippo Bistaffa,  Roel I. J. Dobbe,  Catholijn M. Jonker,  Maite López-Sánchez,  Juan A. Rodríguez-Aguilar,  & Pradeep K. Murukannaiah (2023). Value Inference in Sociotechnical Systems. Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS) (pp. 1774-1780). [BibTeX]  [PDF]
2022
Pol Vidal Lamolla,  Alexandra Popartan,  Toni Perello-Moragues,  Pablo Noriega,  David Sauri,  Manel Poch,  & Maria Molinos-Senante (2022). Agent-based modelling to simulate the socio-economic effects of implementing time-of-use tariffs for domestic water. Sustainable Cities and Society, 86, 104118. https://doi.org/10.1016/j.scs.2022.104118. [BibTeX]  [PDF]
Pablo Noriega,  Harko Verhagen,  Julian Padget,  & Mark d'Inverno (2022). Design Heuristics for Ethical Online Institutions. Nirav Ajmeri, Andreasa Morris Martin, & Bastin Tony Roy Savarimuthu (Eds.), Coordination, Organizations, Institutions, Norms, and Ethics for Governance of Multi-Agent Systems XV (pp. 213--230). Springer International Publishing. [BibTeX]  [PDF]
Pompeu Casanovas,  & Pablo Noriega (2022). Dilemmas in Legal Governance. JOAL, 10, 1-20. https://ojs.law.cornell.edu/index.php/joal/article/view/122. [BibTeX]  [PDF]
Pablo Noriega,  & Pompeu Casanovas (2022). La Gobernanza de los Sistemas Artificiales Inteligentes. Olivia Velarde Hermida, & Manuel Martin Serrano (Eds.), Mirando hacia el futuro. Cambios sociohistóricos vinculados a la virtualización (pp. 115-143). Centro de Investigaciones Sociológicas. [BibTeX]  [PDF]
Pablo Noriega
Ad Honorem Scientist

Nardine Osman
Tenured Scientist
Phone Ext. 431826

Juan A. Rodríguez-Aguilar
Research Professor
Phone Ext. 431861

Carles Sierra
Research Professor
Phone Ext. 431801