Ethics and AI

The research theme on ethics and AI aims to address a number of the ethical challenges raised by AI, and to provide the first building blocks towards the development of AI systems that adhere to our values and requirements. We propose novel computational methods and tools, underpinned by multidisciplinary research, that help humans and machines understand each other's evolving goals while strictly abiding by the values that inspire our societies.

Contact: Nardine Osman


In the past few years, many initiatives have arisen to address the issues of ethics and AI. Some were led by major technology companies, others by leading scientists in the relevant fields, from philosophy to AI. Amongst them is the IIIA-led Barcelona declaration for the proper development and usage of artificial intelligence in Europe. All these initiatives share a number of challenging ethical concerns, ranging from explainability, transparency, and accountability, to value alignment, human control, and shared benefit.

At IIIA, we aim to address some of these ethical challenges. On the one hand, we focus on the development of AI systems that adhere to our values and requirements. On the other hand, we focus on the impact of AI systems on human interactions (whether with each other or with the AI system itself), to ensure we are promoting ethical interactions and collaborations.

Our work on ethics and AI is underpinned by strong multidisciplinary research. Determining what system behaviour is deemed ethical and legal, for example, is key here. As such, the social sciences and humanities (from philosophy and law to the social and cognitive sciences) lie at the heart of any research on ethics and AI. We argue that these fields should not only provide insights for AI research, but should actively and collaboratively participate in directing and advancing it.

In what follows, we present the principles underpinning IIIA's work on ethics and AI.

AI systems must be driven by people’s needs and values, and evolve with those evolving needs and values. 

This ensures AI systems work for our shared benefit, while adhering to our values.

The governance of AI must be democratised.

This gives people control over their AI systems, so they can have a say in how their technology should or should not behave. This demands not only novel democratic, interactive initiatives, but also careful input from the fields of both ethics and law to help assess the dynamics between what people want, what is ethical, and what is legal.

A humanist perspective is necessary for ethical human-machine collaboration. 

It is important to nourish a humanist perspective that situates AI within the larger human phenomenon we call 'intelligence' as it arises from the shared interrelations and interactions of humans, computational systems, and the environment.

A sample of our research interests is presented below.

Agreement Technologies

Giving humans a say in how their technologies function implies allowing them to collectively agree on such issues. Research areas like argumentation, negotiation, trust and reputation, computational social choice, and semantic alignment all provide means to support peers (humans and software agents) in collaboratively reaching agreements. We argue that such agreements should be value-driven. As such, introducing values into the research areas of agreement technologies is of utmost interest.
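
To make the idea concrete, below is a minimal, purely illustrative sketch (in Python) of one possible value-driven agreement step: peers rate how well each candidate norm promotes a set of shared values, and the group adopts the norm with the highest aggregate support. All names, ratings, and the aggregation rule are hypothetical, not IIIA's actual methods.

    from statistics import mean

    # Each peer rates every (norm, value) pair in [-1, 1]:
    # -1 = the norm strongly demotes the value, +1 = it strongly promotes it.
    ratings = {
        "alice": {"share-data":     {"privacy": -0.6, "transparency": 0.8},
                  "anonymise-data": {"privacy":  0.9, "transparency": 0.4}},
        "bob":   {"share-data":     {"privacy": -0.4, "transparency": 0.9},
                  "anonymise-data": {"privacy":  0.7, "transparency": 0.5}},
    }

    def value_support(norm):
        """Average a norm's ratings over all peers and all values."""
        return mean(mean(peer[norm].values()) for peer in ratings.values())

    candidate_norms = ["share-data", "anonymise-data"]
    agreed = max(candidate_norms, key=value_support)
    print(agreed, round(value_support(agreed), 2))  # -> anonymise-data 0.62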

Learning & Reasoning

Learning when a system is not fulfilling its goals can help signal a need for change. Learning which norms better adhere to certain values, or which norms best suit a given community of peers, can support the decision-making process of how a system must change or evolve, as well as what direction this change should take. We envision learning and reasoning mechanisms that support humans' decision processes.
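
As a toy illustration of the kind of signal such learning could provide, the sketch below tracks, for each norm, how often interactions governed by it ended in outcomes judged to be value-aligned, and flags norms whose estimate falls below a threshold as candidates for revision. The data, labels, and threshold are all invented for illustration.

    from collections import defaultdict

    # (norm in force, was the resulting outcome judged value-aligned?)
    observations = [
        ("share-data", False), ("share-data", True), ("share-data", False),
        ("anonymise-data", True), ("anonymise-data", True),
    ]

    counts = defaultdict(lambda: [0, 0])  # norm -> [aligned, total]
    for norm, aligned in observations:
        counts[norm][0] += int(aligned)
        counts[norm][1] += 1

    THRESHOLD = 0.5  # hypothetical acceptability level
    for norm, (aligned, total) in counts.items():
        rate = aligned / total
        if rate < THRESHOLD:
            print(f"norm '{norm}' was value-aligned in {rate:.0%} of cases: consider revising it")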

Natural Language Processing

Having humans collectively agree on the needs and values that drive and govern their technologies implies: 1) having humans discuss and argue over these needs and values, and 2) having the system engage in the humans' agreement process and understand the final decisions. As we can expect humans neither to be proficient in the formal languages used for specifying needs and values nor to hold their discussions in a formal language, natural language processing becomes key for humans and machines to understand each other.
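
The sketch below is a deliberately simplistic stand-in for this NLP step (a pattern-based mapping from one shape of English sentence to a formal norm template); real systems would rely on far richer language understanding. The norm syntax and pattern are hypothetical.

    import re

    # Maps sentences like "Members should not X without Y" to a formal norm.
    PATTERN = re.compile(r"members should not (?P<action>.+?) without (?P<condition>.+)", re.I)

    def to_norm(sentence):
        match = PATTERN.search(sentence)
        if match is None:
            return None  # not understood: ask the user to rephrase
        action = match["action"].strip(" .")
        condition = match["condition"].strip(" .")
        return f"PROHIBITED({action}) IF NOT({condition})"

    print(to_norm("Members should not share personal data without explicit consent."))
    # -> PROHIBITED(share personal data) IF NOT(explicit consent)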

Norms & Normative Systems

Behaviour is what ensures needs are fulfilled and values are adhered to, and norms are what govern behaviour. Normative systems have traditionally been used in multiagent systems to mediate behaviour. We follow in those footsteps and propose using normative systems as a means of mediating behaviour. We are especially interested in the relation between values and norms, and in ensuring (and sometimes formally verifying) that a normative system adheres to predefined values.
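
A hedged sketch of the verification idea follows: each value is encoded as a predicate over individual norms, and the normative system "adheres" to the values if no norm violates any predicate. The norm representation and predicates are toy constructions for illustration only, not a real verifier.

    # A toy normative system: each norm is a small record.
    normative_system = [
        {"action": "share-location", "modality": "obliged",   "requires_consent": False},
        {"action": "delete-account", "modality": "permitted", "requires_consent": True},
    ]

    # Each value is a predicate every norm must satisfy.
    values = {
        "privacy":  lambda n: n["modality"] != "obliged" or n["requires_consent"],
        "autonomy": lambda n: n["action"] != "delete-account" or n["modality"] == "permitted",
    }

    violations = [(norm["action"], value)
                  for norm in normative_system
                  for value, holds in values.items() if not holds(norm)]
    print(violations or "the normative system adheres to all values")
    # -> [('share-location', 'privacy')]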

Ethics

Defining values formally is a challenge in its own right. Embedding values in AI systems and verifying a system's adherence to a set of values is an even bigger challenge. All of this requires careful, close-knit collaboration with the field of ethics. Furthermore, while the need for human control has been argued to be one of the principles of ethical AI, we cannot ignore the fact that humans may indeed reach "wrongful agreements", that is, unethical or illegal agreements. A careful analysis is therefore required to address such issues. Understanding the dynamics between what is required by humans, what is deemed ethical, and what is legal may be key for the development of "ethical" AI systems.

Legal Studies

Legal norms are used in AI and law to support reasoning about legal statements, and legal systems can be implemented as normative systems. As such, collaboration with the field of legal studies is imperative. Furthermore, assessing the consequences and implications of giving the machine executive and judiciary powers (especially where the system automatically adapts and evolves) requires careful, close-knit collaboration with legal scholars. And as with the field of ethics, the legal perspective on dealing with "wrongful agreements" is also necessary.

Pilar Dellunde
Adjunct Scientist

Ramon Lopez de Mantaras
Adjunct Professor Ad Honorem
Phone Ext. 431828

Maite López-Sánchez
Tenured University Lecturer
Phone Ext. 431821

Pablo Noriega
Scientist Ad Honorem
Phone Ext. 431829

Nardine Osman
Tenured Scientist
Phone Ext. 431826

Juan A. Rodríguez-Aguilar
Research Professor
Phone Ext. 431861

Marco Schorlemmer
Tenured Scientist
Phone Ext. 431858

Carles Sierra
Research Professor
Phone Ext. 431801

In Press
Nardine Osman,  Ronald Chenu-Abente,  Qiang Shen,  Carles Sierra,  & Fausto Giunchiglia (In Press). Empowering Users in Online Open Communities. SN Computer Science. [BibTeX]  [PDF]
2024
Roger Xavier Lera Leri,  Enrico Liscio,  Filippo Bistaffa,  Catholijn M. Jonker,  Maite Lopez-Sanchez,  Pradeep K. Murukannaiah,  Juan A. Rodríguez-Aguilar,  & Francisco Salas-Molina (2024). Aggregating value systems for decision support. Knowledge-Based Systems, 287, 111453. https://doi.org/10.1016/j.knosys.2024.111453. [BibTeX]  [PDF]
Manel Rodríguez Soto,  Juan A. Rodríguez-Aguilar,  & Maite López-Sánchez (2024). An Analytical Study of Utility Functions in Multi-Objective Reinforcement Learning. The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024) . [BibTeX]  [PDF]
Adrià Fenoy,  Filippo Bistaffa,  & Alessandro Farinelli (2024). An attention model for the formation of collectives in real-world domains. Artificial Intelligence, 328, 104064. https://doi.org/10.1016/j.artint.2023.104064. [BibTeX]  [PDF]
Roger Xavier Lera Leri,  & Filippo Bistaffa (2024). A Robust and Scalable Approach to Meet User Preferences in Research Project Planning. Ulle Endriss, Francisco S. Melo, Kerstin Bach, Alberto José Bugarín Diaz, Jose Maria Alonso Moral, Senén Barro, & Fredrik Heintz (Eds.), ECAI 2024 - 27th European Conference on Artificial Intelligence, 19-24 October 2024, Santiago de Compostela, Spain - Including 13th Conference on Prestigious Applications of Intelligent Systems (PAIS 2024) (pp. 4516--4523). IOS Press. https://doi.org/10.3233/FAIA241043. [BibTeX]  [PDF]
Dave de Jonge,  & Laura Rodriguez Cima (2024). Attila: A Negotiating Agent for the Game of Diplomacy, Based on Purely Symbolic A.I. Mehdi Dastani, Jaime Simão Sichman, Natasha Alechina, & Virginia Dignum (Eds.), Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2024, Auckland, New Zealand, May 6-10, 2024 (pp. 2234--2236). ACM. https://doi.org/10.5555/3635637.3663118. [BibTeX]  [PDF]
Dave de Jonge,  & Laura Rodriguez Cima (2024). Attila: A Negotiating Diplomacy Player Based on Purely Symbolic A.I.. Ryuta Arisaka, Víctor Sánchez-Anguix, Sebastian Stein, Reyhan Aydogan, Leon Torre, & Takayuki Ito (Eds.), PRIMA 2024: Principles and Practice of Multi-Agent Systems - 25th International Conference, Kyoto, Japan, November 18-24, 2024, Proceedings (pp. 3--18). Springer. https://doi.org/10.1007/978-3-031-77367-9_1. [BibTeX]  [PDF]
Jianglin Qiao,  Dongmo Zhang,  Dave de Jonge,  Simeon Simoff,  & Carles Sierra (2024). Automated Negotiation Mechanisms for Autonomous Vehicles at Intersections. Rafik Hadfi, Patricia Anthony, Alok Sharma, Takayuki Ito, & Quan Bai (Eds.), PRICAI 2024: Trends in Artificial Intelligence - 21st Pacific Rim International Conference on Artificial Intelligence, PRICAI 2024, Kyoto, Japan, November 18-24, 2024, Proceedings, Part IV (pp. 271--283). Springer. https://doi.org/10.1007/978-981-96-0125-7_22. [BibTeX]
Thiago Freitas Santos,  Nardine Osman,  & Marco Schorlemmer (2024). Can Interpretability Layouts Influence Human Perception of Offensive Sentences?. Davide Calvaresi, Amro Najjar, Andrea Omicini, Reyhan Aydogan, Rachele Carli, Giovanni Ciatto, Joris Hulstijn, & Kary Främling (Eds.), Explainable and Transparent AI and Multi-Agent Systems - 6th International Workshop, EXTRAAMAS 2024, Auckland, New Zealand, May 6-10, 2024, Revised Selected Papers (pp. 39--57). Springer. https://doi.org/10.1007/978-3-031-70074-3_3. [BibTeX]
Dimitra Bourou,  Marco Schorlemmer,  Enric Plaza,  & Marcell Veiner (2024). Characterising cognitively useful blends: Formalising governing principles of conceptual blending. Cognitive Systems Research, 86, 101245. https://doi.org/10.1016/j.cogsys.2024.101245. [BibTeX]  [PDF]
Marco Schorlemmer (2024). Cultivar la "correcció dels noms" en parlar de la intel·ligència artificial. Qüestions de Vida Cristiana, 278, 71--80. [BibTeX]
Núria Vallès-Peris,  & Miquel Domènech (2024). Digital Citizenship at School: Democracy, Pragmatism and Rri. Technology in Society, 76. https://doi.org/10.2139/ssrn.4128968. [BibTeX]
Jianglin Qiao,  Dave de Jonge,  Dongmo Zhang,  Simeon Simoff,  Carles Sierra,  & Bo Du (2024). Extended Abstract: Price of Anarchy of Traffic Assignment with Exponential Cost Functions. Mehdi Dastani, Jaime Simão Sichman, Natasha Alechina, & Virginia Dignum (Eds.), Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2024, Auckland, New Zealand, May 6-10, 2024 (pp. 2842--2844). ACM. https://doi.org/10.5555/3635637.3663307. [BibTeX]
Errikos Streviniotis,  Athina Georgara,  Filippo Bistaffa,  & Georgios Chalkiadakis (2024). FairPlay: A Multi-Sided Fair Dynamic Pricing Policy for Hotels. Proceedings of the AAAI Conference on Artificial Intelligence, 38, 22368-22376. https://doi.org/10.1609/aaai.v38i20.30243. [BibTeX]  [PDF]
Marco Schorlemmer,  Mohamad Ballout,  & Kai-Uwe Kühnberger (2024). Generating Qualitative Descriptions of Diagrams with a Transformer-Based Language Model. Jens Lemanski, Mikkel Willum Johansen, Emmanuel Manalo, Petrucio Viana, Reetu Bhattacharjee, & Richard Burns (Eds.), Diagrammatic Representation and Inference - 14th International Conference, Diagrams 2024, Münster, Germany, September 27 - October 1, 2024, Proceedings (pp. 61--75). Springer. https://doi.org/10.1007/978-3-031-71291-3_5. [BibTeX]
Thiago Freitas Santos,  Nardine Osman,  & Marco Schorlemmer (2024). Is This a Violation? Learning and Understanding Norm Violations in Online Communities. Artificial Intelligence, 327. https://doi.org/10.1016/j.artint.2023.104058. [BibTeX]
Thimjo Koça,  Dave de Jonge,  & Tim Baarslag (2024). Search algorithms for automated negotiation in large domains. Annals of Mathematics and Artificial Intelligence, 92, 903--924. https://doi.org/10.1007/s10472-023-09859-w. [BibTeX]
Ignacio Huitzil,  Miguel Molina-Solana,  Juan Gómez-Romero,  Marco Schorlemmer,  Pere Garcia-Calvés,  Nardine Osman,  Josep Coll,  & Fernando Bobillo (2024). Semantic Building Information Modeling: An Empirical Evaluation of Existing Tools. Journal of Industrial Information Integration, 42, 100731. https://doi.org/10.1016/j.jii.2024.100731. [BibTeX]
Dave de Jonge (2024). Theoretical Properties of the MiCRO Negotiation Strategy. Autonomous Agents and Multi-Agent Systems, 38. https://doi.org/10.1007/s10458-024-09678-1. [BibTeX]  [PDF]
Nardine Osman,  Bruno Rosell,  Andrew Koster,  Marco Schorlemmer,  Carles Sierra,  & Jordi Sabater-Mir (2024). The uHelp Application. Nardine Osman (Eds.), Electronic Institutions: Applications to uHelp, WeCurate and PeerLearn (pp 61--79). Springer. https://doi.org/10.1007/978-3-319-65605-2_3. [BibTeX]
Manel Rodríguez Soto,  Nardine Osman,  Carles Sierra,  Paula Sánchez Veja,  Rocío Cintas García,  Cristina Farriols Danes,  Montserrat García Retortillo,  & Sílvia Mínguez Maso (2024). Towards value awareness in the medical field. (pp. 8). Special Session on AI with Awareness Inside of the 16th International Conference on Agents and Artificial Intelligence (ICAART 2024). [BibTeX]  [PDF]
Manel Rodríguez Soto,  Nardine Osman,  Carles Sierra,  Nieves Montes,  Jordi Martínez Roldán,  Rocío Cintas García,  Cristina Farriols Danes,  Montserrat García Retortillo,  & Sílvia Mínguez Maso (2024). User Study Design for Identifying the Semantics of Bioethical Principles. (pp. 16). Second International Workshop on Value Engineering in Artificial Intelligence (VALE2024) at the European Conference on Artificial Intelligence (ECAI), Santiago de Compostela. [BibTeX]  [PDF]
M Serramia Amoros,  M Lopez-Sanchez,  Juan A. Rodríguez-Aguilar,  & S Moretti (2024). Value alignment in participatory budgeting. Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems . [BibTeX]  [PDF]
2023
Francisco Salas-Molina,  Filippo Bistaffa,  & Juan A. Rodríguez-Aguilar (2023). A general approach for computing a consensus in group decision making that integrates multiple ethical principles. Socio-Economic Planning Sciences, 89, 101694. https://doi.org/10.1016/j.seps.2023.101694. [BibTeX]  [PDF]
Jordi Ganzer-Ripoll,  Natalia Criado,  Maite Lopez-Sanchez,  Simon Parsons,  & Juan A. Rodríguez-Aguilar (2023). A model to support collective reasoning: Formalization, analysis and computational assessment. Journal of Artificial Intelligence Research. [BibTeX]  [PDF]
Francisco Salas-Molina,  Juan A. Rodríguez-Aguilar,  & Montserrat Guillén (2023). A multidimensional review of the cash management problem. Financial Innovation, 9. [BibTeX]  [PDF]
Thiago Freitas Santos,  Nardine Osman,  & Marco Schorlemmer (2023). A multi-scenario approach to continuously learn and understand norm violations. Autonomous Agents and Multi-Agent Systems, 37, 38. https://doi.org/10.1007/s10458-023-09619-4. [BibTeX]
Francisco Salas-Molina,  David Pla-Santamaria,  & Juan A. Rodríguez-Aguilar (2023). An analytic derivation of the efficient frontier in biobjective cash management and its implications for policies. Annals of Operations Research. [BibTeX]  [PDF]
Dave de Jonge (2023). A New Bargaining Solution for Finite Offer Spaces. Applied Intelligence, 53, 28310--28332. https://doi.org/10.1007/s10489-023-05009-1. [BibTeX]  [PDF]
Dimitra Bourou,  Marco Schorlemmer,  & Enric Plaza (2023). An Image-Schematic Analysis of Hasse and Euler Diagrams. Maria M. Hedblom, & Oliver Kutz (Eds.), Proceedings of The Seventh Image Schema Day co-located with The 20th International Conference on Principles of Knowledge Representation and Reasoning (KR 2023), Rhodes, Greece, September 2nd, 2023 . CEUR-WS.org. https://ceur-ws.org/Vol-3511/paper_05.pdf. [BibTeX]
Marc Serramia,  Maite López-Sánchez,  Stefano Moretti,  & Juan A. Rodríguez-Aguilar (2023). Building rankings encompassing multiple criteria to support qualitative decision-making. Information Sciences, 631, 288-304. [BibTeX]  [PDF]
Núria Vallès-Peris,  & Miquel Domènech (2023). Care robots for the common good: ethics as politics. Humanities & Social Sciences Communications, 10, 345. https://doi.org/10.1057/s41599-023-01850-4. [BibTeX]
Núria Vallès-Peris,  & Miquel Domènech (2023). Caring in the in-between: a proposal to introduce responsible AI and robotics to healthcare. AI and Society, 38, 1685--1695. https://doi.org/10.1007/s00146-021-01330-w. [BibTeX]
Pompeu Casanovas,  & Pablo Noriega (2023). Cómo regular lo altamente complejo. Nuevos Diálogos, 2, 25–31. https://nuevosdialogos.unam.mx/download/38/02-inteligencia-artificial/3378/como-regular-lo-altamente-complejo.pdf. [BibTeX]  [PDF]
Thiago Freitas Santos,  Stephen Cranefield,  Bastin Tony Roy Savarimuthu,  Nardine Osman,  & Marco Schorlemmer (2023). Cross-community Adapter Learning (CAL) to Understand the Evolving Meanings of Norm Violation. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023, 19th-25th August 2023, Macao, SAR, China (pp. 109--117). ijcai.org. https://doi.org/10.24963/IJCAI.2023/13. [BibTeX]
Marc Serramia,  Manel Rodriguez-Soto,  Maite Lopez-Sanchez,  Juan A. Rodríguez-Aguilar,  Filippo Bistaffa,  Paula Boddington,  Michael Wooldridge,  & Carlos Ansotegui (2023). Encoding Ethics to Compute Value-Aligned Norms. Minds and Machines, 1--30. [BibTeX]  [PDF]
Filippo Bistaffa (2023). Faster Exact MPE and Constrained Optimization with Deterministic Finite State Automata. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023, 19th-25th August 2023, Macao, SAR, China (pp. 1884--1892). ijcai.org. https://doi.org/10.24963/IJCAI.2023/209. [BibTeX]
Maria Verdaguer,  Núria Vallès-Peris,  Xavier Busquet-Duran,  Eduard Moreno-Gabriel,  Patricia Beroiz,  Antonia Arreciado Marañón,  Maria Feijoo-Cid,  Miquel Domènech,  Lupicinio Iñiguez-Rueda,  Glòria Cantarell,  & Pere Torán-Monserrat (2023). Implementation of Assisted Dying in Catalonia: Impact on Professionals and Development of Good Practices. Protocol for a Qualitative Study. International Journal of Qualitative Methods, 22, 1--11. https://doi.org/10.1177/16094069231186133. [BibTeX]
Celeste Veronese,  Daniele Meli,  Filippo Bistaffa,  Manel Rodríguez Soto,  Alessandro Farinelli,  & Juan A. Rodríguez-Aguilar (2023). Inductive Logic Programming For Transparent Alignment With Multiple Moral Values. . 2nd International Workshop on Emerging Ethical Aspects of AI (BEWARE-23). [BibTeX]  [PDF]
Enrico Liscio,  Roger Lera-Leri,  Filippo Bistaffa,  Roel I. J. Dobbe,  Catholijn M. Jonker,  Maite López-Sánchez,  Juan A. Rodríguez-Aguilar,  & Pradeep K. Murukannaiah (2023). Inferring Values via Hybrid Intelligence. Proceedings of the 2nd International Conference on Hybrid Human Artificial Intelligence (HHAI) (in press). [BibTeX]  [PDF]
  • Council of the European Union Report: Presidency conclusions on the charter of fundamental rights in the context of artificial intelligence and digital change (Report)
  • IIIA's workshop on "Value-Driven Adaptive Norms", presented at the Future Tech Week 2019, on 26 September 2019. (YouTube, Slides)
  • Barcelona declaration for the proper development and usage of artificial intelligence in Europe (Declaration, Videos)