Ethics and AI

The research theme on ethics and AI aims to address a number of the ethical challenges raised by AI, and to provide the first building blocks towards the development of AI systems that adhere to our values and requirements. We propose novel computational methods and tools, underpinned by multidisciplinary research, that help humans and machines understand each other's evolving goals while strictly abiding by the values that inspire our societies.

Contact: Nardine Osman


In the past few years, many initiatives have arisen to address the issues of ethics and AI. Some were led by major technology companies, others by leading scientists in the relevant fields, from philosophy to AI. Amongst these is the IIIA-led Barcelona Declaration for the Proper Development and Usage of Artificial Intelligence in Europe. All these initiatives share a number of challenging ethical concerns, ranging from explainability, transparency, and accountability to value alignment, human control, and shared benefit.

At IIIA, we aim to address some of these ethical challenges. On the one hand, we focus on the development of AI systems that adhere to our values and requirements. On the other hand, we focus on the impact of AI systems on human interactions (with one another and with the AI system itself), to ensure we are promoting ethical interactions and collaborations.

Our work on ethics and AI is underpinned by strong multidisciplinary research. Determining what system behaviour is deemed ethical and legal is key here. As such, the social sciences and humanities (from philosophy and law to the social and cognitive sciences) lie at the heart of any research on ethics and AI. We argue that these fields should not only provide insights for AI research, but also actively and collaboratively participate in directing and advancing it.

In what follows, we present the principles underpinning IIIA's work on ethics and AI.

AI systems must be driven by people's needs and values, and must evolve as those needs and values evolve.

This ensures AI systems work for our shared benefit while adhering to our values.

The governance of AI must be democratised.

This gives people control over their AI systems, so they have a say in how their technology should or should not behave. It demands not only novel democratic, interactive initiatives, but also careful input from the fields of ethics and law to help assess the dynamics between what people want, what is ethical, and what is legal.

A humanist perspective is necessary for ethical human-machine collaboration. 

It is important to nurture a humanist perspective that situates AI within the larger human phenomenon we call 'intelligence', as it arises from the shared interrelations and interactions of humans, computational systems, and the environment.

A sample of our research interests is presented below.

Agreement Technologies

Giving humans a say in how their technologies function implies allowing them to collectively agree on such issues. Research areas like argumentation, negotiation, trust and reputation, computational social choice, and semantic alignment all provide means to support peers (humans and software agents) in collaboratively reaching agreements. We argue that agreements should be value-driven; introducing values into the research areas of agreement technologies is therefore of utmost interest.
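
As a concrete, deliberately simplified illustration of what value-driven agreement could mean computationally, the sketch below aggregates peers' value-based assessments of candidate norms with a weighted scoring rule. The agents, norms, values, weights, and scoring scheme are all illustrative assumptions, not a description of any particular IIIA system.

```python
# A minimal sketch of value-driven collective choice over candidate norms.
# Everything here (agents, norms, values, weights, scoring rule) is an
# illustrative assumption.

from typing import Dict

# Each agent estimates how well each candidate norm promotes each value,
# from -1 (undermines the value) to +1 (fully promotes it).
ratings: Dict[str, Dict[str, Dict[str, float]]] = {
    "alice": {"moderate_posts": {"privacy": 0.8, "free_speech": -0.2},
              "no_moderation":  {"privacy": -0.5, "free_speech": 0.9}},
    "bob":   {"moderate_posts": {"privacy": 0.6, "free_speech": -0.4},
              "no_moderation":  {"privacy": -0.7, "free_speech": 0.7}},
}

# Collectively agreed importance of each value (weights sum to 1).
value_weights = {"privacy": 0.6, "free_speech": 0.4}

def collective_score(norm: str) -> float:
    """Average, over agents, of the value-weighted support for a norm."""
    scores = [sum(value_weights[v] * r[norm][v] for v in value_weights)
              for r in ratings.values()]
    return sum(scores) / len(scores)

candidate_norms = {n for r in ratings.values() for n in r}
print(max(candidate_norms, key=collective_score))  # -> moderate_posts
```

A weighted sum is just one aggregation rule from computational social choice; argumentation or negotiation mechanisms could replace it where peers need to justify, rather than merely score, their positions.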

Learning & Reasoning

Learning when a system is not fulfilling its goals can help signal a need for change. Learning which norms better adhere to certain values, or which norms best suit a given community of peers, can support the decision-making process of how a system must change or evolve, as well as what direction that change should take. We envision learning and reasoning mechanisms that support humans' decision processes.
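
As a rough sketch of how such learning might signal the need for normative change, the following toy example keeps a running estimate of how well a norm's outcomes adhere to a community's values, and flags the norm for revision when the estimate drops too low. The feedback signal, learning rate, and threshold are illustrative assumptions.

```python
# A toy sketch of learning whether a norm still serves its community:
# an exponential moving average of adherence feedback, with a threshold
# that flags the norm for revision. All parameter values are assumptions.

ALPHA = 0.1          # learning rate of the moving average
REVISE_BELOW = 0.4   # adherence estimate below which the norm is flagged

adherence = {"moderate_posts": 0.5}  # neutral prior estimate

def observe(norm: str, feedback: float) -> None:
    """feedback in [0, 1]: how well an interaction governed by this norm
    adhered to the community's values (e.g., drawn from user reports)."""
    adherence[norm] = (1 - ALPHA) * adherence[norm] + ALPHA * feedback

def needs_revision(norm: str) -> bool:
    return adherence[norm] < REVISE_BELOW

# A run of mostly negative feedback gradually flags the norm for revision.
for fb in [0.2, 0.1, 0.3, 0.0, 0.2, 0.1, 0.0, 0.1, 0.2, 0.0]:
    observe("moderate_posts", fb)
print(needs_revision("moderate_posts"), round(adherence["moderate_posts"], 2))
# -> True 0.25
```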

Natural Language Processing

Having humans collectively agree on the needs and values that drive and govern their technologies implies: 1) having humans discuss and argue about these needs and values, and 2) having the system engage in the humans' agreement process and understand the final decisions. As we can expect humans neither to be proficient in the formal languages used for specifying needs and values nor to hold their discussions in a formal language, natural language processing becomes key for humans and machines to understand each other.
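
To make the gap between informal discussion and formal specification concrete, here is a deliberately tiny, rule-based sketch that maps one family of English sentences to a formal norm triple (modality, action, condition). A realistic system would need far more robust NLP; the patterns and the norm representation are illustrative assumptions only.

```python
# A tiny, rule-based sketch of turning a natural-language proposal into a
# formal norm (modality, action, condition). The regex and the norm tuple
# are illustrative assumptions, not a real norm-elicitation pipeline.

import re
from typing import Optional, Tuple

PATTERN = re.compile(
    r"(?:members|users)\s+(must not|must|may)\s+(\w+(?:\s\w+)*?)"
    r"(?:\s+when\s+(.+))?$",
    re.IGNORECASE,
)

MODALITY = {"must": "OBLIGED", "must not": "FORBIDDEN", "may": "PERMITTED"}

def parse_norm(sentence: str) -> Optional[Tuple[str, str, str]]:
    m = PATTERN.search(sentence.strip().rstrip("."))
    if not m:
        return None
    deontic, action, condition = m.groups()
    return (MODALITY[deontic.lower()], action, condition or "always")

print(parse_norm("Users must not share personal data when posting publicly."))
# -> ('FORBIDDEN', 'share personal data', 'posting publicly')
```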

Norms & Normative Systems

Behaviour is what ensures needs are fulfilled and values are adhered to, and norms are what govern behaviour. Normative systems have traditionally been used in multiagent systems to mediate behaviour. We follow in those footsteps and propose normative systems as a means of mediating behaviour. We are especially interested in the relation between values and norms, and in ensuring (and sometimes formally verifying) that a normative system adheres to predefined values.
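
A minimal sketch of the idea, assuming norms can be written as predicates over actions and contexts: a monitor derives what is permitted, and a brute-force verifier checks that every permitted action preserves a given value over a small, enumerable state space. Real normative systems and verification techniques are far richer; all definitions here are illustrative assumptions.

```python
# A minimal normative monitor plus a brute-force value-adherence check.
# The norm, the value, and the state space are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Dict, List

Context = Dict[str, bool]

@dataclass
class Norm:
    name: str
    forbids: Callable[[str, Context], bool]  # True if action is forbidden

norms: List[Norm] = [
    Norm("no_public_personal_data",
         lambda action, ctx: action == "share_data" and ctx["public"]),
]

def permitted(action: str, ctx: Context) -> bool:
    return not any(n.forbids(action, ctx) for n in norms)

# A value as a predicate: privacy holds unless personal data goes public.
def privacy(action: str, ctx: Context) -> bool:
    return not (action == "share_data" and ctx["public"])

# Brute-force adherence check over a small, enumerable state space.
actions = ["share_data", "post_comment"]
contexts = [{"public": p} for p in (True, False)]
adheres = all(privacy(a, c) for a in actions for c in contexts
              if permitted(a, c))
print(adheres)  # True: every permitted action preserves privacy
```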

Ethics

Defining values formally is a challenge in its own right. Embedding values in AI systems and verifying a system's adherence to a set of values is an even bigger challenge. All of this requires close-knit collaboration with the field of ethics. Furthermore, while human control has been argued for as one of the principles of ethical AI, we cannot ignore the fact that humans may reach "wrongful agreements", that is, unethical or illegal agreements. A careful analysis is therefore required to address such issues. Understanding the dynamics between what is required by humans, what is deemed ethical, and what is legal may be key for the development of "ethical" AI systems.
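
One formalization found in the value-alignment literature treats a value as a preference over state transitions, so that a norm's degree of alignment is the average preference over the transitions it brings about. The sketch below follows that idea for a hypothetical 'fairness' value and taxation norm; all states, transitions, and functions are illustrative assumptions.

```python
# A minimal sketch of a formally defined value: a preference over state
# transitions, with a norm's alignment as the average preference over the
# transitions it induces. All concrete data here is assumed for illustration.

from typing import Dict, List, Tuple

State = Dict[str, int]

def fairness_pref(before: State, after: State) -> float:
    """Preference in [-1, 1]: transitions that shrink the wealth gap
    between two agents are preferred under a 'fairness' value."""
    gap = lambda s: abs(s["wealth_a"] - s["wealth_b"])
    if gap(before) == 0:
        return 0.0
    return max(-1.0, min(1.0, (gap(before) - gap(after)) / gap(before)))

def alignment(transitions: List[Tuple[State, State]]) -> float:
    """Degree of alignment with the value of the behaviour a norm induces."""
    return sum(fairness_pref(b, a) for b, a in transitions) / len(transitions)

# Transitions observed under a hypothetical taxation norm.
under_tax_norm = [
    ({"wealth_a": 10, "wealth_b": 2}, {"wealth_a": 8, "wealth_b": 4}),
    ({"wealth_a": 6, "wealth_b": 6}, {"wealth_a": 6, "wealth_b": 6}),
]
print(round(alignment(under_tax_norm), 2))  # 0.25: mildly fairness-aligned
```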

Legal Studies

Legal norms are used in AI and law to support reasoning about legal statements, and legal systems can be implemented as normative systems. As such, collaboration with the field of legal studies is imperative. Furthermore, assessing the consequences and implications of giving the machine executive and judiciary powers (especially where the system automatically adapts and evolves) requires careful, close-knit collaboration with legal scholars. As with the field of ethics, the legal perspective on dealing with "wrongful agreements" is also necessary.

Pompeu Casanovas Romeu
Research Professor

Dave de Jonge
Contract Researcher
Phone Ext. 431825

Pilar Dellunde
Adjunct Scientist

Mark d'Inverno
Adjunct Scientist

Ramon Lopez de Mantaras
Adjunct Professor Ad Honorem
Phone Ext. 431828

Maite López-Sánchez
Full Professor
Phone Ext. 431821

María Vanina Martinez
Tenured Scientist
Phone Ext. 431817

Pablo Noriega
Scientist Ad Honorem
Phone Ext. 431829

Nardine Osman
Tenured Scientist
Phone Ext. 431826

Enric Plaza
Research Professor
Phone Ext. 431852

Juan A. Rodríguez-Aguilar
Research Professor
Phone Ext. 431861

Manel Rodríguez Soto
Contract Researcher
Phone Ext. 431832

Jordi Sabater-Mir
Tenured Scientist
Phone Ext. 431856

Marco Schorlemmer
Tenured Scientist
Phone Ext. 431858

Carles Sierra
Research Professor
Phone Ext. 431801

Núria Vallès Peris
Tenured Scientist
Phone Ext. 431816

2024
Thiago Freitas Santos, Nardine Osman, & Marco Schorlemmer (2024). Can Interpretability Layouts Influence Human Perception of Offensive Sentences? Davide Calvaresi, Amro Najjar, Andrea Omicini, Reyhan Aydogan, Rachele Carli, Giovanni Ciatto, Joris Hulstijn, & Kary Främling (Eds.), Explainable and Transparent AI and Multi-Agent Systems - 6th International Workshop, EXTRAAMAS 2024, Auckland, New Zealand, May 6-10, 2024, Revised Selected Papers (pp. 39--57). Springer. https://doi.org/10.1007/978-3-031-70074-3_3. [BibTeX]
Thiago Freitas Santos, Nardine Osman, & Marco Schorlemmer (2024). Is This a Violation? Learning and Understanding Norm Violations in Online Communities. Artificial Intelligence, 327. https://doi.org/10.1016/j.artint.2023.104058. [BibTeX]
2023
Marc Serramia, Manel Rodriguez-Soto, Maite Lopez-Sanchez, Juan A. Rodríguez-Aguilar, Filippo Bistaffa, Paula Boddington, Michael Wooldridge, & Carlos Ansotegui (2023). Encoding Ethics to Compute Value-Aligned Norms. Minds and Machines, 1--30. [BibTeX] [PDF]
Manel Rodríguez Soto, Maite López-Sánchez, & Juan A. Rodríguez-Aguilar (2023). Multi-objective reinforcement learning for designing ethical multi-agent environments. Neural Computing and Applications. https://doi.org/10.1007/s00521-023-08898-y. [BibTeX] [PDF]
Manel Rodríguez Soto, Roxana Radulescu, Juan A. Rodríguez-Aguilar, Maite López-Sánchez, & Ann Nowé (2023). Multi-objective reinforcement learning for guaranteeing alignment with multiple values. Adaptive and Learning Agents Workshop (AAMAS 2023). [BibTeX] [PDF]
Pablo Noriega, & Enric Plaza (2023). The Use of Agent-based Simulation of Public Policy Design to Study the Value Alignment Problem. Proc. AIGEL 2022 Artificial Intelligence Governance Ethics and Law Workshop 2022. CEUR-WS. https://ceur-ws.org/Vol-3531/SPaper_10.pdf. [BibTeX] [PDF]
2022
Pol Vidal Lamolla, Alexandra Popartan, Toni Perello-Moragues, Pablo Noriega, David Sauri, Manel Poch, & Maria Molinos-Senante (2022). Agent-based modelling to simulate the socio-economic effects of implementing time-of-use tariffs for domestic water. Sustainable Cities and Society, 86, 104118. https://doi.org/10.1016/j.scs.2022.104118. [BibTeX] [PDF]
Eric Roselló-Marín, Maite López-Sánchez, Inmaculada Rodríguez Santiago, Manel Rodríguez Soto, & Juan A. Rodríguez-Aguilar (2022). An Ethical Conversational Agent to Respectfully Conduct In-Game Surveys. Artificial Intelligence Research and Development (pp. 335--344). IOS Press. [BibTeX] [PDF]
Manel Rodríguez Soto, Juan A. Rodríguez-Aguilar, & Maite López-Sánchez (2022). Building Multi-Agent Environments with Theoretical Guarantees on the Learning of Ethical Policies. Adaptive and Learning Agents Workshop at AAMAS 2022 (ALA 2022). [BibTeX] [PDF]
Pablo Noriega, Harko Verhagen, Julian Padget, & Mark d'Inverno (2022). Design Heuristics for Ethical Online Institutions. Nirav Ajmeri, Andreasa Morris Martin, & Bastin Tony Roy Savarimuthu (Eds.), Coordination, Organizations, Institutions, Norms, and Ethics for Governance of Multi-Agent Systems XV (pp. 213--230). Springer International Publishing. [BibTeX] [PDF]
Manel Rodríguez Soto, Marc Serramia, Maite López-Sánchez, & Juan A. Rodríguez-Aguilar (2022). Instilling moral value alignment by means of multi-objective reinforcement learning. Ethics and Information Technology, 24. https://doi.org/10.1007/s10676-022-09635-0. [BibTeX] [PDF]
2021
Jaume Agustí-Cullell, & Marco Schorlemmer (2021). A Humanist Perspective on Artificial Intelligence. Comprendre, 23, 99--125. [BibTeX]
Ángeles Manjarrés, Celia Fernández-Aller, Maite López-Sánchez, Juan A. Rodríguez-Aguilar, & Manuel Sierra Castañer (2021). Artificial Intelligence for a Fair, Just, and Equitable World. IEEE Technology and Society Magazine, 40, 19-24. https://doi.org/10.1109/MTS.2021.3056292. [BibTeX] [PDF]
Pablo Noriega, Harko Verhagen, Julian Padget, & Mark d'Inverno (2021). Ethical Online AI Systems Through Conscientious Design. IEEE Internet Computing, 25, 58-64. https://doi.org/10.1109/MIC.2021.3098324. [BibTeX] [PDF]
Manel Rodríguez Soto, Maite López-Sánchez, & Juan A. Rodríguez-Aguilar (2021). Guaranteeing the Learning of Ethical Behaviour through Multi-Objective Reinforcement Learning. Adaptive and Learning Agents Workshop at AAMAS 2021 (ALA 2021). [BibTeX] [PDF]
Thiago Freitas Dos Santos, Nardine Osman, & Marco Schorlemmer (2021). Learning for Detecting Norm Violation in Online Communities. International Workshop on Coordination, Organizations, Institutions, Norms and Ethics for Governance of Multi-Agent Systems (COINE), co-located with AAMAS 2021. https://arxiv.org/abs/2104.14911. [BibTeX]
Manel Rodríguez Soto, Maite López-Sánchez, & Juan A. Rodríguez-Aguilar (2021). Multi-Objective Reinforcement Learning for Designing Ethical Environments. Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI-21) (pp. 545-551). [BibTeX] [PDF]
Marc Serramia, Maite López-Sánchez, Stefano Moretti, & Juan A. Rodríguez-Aguilar (2021). On the dominant set selection problem and its application to value alignment. Autonomous Agents and Multi-agent Systems, 35. [BibTeX] [PDF]
  • Council of the European Union Report: Presidency conclusions on the charter of fundamental rights in the context of artificial intelligence and digital change (Report)
  • IIIA's workshop on "Value-Driven Adaptive Norms", presented at the Future Tech Week 2019 on 26 September 2019 (YouTube, Slides)
  • Barcelona declaration for the proper development and usage of artificial intelligence in Europe (Declaration, Videos)