Ethics and AI

The research theme on ethics and AI aims to address a number of the ethical challenges raised by AI, and to provide the first building blocks towards the development of AI systems that adhere to our values and requirements. We propose novel computational methods and tools, underpinned by multidisciplinary research, that help humans and machines understand each other's evolving goals while strictly abiding by the values that inspire our societies.

Contact: Nardine Osman

In the past few years, many initiatives have arisen to address the ethical issues raised by AI. Some were led by major technology companies, others by leading scientists in the relevant fields, from philosophy to AI. Amongst them is the IIIA-led Barcelona declaration for the proper development and usage of artificial intelligence in Europe. These initiatives share a number of challenging ethical concerns, ranging from explainability, transparency, and accountability to value alignment, human control, and shared benefit.

At IIIA, we aim to address some of these ethical challenges. On the one hand, we focus on the development of AI systems that adhere to our values and requirements. On the other, we focus on the impact of AI systems on human interactions (with one another and with the AI system itself), to ensure we are promoting ethical interactions and collaborations.

Our work on ethics and AI is underpinned by strong multidisciplinary research. For example, determining what system behaviour is deemed ethical and legal is a central question. As such, the social sciences and humanities (from philosophy and law to the social and cognitive sciences) lie at the heart of any research on ethics and AI. We argue that these fields should not only provide insights for AI research, but should actively and collaboratively participate in directing and advancing it.

In what follows, we present the principles underpinning IIIA's work on ethics and AI.

AI systems must be driven by people’s needs and values, and evolve with those evolving needs and values. 

This ensures AI systems work towards our shared benefit while adhering to our values.

The governance of AI must be democratised.

This gives people control over their AI systems, so they have a say in how their technology should or should not behave. It demands not only novel democratic, interactive initiatives, but also careful input from the fields of ethics and law to help assess the dynamics between what people want, what is ethical, and what is legal.

A humanist perspective is necessary for ethical human-machine collaboration. 

It is important to nourish a humanist perspective that situates AI within the larger human phenomenon we call 'intelligence', as it arises from the shared interrelations and interactions of humans, computational systems, and the environment.

A sample of our research interests is presented below.

Agreement Technologies

Giving humans a say in how their technologies function implies allowing them to collectively agree on such issues. Research areas like argumentation, negotiation, trust and reputation, computational social choice, and semantic alignment all provide means to support peers (humans and software agents) in collaboratively reaching agreements. We argue that agreements should be value-driven. As such, introducing values into these research areas is of utmost interest.
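As a purely illustrative sketch (not an implementation of any IIIA system), value-driven agreement can be pictured as a social-choice aggregation in which each candidate norm's collective support is weighted by how well the norm aligns with shared values. All names and numbers below are hypothetical:

```python
# Toy sketch: value-weighted approval voting over candidate norms.
# All norm names and alignment scores are hypothetical, for illustration only.

def choose_norm(approvals, value_alignment):
    """Pick the candidate norm maximising its approval count
    weighted by its alignment (0..1) with the community's values."""
    scores = {
        norm: sum(1 for ballot in approvals if norm in ballot) * value_alignment[norm]
        for norm in value_alignment
    }
    return max(scores, key=scores.get)

# Each ballot is the set of norms a peer approves of.
ballots = [{"n1", "n2"}, {"n1"}, {"n2", "n3"}]
alignment = {"n1": 0.4, "n2": 0.9, "n3": 0.7}  # e.g. alignment with 'fairness'
print(choose_norm(ballots, alignment))  # "n2": 2 approvals * 0.9 = 1.8 beats n1 (2 * 0.4 = 0.8)
```

The design choice here is that popularity alone does not decide: a widely approved norm with poor value alignment (n1) loses to an equally approved, better-aligned one (n2).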

Learning & Reasoning

Learning when a system is not fulfilling its goals can help signal a need for change. Learning which norms better adhere to certain values, or which norms best suit a given community of peers, can support the decision-making process of how a system must change or evolve, as well as indicate what direction this change should take. We envision learning and reasoning mechanisms that support humans' decision processes.
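One minimal way to sketch "learning which norms suit a community best" (a toy illustration under our own assumptions, not a method from the source) is to treat alternative norms as arms of a multi-armed bandit and learn from community feedback which norm yields the most satisfaction:

```python
import random

# Toy sketch (hypothetical): alternative norms as bandit arms; community
# feedback is a noisy signal of whether a norm fulfils the community's goals.

def epsilon_greedy(feedback_fn, norms, rounds=1000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: mostly exploit the best-looking norm,
    occasionally explore, and return the norm with the best running mean."""
    rng = random.Random(seed)
    counts = {n: 0 for n in norms}
    means = {n: 0.0 for n in norms}
    for _ in range(rounds):
        n = rng.choice(norms) if rng.random() < eps else max(means, key=means.get)
        r = feedback_fn(n, rng)                  # observed satisfaction (0 or 1)
        counts[n] += 1
        means[n] += (r - means[n]) / counts[n]   # incremental mean update
    return max(means, key=means.get)

# Hypothetical ground truth: probability each norm satisfies a community member.
satisfaction = {"strict": 0.55, "lenient": 0.7, "adaptive": 0.8}
best = epsilon_greedy(lambda n, rng: rng.random() < satisfaction[n],
                      list(satisfaction))
print(best)  # typically "adaptive", though the process is stochastic
```

The point of the sketch is the feedback loop: the system need not know in advance which norm suits the community; observed outcomes steer it towards the better-fitting one.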

Natural Language Processing

Having humans collectively agree on the needs and values that drive and govern their technologies implies: 1) having humans discuss and argue about these needs and values, and 2) having the system engage in the humans' agreement process and understand the final decisions. As we can expect humans neither to be proficient in the formal languages used for specifying needs and values nor to hold discussions in a formal language, natural language processing becomes key for humans and machines to understand each other.

Norms & Normative Systems

Behaviour is what ensures needs are fulfilled and values are adhered to, and norms are what govern behaviour. Normative systems have traditionally been used in multiagent systems to mediate behaviour. We follow in those footsteps and propose normative systems as a means of mediating behaviour. We are especially interested in the relation between values and norms, and in ensuring (and sometimes formally verifying) that a normative system adheres to predefined values.
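To make the notion of value adherence concrete, here is a deliberately simplified sketch (our own hypothetical representation, not IIIA's formalism): norms assign deontic statuses to actions, a value is modelled as a predicate over actions, and the system adheres to the value if nothing the norms permit or oblige violates it:

```python
# Toy sketch (hypothetical representation): checking that a normative
# system adheres to a value, modelled as a predicate over actions.

# A norm assigns each action a deontic status.
norms = {
    "share_data": "forbidden",
    "anonymise_then_share": "permitted",
    "delete_on_request": "obligatory",
}

# Hypothetical value judgements: does each action respect 'privacy'?
respects_privacy = {
    "share_data": False,
    "anonymise_then_share": True,
    "delete_on_request": True,
}

def adheres(norms, value_ok):
    """A normative system adheres to a value if no action the norms
    permit (or oblige) violates that value."""
    return all(value_ok[a] for a, status in norms.items()
               if status in ("permitted", "obligatory"))

print(adheres(norms, respects_privacy))  # True: the only privacy-violating
                                         # action is forbidden
```

Even in this toy form, the check illustrates the verification question stated above: flipping "share_data" to permitted would make the system fail the privacy check.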

Ethics

Defining values formally is a challenge in its own right. Embedding values in AI systems, and verifying a system's adherence to a set of values, is an even bigger challenge. All of this requires careful, close-knit collaboration with the field of ethics. Furthermore, while human control has been put forward as one of the principles of ethical AI, we cannot ignore the fact that humans may indeed reach "wrongful agreements", that is, unethical or illegal agreements. A careful analysis is thereby required to address such issues. Understanding the dynamics between what is required by humans, what is deemed ethical, and what is legal may be key for the development of "ethical" AI systems.

Legal Studies

Legal norms are used in AI and law to support reasoning about legal statements, and legal systems can be implemented as normative systems. As such, collaboration with the field of legal studies is imperative. Furthermore, assessing the consequences and implications of giving machines executive and judiciary powers (especially where a system automatically adapts and evolves) requires careful, close-knit collaboration with legal scholars. And as with the field of ethics, the legal perspective on dealing with "wrongful agreements" is also necessary.

Pilar Dellunde
Adjunct Scientist
Phone Ext. 239

Ramon Lopez de Mantaras
Research Professor
Phone Ext. 258

Maite López-Sánchez
Tenured University Lecturer
Phone Ext. 242

Pablo Noriega
Tenured Scientist
Phone Ext. 246

Nardine Osman
Tenured Scientist
Phone Ext. 245

Juan A. Rodríguez-Aguilar
Research Professor
Phone Ext. 218

Marco Schorlemmer
Tenured Scientist
Phone Ext. 203

Carles Sierra
Research Professor
Phone Ext. 231

In Press
Filippo Bistaffa,  Georgios Chalkiadakis,  & Alessandro Farinelli (In Press). Efficient Coalition Structure Generation via Approximately Equivalent Induced Subgraph Games. IEEE Transactions on Cybernetics. [BibTeX]  [PDF]
Nardine Osman,  Ronald Chenu-Abente,  Qiang Shen,  Carles Sierra,  & Fausto Giunchiglia (In Press). Empowering Users in Online Open Communities. SN Computer Science. [BibTeX]  [PDF]
Pablo Noriega,  Harko Verhagen,  Julian Padget,  & Mark d'Inverno (In Press). Ethical Online AI Systems through Conscientious Design. IEEE Internet Computing. [BibTeX]  [PDF]
Marc Serramia,  Maite López-Sánchez,  & Juan A. Rodríguez-Aguilar (In Press). Value-aligned AI: Lessons learnt from value-aligned norm selection. Philosophy and Technology. [BibTeX]  [PDF]
Dimitra Bourou,  Marco Schorlemmer,  & Enric Plaza (2021). A Cognitively-Inspired Model for Making Sense of Hasse Diagrams. Proc. of the 23rd International Conference of the Catalan Association for Artificial Intelligence (CCIA 2021), October 20-22, Lleida, Catalonia, Spain . [BibTeX]
Filippo Bistaffa,  Christian Blum,  Jesús Cerquides,  Alessandro Farinelli,  & Juan A. Rodríguez-Aguilar (2021). A Computational Approach to Quantify the Benefits of Ridesharing for Policy Makers and Travellers. IEEE Transactions on Intelligent Transportation Systems, 22, 119-130. [BibTeX]  [PDF]
Filippo Bistaffa (2021). A Concise Function Representation for Faster Exact MPE and Constrained Optimisation in Graphical Models. CoRR, abs/2108.03899. [BibTeX]  [PDF]
Jaume Agustí-Cullell,  & Marco Schorlemmer (2021). A Humanist Perspective on Artificial Intelligence. Comprendre, 23, 99--125. [BibTeX]
Ángeles Manjarrés,  Celia Fernández-Aller,  Maite López-Sánchez,  Juan A. Rodríguez-Aguilar,  & Manuel Sierra Castañer (2021). Artificial Intelligence for a Fair, Just, and Equitable World. IEEE Technology and Society Magazine, 40, 19-24. [BibTeX]  [PDF]
Marco Schorlemmer,  & Enric Plaza (2021). A Uniform Model of Computational Conceptual Blending. Cognitive Systems Research, 65, 118--137. [BibTeX]  [PDF]
Nieves Montes,  Nardine Osman,  & Carles Sierra (2021). Enabling Game-Theoretical Analysis of Social Rules. IOS Press. [BibTeX]  [PDF]
Dave Jonge,  & Dongmo Zhang (2021). GDL as a unifying domain description language for declarative automated negotiation. Autonomous Agents and Multi-Agent Systems, 35. [BibTeX]
Manel Rodríguez Soto,  Maite López-Sánchez,  & Juan A. Rodríguez-Aguilar (2021). Guaranteeing the Learning of Ethical Behaviour through Multi-Objective Reinforcement Learning. . Adaptive and Learning Agents Workshop at AAMAS 2021 (ALA 2021). [BibTeX]  [PDF]
Dimitra Bourou,  Marco Schorlemmer,  & Enric Plaza (2021). Image Schemas and Conceptual Blending in Diagrammatic Reasoning: the Case of Hasse Diagrams. Amrita Basu, Gem Stapleton, Sven Linker, Catherine Legg, Emmanuel Manalo, & Petrucio Viana (Eds.), Diagrammatic Representation and Inference. 12th International Conference, Diagrams 2021, Virtual, September 28–30, 2021, Proceedings (pp. 297-314). [BibTeX]
Maite Lopez-Sanchez,  Marc Serramia,  & Juan A Rodríguez-Aguilar (2021). Improving on-line debates by aggregating citizen support. Artificial Intelligence Research and Development. IOS Press. [BibTeX]  [PDF]
Thiago Freitas Dos Santos,  Nardine Osman,  & Marco Schorlemmer (2021). Learning for Detecting Norm Violation in Online Communities. International Workshop on Coordination, Organizations, Institutions, Norms and Ethics for Governance of Multi-Agent Systems (COINE), co-located with AAMAS 2021 . [BibTeX]
Antoni Perello-Moragues,  Manel Poch,  David Sauri,  Lucia Alexandra Popartan,  & Pablo Noriega (2021). Modelling Domestic Water Use in Metropolitan Areas Using Socio-Cognitive Agents. Water, 13. [BibTeX]  [PDF]
Dimitra Bourou,  Marco Schorlemmer,  & Enric Plaza (2021). Modelling the Sense-Making of Diagrams Using Image Schemas. Proc. of the 43rd Annual Meeting of the Cognitive Science Society (CogSci 2021), 26--29 July 2021, Vienna, Austria (pp. 1105-1111). [BibTeX]  [PDF]
Manel Rodríguez Soto,  Maite López-Sánchez,  & Juan A. Rodríguez-Aguilar (2021). Multi-Objective Reinforcement Learning for Designing Ethical Environments. Proceedings of the 30th International Joint Conference on Artificial Intelligence, (IJCAI-21) (in press). [BibTeX]  [PDF]
Marc Serramia,  Maite López-Sánchez,  Stefano Moretti,  & Juan A. Rodríguez-Aguilar (2021). On the dominant set selection problem and its application to value alignment. Autonomous Agents and Multi-agent Systems, 35. [BibTeX]  [PDF]
Francisco Salas-Molina,  Juan A. Rodríguez-Aguilar,  David Pla-Santamaria,  & Ana García-Bernabeu (2021). On the formal foundations of cash management systems. Operational Research, 1081--1095. [BibTeX]  [PDF]
Antoni Perello-Moragues,  Pablo Noriega,  Lucia Alexandra Popartan,  & Manel Poch (2021). On Three Ethical Aspects Involved in Using Agent-Based Social Simulation for Policy-Making. Petra Ahrweiler, & Martin Neumann (Eds.), Advances in Social Simulation (pp. 415--427). Springer International Publishing. [BibTeX]  [PDF]
Josep Puyol-Gruart,  Pere Garcia Calvés,  Jesús Vega,  Maria Teresa Ceballos,  Bea Cobo,  & Francisco J. Carrera (2021). Pulse Identification Using SVM. Artificial Intelligence Research and Development, 339 (pp. 221--224). [BibTeX]  [PDF]
Jesús Cerquides,  Juan A. Rodríguez-Aguilar,  Rémi Emonet,  & Gauthier Picard (2021). Solving Highly Cyclic Distributed Optimization Problems Without Busting the Bank: A Decimation-based Approach. Logic Journal of the IGPL, 29, 72-95. [BibTeX]
Athina Georgara,  Juan A. Rodríguez-Aguilar,  & Carles Sierra (2021). Towards a Competence-Based Approach to Allocate Teams to Tasks. Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1504–1506). International Foundation for Autonomous Agents and Multiagent Systems. [BibTeX]  [PDF]
Nieves Montes,  & Carles Sierra (2021). Value-Alignment Equilibrium in Multiagent Systems. Fredrik Heintz, Michela Milano, & Barry O'Sullivan (Eds.), Trustworthy AI - Integrating Learning, Optimization and Reasoning (pp. 189--204). Springer International Publishing. [BibTeX]  [PDF]
Nieves Montes (2021). Value Engineering for Autonomous Agents -- Position Paper. [BibTeX]  [PDF]
Nieves Montes,  & Carles Sierra (2021). Value-Guided Synthesis of Parametric Normative Systems. Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems (pp. 907–915). International Foundation for Autonomous Agents and Multiagent Systems. [BibTeX]  [PDF]
Jordi Ganzer,  Natalia Criado,  Maite Lopez-Sanchez,  Simon Parsons,  & Juan A. Rodríguez-Aguilar (2020). A model to support collective reasoning: Formalization, analysis and computational assessment. arXiv preprint arXiv:2007.06850. [BibTeX]  [PDF]
Marc Serramia,  Maite Lopez-Sanchez,  & Juan A. Rodríguez-Aguilar (2020). A Qualitative Approach to Composing Value-Aligned Norm Systems. Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1233--1241). [BibTeX]  [PDF]
Francisco Salas-Molina,  Juan A. Rodríguez-Aguilar,  & David Pla-Santamaria (2020). A stochastic goal programming model to derive stable cash management policies. Journal of Global Optimization, 76, 333--346. [BibTeX]  [PDF]
Manel Rodríguez Soto,  Maite López-Sánchez,  & Juan A. Rodríguez-Aguilar (2020). A Structural Solution to Sequential Moral Dilemmas. Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1152--1160). [BibTeX]  [PDF]
Anna Puig,  Inmaculada Rodríguez,  Josep Ll Arcos,  Juan A. Rodríguez-Aguilar,  Sergi Cebrián,  Anton Bogdanovych,  Núria Morera,  Antoni Palomo,  & Raquel Piqué (2020). Lessons learned from supplementing archaeological museum exhibitions with virtual reality. Virtual Reality, 24, 343--358. [BibTeX]  [PDF]
Antoni Perello-Moragues,  Pablo Noriega,  Lucia Alexandra Popartan,  & Manel Poch (2020). Modelling Policy Shift Advocacy. Mario Paolucci, Jaime Simão Sichman, & Harko Verhagen (Eds.), Multi-Agent-Based Simulation XX (pp. 55--68). Springer International Publishing. [BibTeX]  [PDF]
Nardine Osman,  Carles Sierra,  Ronald Chenu-Abente,  Qiang Shen,  & Fausto Giunchiglia (2020). Open Social Systems. Nick Bassiliades, Georgios Chalkiadakis, & Dave Jonge (Eds.), Multi-Agent Systems and Agreement Technologies (pp. 132--142). Springer International Publishing. [BibTeX]  [PDF]
Filippo Bistaffa,  Juan A. Rodríguez-Aguilar,  & Jesús Cerquides (2020). Predicting Requests in Large-Scale Online P2P Ridesharing. arXiv preprint arXiv:2009.02997. [BibTeX]  [PDF]
Dave de Jonge,  & Dongmo Zhang (2020). Strategic negotiations for extensive-form games. Autonomous Agents and Multi-Agent Systems, 34. [BibTeX]  [PDF]
Athina Georgara,  Carles Sierra,  & Juan A. Rodríguez-Aguilar (2020). TAIP: an anytime algorithm for allocating student teams to internship programs. arXiv preprint arXiv:2005.09331. [BibTeX]  [PDF]
Jesús Vega,  M. Ceballos,  Josep Puyol-Gruart,  Pere García,  B. Cobo,  & F. J. Carrera (2020). TES X-ray pulse identification using CNNs. ADASS XXX . [BibTeX]  [PDF]
Marta Poblet,  & Carles Sierra (2020). Understanding Help as a Commons. International Journal of the Commons, 14, 281--493. [BibTeX]  [PDF]
  • Council of the European Union Report: Presidency conclusions on the charter of fundamental rights in the context of artificial intelligence and digital change (Report)
  • IIIA's workshop on "Value-Driven Adaptive Norms", presented @ the Future Tech Week 2019, on 26 September 2019. (YouTube, Slides)
  • Barcelona declaration for the proper development and usage of artificial intelligence in Europe (Declaration, Videos)