Ethics and AI

The research theme on ethics and AI aims to address a number of the ethical challenges raised by AI, and to provide the first building blocks towards the development of AI systems that adhere to our values and requirements. We propose novel computational methods and tools, underpinned by multidisciplinary research, that help humans and machines understand each other's evolving goals while strictly abiding by the values that inspire our societies.

Contact: Nardine Osman


In the past few years, many initiatives have arisen to address the issues of ethics and AI. Some were led by major technology companies, others by leading scientists in the relevant fields, from philosophy to AI. Amongst them is the IIIA-led Barcelona declaration for the proper development and usage of artificial intelligence in Europe. All these initiatives share a number of challenging ethical concerns, ranging from explainability, transparency, and accountability to value alignment, human control, and shared benefit.

At IIIA, we aim to address some of these ethical challenges. On the one hand, we focus on the development of AI systems that adhere to our values and requirements. On the other hand, we focus on the impact of AI systems on human interactions (whether among humans or with the AI system itself) to ensure we are promoting ethical interactions and collaborations.

Our work on ethics and AI is underpinned by strong multidisciplinary research. For example, determining what system behaviour is deemed ethical and legal is a key question here. As such, the social sciences and humanities (from philosophy and law to the social and cognitive sciences) lie at the heart of any research on ethics and AI. We argue that these fields should not only provide insights for AI research, but should also actively and collaboratively participate in directing and advancing it.

In what follows, we present the principles underpinning IIIA's work on ethics and AI.

AI systems must be driven by people’s needs and values, and evolve as those needs and values change.

This ensures AI systems work towards our shared benefit, while adhering to our values.

The governance of AI must be democratised.

This gives people control over their AI systems, so they can have a say in how their technology should or should not behave. It demands not only novel democratic, interactive initiatives, but also careful input from the fields of ethics and law to help assess the dynamics between what people want, what is ethical, and what is legal.

A humanist perspective is necessary for ethical human-machine collaboration. 

It is important to nurture a humanist perspective that situates AI within the larger human phenomenon we call 'intelligence', as it arises from the shared interrelations and interactions of humans, computational systems, and the environment.

A sample of our research interests is presented below.

Agreement Technologies

Giving humans a say in how their technologies function implies allowing them to collectively agree on such issues. Research areas like argumentation, negotiation, trust and reputation, computational social choice, and semantic alignment all provide means of helping peers (humans and software agents) collaboratively reach agreements. We argue that agreements should be value-driven. As such, introducing values into the research areas of agreement technologies is of utmost interest.
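
As a minimal illustration, the following sketch (in Python, with hypothetical norm names, peers, and value scores throughout) aggregates peers' support for candidate norms while weighting each approval by how well the norm promotes the values that peer holds. It is a toy, value-weighted variant of approval voting under stated assumptions, not a specific IIIA algorithm.

from collections import defaultdict

# How strongly each candidate norm is judged to promote each value (0..1).
# All norms and scores below are illustrative assumptions.
norm_value_profile = {
    "require_content_moderation": {"safety": 0.9, "privacy": 0.4},
    "share_all_user_data": {"safety": 0.2, "privacy": 0.1},
}

# Each peer approves a set of norms and declares the values they care about.
peers = [
    {"approves": {"require_content_moderation"}, "values": {"safety": 1.0}},
    {"approves": {"require_content_moderation", "share_all_user_data"},
     "values": {"privacy": 1.0}},
]

def value_weighted_support(norms, voters):
    """Score each norm: sum of approvals, each weighted by value alignment."""
    scores = defaultdict(float)
    for peer in voters:
        for norm in peer["approves"]:
            profile = norms[norm]
            # Weight: how well this norm promotes the values the peer holds.
            weight = sum(w * profile.get(v, 0.0) for v, w in peer["values"].items())
            scores[norm] += weight
    return dict(scores)

print(value_weighted_support(norm_value_profile, peers))
# e.g. {'require_content_moderation': 1.3, 'share_all_user_data': 0.1}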

Learning & Reasoning

Learning when a system is not fulfilling its goals can help signal a need for change. Learning which norms better adhere to certain values, or which norms best suit a given community of peers, can support the decision-making process of how a system must change or evolve, as well as what direction this change should take. We envision learning and reasoning mechanisms that support humans' decision processes.
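
As a minimal sketch of this idea, the Python snippet below keeps a running estimate of how well each candidate norm supports a target value, based on observed interaction outcomes, and flags norms that should be revisited by the community. The norm names, feedback scores, and thresholds are illustrative assumptions, not a description of an existing IIIA system.

class NormAlignmentTracker:
    """Running estimate of how value-aligned behaviour is under each norm."""

    def __init__(self, norms):
        self.stats = {n: {"sum": 0.0, "count": 0} for n in norms}

    def observe(self, norm, alignment_score):
        # alignment_score in [0, 1]: how well one interaction under `norm`
        # adhered to the target value (the source of this feedback is assumed).
        self.stats[norm]["sum"] += alignment_score
        self.stats[norm]["count"] += 1

    def estimate(self, norm):
        s = self.stats[norm]
        return s["sum"] / s["count"] if s["count"] else None

    def needs_revision(self, norm, threshold=0.5, min_obs=10):
        # Signal that the community should reconsider this norm.
        est = self.estimate(norm)
        return self.stats[norm]["count"] >= min_obs and est < threshold

tracker = NormAlignmentTracker(["no_personal_attacks", "mandatory_real_names"])
for score in (0.9, 0.8, 0.85):
    tracker.observe("no_personal_attacks", score)
print(tracker.estimate("no_personal_attacks"))        # 0.85
print(tracker.needs_revision("no_personal_attacks"))  # False (too few observations)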

Natural Language Processing

Having humans collectively agree on the needs and values that drive and govern their technologies implies: 1) having humans discuss and argue about these needs and values, and 2) having the system engage in the humans' agreement process and understand the final decisions. Since we cannot expect humans either to be proficient in the formal languages used for specifying needs and values or to hold their discussions in a formal language, natural language processing becomes key for human and machine to understand each other.
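
As a toy illustration of that last step, the sketch below maps a natural-language community decision onto a simple formal norm representation (a deontic modality plus an action). A real pipeline would rely on proper natural language processing models; the regex patterns and the norm schema here are illustrative assumptions only.

import re

# Order matters: prohibitions must be checked before obligations.
PATTERNS = [
    (re.compile(r"members (?:must not|may not|cannot) (.+)", re.I), "PROHIBITED"),
    (re.compile(r"members (?:must|should|shall) (.+)", re.I), "OBLIGED"),
    (re.compile(r"members (?:may|can) (.+)", re.I), "PERMITTED"),
]

def extract_norm(decision_text):
    """Return a (modality, action) pair, or None if nothing matches."""
    for pattern, modality in PATTERNS:
        match = pattern.search(decision_text)
        if match:
            return modality, match.group(1).rstrip(".")
    return None

print(extract_norm("We agreed that members must not share others' photos without consent."))
# ('PROHIBITED', "share others' photos without consent")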

Norms & Normative Systems

Behaviour is what ensures needs are fulfilled and values are adhered to, and norms are what govern behaviour. Normative systems have traditionally been used in multiagent systems to mediate behaviour. We follow in those footsteps and propose using normative systems as a means of mediating behaviour. We are especially interested in the relation between values and norms, and in ensuring (and sometimes formally verifying) that a normative system adheres to predefined values.
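
The sketch below gives a minimal flavour of such a check: every norm in a hypothetical normative system carries hand-written estimates of its impact on a set of predefined values, and the checker flags any norm whose impact on a protected value falls below an acceptable minimum. Real verification would reason over the behaviours the norms induce rather than over such annotations; all names and numbers are assumptions.

# Estimated impact of each norm on each value, in [-1, 1] (illustrative).
normative_system = {
    "log_all_private_messages": {"privacy": -0.8, "safety": 0.3},
    "allow_anonymous_reporting": {"privacy": 0.6, "safety": 0.5},
}

# Minimum acceptable impact on each protected value.
protected_values = {"privacy": 0.0, "safety": 0.0}

def violations(system, values):
    """Return (norm, value, impact) triples that fall below the minimum."""
    return [
        (norm, value, impacts.get(value, 0.0))
        for norm, impacts in system.items()
        for value, minimum in values.items()
        if impacts.get(value, 0.0) < minimum
    ]

print(violations(normative_system, protected_values))
# [('log_all_private_messages', 'privacy', -0.8)]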

Ethics

Defining values formally is a challenge on its own. Embedding values in AI systems and verifying a system's adherence to a set of values is an even bigger challenge. All of this requires careful, close-knit collaboration with the field of ethics. Furthermore, while the need for human control has been argued as one of the principles of ethical AI, we cannot ignore the fact that humans may indeed reach "wrongful agreements", that is, unethical or illegal agreements. A careful analysis is therefore required to address such issues. Understanding the dynamics between what is required by humans, what is deemed ethical, and what is legal may be key for the development of "ethical" AI systems.

Legal Studies

Legal norms are used in AI and law to support reasoning about legal statements, and legal systems can be implemented as normative systems. As such, collaboration with the field of legal studies is imperative. Furthermore, assessing the consequences and implications of giving the machine executive and judicial powers (especially where the system automatically adapts and evolves) requires careful, close-knit collaboration with legal scholars. As with the field of ethics, a legal perspective on dealing with "wrongful agreements" is also necessary, since understanding the dynamics between what is required by humans, what is deemed ethical, and what is legal may be key for the development of "ethical" AI systems.

Pilar Dellunde
Adjunct Scientist
Phone Ext. 239

Ramon Lopez de Mantaras
Research Professor
Phone Ext. 258

Maite López-Sánchez
Tenured University Lecturer
Phone Ext. 242

Pablo Noriega
Tenured Scientist
Phone Ext. 246

Nardine Osman
Tenured Scientist
Phone Ext. 245

Juan A. Rodríguez-Aguilar
Research Professor
Phone Ext. 218

Marco Schorlemmer
Tenured Scientist
Phone Ext. 203

Carles Sierra
Research Professor
Phone Ext. 231

In Press
Filippo Bistaffa,  Georgios Chalkiadakis,  & Alessandro Farinelli (In Press). Efficient Coalition Structure Generation via Approximately Equivalent Induced Subgraph Games. IEEE Transactions on Cybernetics. https://doi.org/10.1109/TCYB.2020.3040622. [BibTeX]  [PDF]
Nardine Osman,  Ronald Chenu-Abente,  Qiang Shen,  Carles Sierra,  & Fausto Giunchiglia (In Press). Empowering Users in Online Open Communities. SN Computer Science. [BibTeX]  [PDF]
2021
Dimitra Bourou,  Marco Schorlemmer,  & Enric Plaza (2021). A Cognitively-Inspired Model for Making Sense of Hasse Diagrams. Proc. of the 23rd International Conference of the Catalan Association for Artificial Intelligence (CCIA 2021), October 20-22, Lleida, Catalonia, Spain. [BibTeX]
Filippo Bistaffa,  Christian Blum,  Jesús Cerquides,  Alessandro Farinelli,  & Juan A. Rodríguez-Aguilar (2021). A Computational Approach to Quantify the Benefits of Ridesharing for Policy Makers and Travellers. IEEE Transactions on Intelligent Transportation Systems, 22, 119-130. https://doi.org/10.1109/TITS.2019.2954982. [BibTeX]  [PDF]
Jesus Cerquides,  Oguz Mulayim,  Jerónimo Hernández-González,  Amudha Ravi Shankar,  & Jose Luis Fernandez-Marquez (2021). A Conceptual Probabilistic Framework for Annotation Aggregation of Citizen Science Data. Mathematics, 9. https://doi.org/10.3390/math9080875. [BibTeX]  [PDF]
Jaume Agustí-Cullell,  & Marco Schorlemmer (2021). A Humanist Perspective on Artificial Intelligence. Comprendre, 23, 99--125. [BibTeX]
Marco Schorlemmer,  & Enric Plaza (2021). A Uniform Model of Computational Conceptual Blending. Cognitive Systems Research, 65, 118--137. https://doi.org/10.1016/j.cogsys.2020.10.003. [BibTeX]  [PDF]
Manel Rodríguez Soto,  Maite López-Sánchez,  & Juan A. Rodríguez-Aguilar (2021). Guaranteeing the Learning of Ethical Behaviour through Multi-Objective Reinforcement Learning. Adaptive and Learning Agents Workshop at AAMAS 2021 (ALA 2021). [BibTeX]  [PDF]
Dimitra Bourou,  Marco Schorlemmer,  & Enric Plaza (2021). Image Schemas in Diagrammatic Reasoning: the Case of Hasse Diagrams. Proc. of the 12th Int. Conf. on the Theory and Application of Diagrams (Diagrams 2021), September 28--30, 2021. [BibTeX]
Thiago Freitas Dos Santos,  Nardine Osman,  & Marco Schorlemmer (2021). Learning for Detecting Norm Violation in Online Communities. International Workshop on Coordination, Organizations, Institutions, Norms and Ethics for Governance of Multi-Agent Systems (COINE), co-located with AAMAS 2021. https://arxiv.org/abs/2104.14911. [BibTeX]
Antoni Perello-Moragues,  Manel Poch,  David Sauri,  Lucia Alexandra Popartan,  & Pablo Noriega (2021). Modelling Domestic Water Use in Metropolitan Areas Using Socio-Cognitive Agents. Water, 13. https://doi.org/10.3390/w13081024. [BibTeX]  [PDF]
Dimitra Bourou,  Marco Schorlemmer,  & Enric Plaza (2021). Modelling the Sense-Making of Diagrams Using Image Schemas. Proc. of the 43rd Annual Meeting of the Cognitive Science Society (CogSci 2021), 26--29 July 2021, Vienna, Austria (pp. 1105-1111). [BibTeX]  [PDF]
Manel Rodríguez Soto,  Maite López-Sánchez,  & Juan A. Rodríguez-Aguilar (2021). Multi-Objective Reinforcement Learning for Designing Ethical Environments. Proceedings of the 30th International Joint Conference on Artificial Intelligence, (IJCAI-21) (in press). [BibTeX]  [PDF]
Francisco Salas-Molina,  Juan A. Rodríguez-Aguilar,  David Pla-Santamaria,  & Ana García-Bernabeu (2021). On the formal foundations of cash management systems. Operational Research, 1081--1095. [BibTeX]  [PDF]
Antoni Perello-Moragues,  Pablo Noriega,  Lucia Alexandra Popartan,  & Manel Poch (2021). On Three Ethical Aspects Involved in Using Agent-Based Social Simulation for Policy-Making. Petra Ahrweiler, & Martin Neumann (Eds.), Advances in Social Simulation (pp. 415--427). Springer International Publishing. [BibTeX]  [PDF]
Jesús Cerquides,  Juan A. Rodríguez-Aguilar,  Rémi Emonet,  & Gauthier Picard (2021). Solving Highly Cyclic Distributed Optimization Problems Without Busting the Bank: A Decimation-based Approach. Logic Journal of the IGPL, 29, 72-95. https://doi.org/10.1093/jigpal/jzaa069. [BibTeX]
Athina Georgara,  Juan A. Rodríguez-Aguilar,  & Carles Sierra (2021). Towards a Competence-Based Approach to Allocate Teams to Tasks. Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1504–1506). International Foundation for Autonomous Agents and Multiagent Systems. [BibTeX]  [PDF]
Nieves Montes,  & Carles Sierra (2021). Value-Alignment Equilibrium in Multiagent Systems. Fredrik Heintz, Michela Milano, & Barry O'Sullivan (Eds.), Trustworthy AI - Integrating Learning, Optimization and Reasoning (pp. 189--204). Springer International Publishing. https://doi.org/10.1007/978-3-030-73959-1_17. [BibTeX]  [PDF]
Nieves Montes (2021). Value Engineering for Autonomous Agents -- Position Paper. [BibTeX]  [PDF]
Nieves Montes,  & Carles Sierra (2021). Value-Guided Synthesis of Parametric Normative Systems. Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems (pp. 907–915). International Foundation for Autonomous Agents and Multiagent Systems. https://dl.acm.org/doi/10.5555/3463952.3464060. [BibTeX]  [PDF]
2020
Jordi Ganzer,  Natalia Criado,  Maite Lopez-Sanchez,  Simon Parsons,  & Juan A. Rodríguez-Aguilar (2020). A model to support collective reasoning: Formalization, analysis and computational assessment. arXiv preprint arXiv:2007.06850. [BibTeX]  [PDF]
Marc Serramia,  Maite Lopez-Sanchez,  & Juan A. Rodríguez-Aguilar (2020). A Qualitative Approach to Composing Value-Aligned Norm Systems. Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1233--1241). [BibTeX]  [PDF]
Jerónimo Hernández-González,  & Jesús Cerquides (2020). A Robust Solution to Variational Importance Sampling of Minimum Variance. Entropy, 22, 1405. https://doi.org/10.3390/e22121405. [BibTeX]  [PDF]
Francisco Salas-Molina,  Juan A. Rodríguez-Aguilar,  & David Pla-Santamaria (2020). A stochastic goal programming model to derive stable cash management policies. Journal of Global Optimization, 76, 333--346. [BibTeX]  [PDF]
Manel Rodríguez Soto,  Maite López-Sánchez,  & Juan A. Rodríguez-Aguilar (2020). A Structural Solution to Sequential Moral Dilemmas. Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1152--1160). [BibTeX]  [PDF]
Anna Puig,  Inmaculada Rodríguez,  Josep Ll Arcos,  Juan A. Rodríguez-Aguilar,  Sergi Cebrián,  Anton Bogdanovych,  Núria Morera,  Antoni Palomo,  & Raquel Piqué (2020). Lessons learned from supplementing archaeological museum exhibitions with virtual reality. Virtual Reality, 24, 343--358. [BibTeX]  [PDF]
Nardine Osman,  Carles Sierra,  Ronald Chenu-Abente,  Qiang Shen,  & Fausto Giunchiglia (2020). Open Social Systems. Nick Bassiliades, Georgios Chalkiadakis, & Dave Jonge (Eds.), Multi-Agent Systems and Agreement Technologies (pp. 132--142). Springer International Publishing. [BibTeX]  [PDF]
Filippo Bistaffa,  Juan A. Rodríguez-Aguilar,  & Jesús Cerquides (2020). Predicting Requests in Large-Scale Online P2P Ridesharing. arXiv preprint arXiv:2009.02997. [BibTeX]  [PDF]
Dave de Jonge,  & Dongmo Zhang (2020). Strategic negotiations for extensive-form games. Autonomous Agents and Multi-Agent Systems, 34. https://doi.org/10.1007/s10458-019-09424-y. [BibTeX]  [PDF]
Athina Georgara,  Carles Sierra,  & Juan A. Rodríguez-Aguilar (2020). TAIP: an anytime algorithm for allocating student teams to internship programs. arXiv preprint arXiv:2005.09331. [BibTeX]  [PDF]
Jesús Vega,  M. Ceballos,  Josep Puyol-Gruart,  Pere García,  B. Cobo,  & F. J. Carrera (2020). TES X-ray pulse identification using CNNs. ADASS XXX. [BibTeX]  [PDF]
Marta Poblet,  & Carles Sierra (2020). Understanding Help as a Commons. International Journal of the Commons, 14, 281--493. https://doi.org/10.5334/ijc.1029. [BibTeX]  [PDF]
Antoni Perello-Moragues,  & Pablo Noriega (2020). Using Agent-Based Simulation to Understand the Role of Values in Policy-Making. Harko Verhagen, Melania Borit, Giangiacomo Bravo, & Nanda Wijermans (Eds.), Advances in Social Simulation (pp. 355--369). Springer International Publishing. [BibTeX]
Paula Chocron,  & Marco Schorlemmer (2020). Vocabulary Alignment in Openly Specified Interactions. Journal of Artificial Intelligence Research, 68, 69--107. https://doi.org/10.1613/jair.1.11497. [BibTeX]
2019
Mariela Morveli Espinoza,  J.C. Nieves,  A. Possebom,  Josep Puyol-Gruart,  & C.A. Tacla (2019). An argumentation-based approach for identifying and dealing with incompatibilities among procedural goals. International Journal of Approximate Reasoning, 105, 1 - 26. https://doi.org/10.1016/j.ijar.2018.10.015. [BibTeX]
Mariela Morveli Espinoza,  Ayslan Trevizan Possebom,  Josep Puyol-Gruart,  & C.A Tacla (2019). Argumentation-based intention formation process. DYNA, 86, 82 - 91. http://www.scielo.org.co/scielo.php?script=sci_arttext&pid=S0012-73532019000100082&nrm=iso. [BibTeX]
Francisco Salas-Molina,  Juan A. Rodríguez-Aguilar,  & David Pla-Santamaria (2019). Characterizing compromise solutions for investors with uncertain risk preferences. Operational Research, 19, 661--677. [BibTeX]  [PDF]
Marc Serramia,  Jordi Ganzer-Ripoll,  Maite López-Sánchez,  Juan A. Rodríguez-Aguilar,  Natalia Criado,  Simon Parsons,  Patricio Escobar,  & Marc Fernández (2019). Citizen Support Aggregation Methods for Participatory Platforms.. CCIA (pp. 9--18). [BibTeX]
Jordi Ganzer-Ripoll,  Natalia Criado,  Maite Lopez-Sanchez,  Simon Parsons,  & Juan A. Rodríguez-Aguilar (2019). Combining social choice theory and argumentation: Enabling collective decision making. Group Decision and Negotiation, 28, 127--173. [BibTeX]  [PDF]
Karla Trejo,  Pere García,  & Josep Puyol-Gruart (2019). Metadata Generation for Multi-Text Classification in Structured Data. Artificial Intelligence Research and Development (pp. 417-421). [BibTeX]
  • Council of the European Union Report: Presidency conclusions on the charter of fundamental rights in the context of artificial intelligence and digital change (Report)
  • IIIA's workshop on "Value-Driven Adaptive Norms", presented @ the Future Tech Week 2019, on 26 September 2019. (YouTube, Slides)
  • Barcelona declaration for the proper development and usage of artificial intelligence in Europe (Declaration, Videos)