Ethics and AI

The research theme on ethics and AI aims to address a number of the ethical challenges raised by AI, and to provide the first building blocks towards the development of AI systems that adhere to our values and requirements. We propose novel computational methods and tools, underpinned by multidisciplinary research, that help humans and machines understand each other's evolving goals while strictly abiding by the values that inspire our societies.

Contact: Nardine Osman


In the past few years, many initiatives have arisen to address the issues of ethics and AI. Some were led by major technology companies, others by leading scientists in the relevant fields, from philosophy to AI. Amongst these is the IIIA-led Barcelona declaration for the proper development and usage of artificial intelligence in Europe. All of these initiatives share a number of challenging ethical concerns, ranging from explainability, transparency, and accountability to value alignment, human control, and shared benefit.

At IIIA, we aim to address some of these ethical challenges. On the one hand, we focus on the development of AI systems that adhere to our values and requirements. On the other hand, we focus on the impact of AI systems on human interactions (with each other and with the AI system itself), to ensure we promote ethical interactions and collaborations.

Our work on ethics and AI is underpinned by strong multidisciplinary research. Determining what system behaviour is deemed ethical and legal is key here. As such, the social sciences and humanities (from philosophy and law to the social and cognitive sciences) lie at the heart of any research on ethics and AI. We argue that these fields should not only provide insights for AI research, but should actively and collaboratively participate in directing and advancing it.

In what follows, we present the principles underpinning IIIA's work on ethics and AI.

AI systems must be driven by people's needs and values, and evolve as those needs and values evolve.

This ensures AI systems work for our shared benefit, while adhering to our values.

The governance of AI must be democratised.

This gives people control over their AI systems, so they can have a say in how their technology should or should not behave. This demands not only novel democratic, interactive initiatives, but also careful input from the fields of ethics and law to help assess the dynamics between what people want, what is ethical, and what is legal.

A humanist perspective is necessary for ethical human-machine collaboration. 

It is important to nourish a humanist perspective that situates AI within the larger human phenomenon we call 'intelligence' as it arises from the shared interrelations and interactions of humans, computational systems, and the environment.

A sample of our research interests is presented below.

Agreement Technologies

Giving humans a say in how their technologies function implies allowing them to collectively agree on such issues. Research areas like argumentation, negotiation, trust and reputation, computational social choice, and semantic alignment all provide means to support peers (humans and software agents) in collaboratively reaching agreements. We argue that agreements should be value-driven. As such, introducing values into the research areas of agreement technologies is of utmost interest, as illustrated in the sketch below.
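
As a minimal illustration of how computational social choice can support such collective agreements, the following Python sketch aggregates participants' rankings of candidate norms with a Borda-style count. The norm names and rankings are invented for the example and do not correspond to any particular IIIA system or method.

```python
# Hypothetical sketch: participants rank candidate norms and a Borda-style
# aggregation produces a collective ordering. All names are illustrative.
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate ordered preference lists into a single collective ranking."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, norm in enumerate(ranking):
            scores[norm] += n - 1 - position  # top choice earns the most points
    return sorted(scores, key=scores.get, reverse=True)

# Each participant (human or software agent) ranks three candidate norms.
rankings = [
    ["anonymise-data", "moderate-content", "share-by-default"],
    ["moderate-content", "anonymise-data", "share-by-default"],
    ["anonymise-data", "share-by-default", "moderate-content"],
]
print(borda_aggregate(rankings))  # ['anonymise-data', 'moderate-content', 'share-by-default']
```

In practice, the aggregation method itself is a design choice that should reflect the community's values, for instance favouring broad consensus over simple point counting.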

Learning & Reasoning

Learning when a system is not fulfilling its goals can help signal a need for change. Learning which norms better adhere to certain values, or which norms best suit a given community of peers, can support the decision-making process of how a system must change or evolve, as well as what direction this change should take. We envision learning and reasoning mechanisms that support humans' decision processes, as sketched below.
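
The following minimal sketch, with invented value weights and norm effects, illustrates one simple way such reasoning could look: each candidate norm is scored by a weighted sum of its estimated effects on the community's values, and the best-aligned norm is selected. In practice, these weights and effects would be learned from observed system behaviour or elicited from the community rather than fixed by hand.

```python
# Hypothetical sketch: choose the candidate norm whose estimated effect on a
# set of values is highest. All names and numbers are illustrative only.
values = {"privacy": 0.5, "fairness": 0.3, "transparency": 0.2}  # relative importance

# Estimated degree (in [-1, 1]) to which each norm promotes (+) or demotes (-) each value.
norm_effects = {
    "anonymise-data":   {"privacy": 0.9, "fairness": 0.1, "transparency": -0.2},
    "share-by-default": {"privacy": -0.6, "fairness": 0.2, "transparency": 0.8},
}

def alignment(effects, values):
    """Weighted sum of a norm's effects on the values it touches."""
    return sum(values[v] * effects.get(v, 0.0) for v in values)

best = max(norm_effects, key=lambda n: alignment(norm_effects[n], values))
print(best)  # 'anonymise-data' under these illustrative weights
```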

Natural Language Processing

Having humans collectively agree on the needs and values that drive and govern their technologies implies: 1) having humans discuss and argue about these needs and values, and 2) having the system engage in the humans' agreement process and understand the final decisions. Since we cannot expect humans to be proficient in the formal languages used for specifying needs and values, nor to hold their discussions in a formal language, natural language processing becomes key for humans and machines to understand each other.

Norms & Normative Systems

Behaviour is what ensures needs are fulfilled and values are adhered to, and norms are what govern behaviour. Normative systems have traditionally been used in multiagent systems to mediate behaviour. We follow in those footsteps and propose using normative systems as a means of mediating behaviour. We are especially interested in the relation between values and norms, and in ensuring (and sometimes formally verifying) that a normative system adheres to predefined values. A toy illustration follows.
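
As a toy illustration (all action names are hypothetical), a normative system can be represented as sets of prohibited and obligatory actions, with simple checks mediating behaviour; verifying value adherence would then amount to checking properties of these sets against formalised values.

```python
# Hypothetical sketch: a normative system as sets of prohibitions and
# obligations, plus a compliance check that mediates proposed actions.
from dataclasses import dataclass, field

@dataclass
class NormativeSystem:
    prohibited: set = field(default_factory=set)   # actions that must not occur
    obligatory: set = field(default_factory=set)   # actions that must occur

    def permits(self, action: str) -> bool:
        return action not in self.prohibited

    def violations(self, performed: set) -> set:
        """Norm violations given the actions actually performed."""
        return (performed & self.prohibited) | (self.obligatory - performed)

ns = NormativeSystem(prohibited={"share-raw-data"}, obligatory={"log-decision"})
print(ns.permits("share-raw-data"))       # False
print(ns.violations({"share-raw-data"}))  # {'share-raw-data', 'log-decision'}
```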

Ethics

Defining values formally is a challenge on its own. Embedding values in AI systems and verifying a system's adherence to a set of values is an even bigger challenge. All of this requires careful, close-knit collaboration with the field of ethics. Furthermore, while the need for human control has been argued as one of the principles of ethical AI, we cannot ignore the fact that humans may indeed reach "wrongful agreements", that is, unethical or illegal agreements. A careful analysis is therefore required to address such issues. Understanding the dynamics between what is required by humans, what is deemed ethical, and what is legal may be key for the development of "ethical" AI systems.

Legal Studies

Legal norms are used in AI and law to support reasoning about legal statements, and legal systems can be implemented as normative systems. As such, collaboration with the field of legal studies is imperative. Furthermore, assessing the consequences and implications of giving the machine executive and judiciary powers (especially where the system automatically adapts and evolves) requires careful, close-knit collaboration with legal scholars. And as with the field of ethics, the legal perspective on dealing with "wrongful agreements" is also necessary.

Pilar Dellunde
Adjunct Scientist
Phone Ext. 239

Ramon Lopez de Mantaras
Research Professor
Phone Ext. 258

Maite López-Sánchez
Tenured University Lecturer
Phone Ext. 242

Pablo Noriega
Tenured Scientist
Phone Ext. 246

Nardine Osman
Tenured Scientist
Phone Ext. 245

Juan A. Rodríguez-Aguilar
Research Professor
Phone Ext. 218

Marco Schorlemmer
Tenured Scientist
Phone Ext. 203

Carles Sierra
Research Professor
Phone Ext. 231

2021
Filippo Bistaffa,  Christian Blum,  Jesús Cerquides,  Alessandro Farinelli,  & Juan A. Rodríguez-Aguilar (2021). A Computational Approach to Quantify the Benefits of Ridesharing for Policy Makers and Travellers. IEEE Transactions on Intelligent Transportation Systems, 22, 119-130. https://doi.org/10.1109/TITS.2019.2954982. [BibTeX]  [PDF]
Marco Schorlemmer,  & Enric Plaza (2021). A Uniform Model of Computational Conceptual Blending. Cognitive Systems Research, 65, 118--137. https://doi.org/10.1016/j.cogsys.2020.10.003. [BibTeX]  [PDF]
F. A. Farinelli (2021). Efficient Coalition Structure Generation via Approximately Equivalent Induced Subgraph Games. IEEE Transactions on Cybernetics, 1-11. https://doi.org/10.1109/TCYB.2020.3040622. [BibTeX]  [PDF]
Jesús Cerquides,  Juan A. Rodríguez-Aguilar,  Rémi Emonet,  & Gauthier Picard (2021). Solving Highly Cyclic Distributed Optimization Problems Without Busting the Bank: A Decimation-based Approach. Logic Journal of the IGPL, 29, 72-95. https://doi.org/10.1093/jigpal/jzaa069. [BibTeX]
Athina Georgara,  Juan A. Rodríguez-Aguilar,  & Carles Sierra (2021). Towards a Competence-Based Approach to Allocate Teams to Tasks. Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems. [BibTeX]  [PDF]
Nieves Montes (2021). Value Engineering for Autonomous Agents -- Position Paper. [BibTeX]  [PDF]
Nieves Montes,  & Carles Sierra (2021). Value-Guided Synthesis of Parametric Normative Systems. Proc. of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021) . IFAAMAS. [BibTeX]  [PDF]
2020
Jordi Ganzer,  Natalia Criado,  Maite Lopez-Sanchez,  Simon Parsons,  & Juan A. Rodríguez-Aguilar (2020). A model to support collective reasoning: Formalization, analysis and computational assessment. arXiv preprint arXiv:2007.06850. [BibTeX]  [PDF]
Marc Serramia,  Maite Lopez-Sanchez,  & Juan A. Rodríguez-Aguilar (2020). A Qualitative Approach to Composing Value-Aligned Norm Systems. Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1233--1241). [BibTeX]  [PDF]
Jerónimo Hernández-González,  & Jesús Cerquides (2020). A Robust Solution to Variational Importance Sampling of Minimum Variance. Entropy, 22, 1405. https://doi.org/10.3390/e22121405. [BibTeX]  [PDF]
Francisco Salas-Molina,  Juan A. Rodríguez-Aguilar,  & David Pla-Santamaria (2020). A stochastic goal programming model to derive stable cash management policies. Journal of Global Optimization, 76, 333--346. [BibTeX]  [PDF]
Manel Rodríguez Soto,  Maite López-Sánchez,  & Juan A. Rodríguez-Aguilar (2020). A Structural Solution to Sequential Moral Dilemmas. Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1152--1160). [BibTeX]  [PDF]
Anna Puig,  Inmaculada Rodríguez,  Josep Ll Arcos,  Juan A. Rodríguez-Aguilar,  Sergi Cebrián,  Anton Bogdanovych,  Núria Morera,  Antoni Palomo,  & Raquel Piqué (2020). Lessons learned from supplementing archaeological museum exhibitions with virtual reality. Virtual Reality, 24, 343--358. [BibTeX]  [PDF]
Nardine Osman,  Carles Sierra,  Ronald Chenu-Abente,  Qiang Shen,  & Fausto Giunchiglia (2020). Open Social Systems. Nick Bassiliades, Georgios Chalkiadakis, & Dave de Jonge (Eds.), Multi-Agent Systems and Agreement Technologies (pp. 132--142). Springer International Publishing. [BibTeX]  [PDF]
Filippo Bistaffa,  Juan A. Rodríguez-Aguilar,  & Jesús Cerquides (2020). Predicting Requests in Large-Scale Online P2P Ridesharing. arXiv preprint arXiv:2009.02997. [BibTeX]  [PDF]
Dave de Jonge,  & Dongmo Zhang (2020). Strategic negotiations for extensive-form games. Autonomous Agents and Multi-Agent Systems, 34. https://doi.org/10.1007/s10458-019-09424-y. [BibTeX]  [PDF]
Athina Georgara,  Carles Sierra,  & Juan A. Rodríguez-Aguilar (2020). TAIP: an anytime algorithm for allocating student teams to internship programs. arXiv preprint arXiv:2005.09331. [BibTeX]  [PDF]
Jesús Vega,  M. Ceballos,  Josep Puyol-Gruart,  Pere García,  B. Cobo,  & F. J. Carrera (2020). TES X-ray pulse identification using CNNs. ADASS XXX . [BibTeX]  [PDF]
Marta Poblet,  & Carles Sierra (2020). Understanding Help as a Commons. International Journal of the Commons, 14, 281--493. https://doi.org/10.5334/ijc.1029. [BibTeX]  [PDF]
Antoni Perello-Moragues,  & Pablo Noriega (2020). Using Agent-Based Simulation to Understand the Role of Values in Policy-Making. Harko Verhagen, Melania Borit, Giangiacomo Bravo, & Nanda Wijermans (Eds.), Advances in Social Simulation (pp. 355--369). Springer International Publishing. [BibTeX]
Nieves Montes,  & Carles Sierra (2020). Value Alignment Equilibrium in Multiagent Systems. 1st TAILOR workshop at the 24th European Conference on Artificial Intelligence. [BibTeX]  [PDF]
Paula Chocron,  & Marco Schorlemmer (2020). Vocabulary Alignment in Openly Specified Interactions. Journal of Artificial Intelligence Research, 68, 69--107. https://doi.org/10.1613/jair.1.11497. [BibTeX]
2019
Mariela Morveli Espinoza,  J.C. Nieves,  A. Possebom,  Josep Puyol-Gruart,  & C.A. Tacla (2019). An argumentation-based approach for identifying and dealing with incompatibilities among procedural goals. International Journal of Approximate Reasoning, 105, 1 - 26. https://doi.org/10.1016/j.ijar.2018.10.015. [BibTeX]
Mariela Morveli Espinoza,  Ayslan Trevizan Possebom,  Josep Puyol-Gruart,  & C.A. Tacla (2019). Argumentation-based intention formation process. DYNA, 86, 82 - 91. http://www.scielo.org.co/scielo.php?script=sci_arttext&pid=S0012-73532019000100082&nrm=iso. [BibTeX]
Francisco Salas-Molina,  Juan A. Rodríguez-Aguilar,  & David Pla-Santamaria (2019). Characterizing compromise solutions for investors with uncertain risk preferences. Operational Research, 19, 661--677. [BibTeX]  [PDF]
Marc Serramia,  Jordi Ganzer-Ripoll,  Maite López-Sánchez,  Juan A. Rodríguez-Aguilar,  Natalia Criado,  Simon Parsons,  Patricio Escobar,  & Marc Fernández (2019). Citizen Support Aggregation Methods for Participatory Platforms.. CCIA (pp. 9--18). [BibTeX]
Jordi Ganzer-Ripoll,  Natalia Criado,  Maite Lopez-Sanchez,  Simon Parsons,  & Juan A. Rodríguez-Aguilar (2019). Combining social choice theory and argumentation: Enabling collective decision making. Group Decision and Negotiation, 28, 127--173. [BibTeX]  [PDF]
Karla Trejo,  Pere García,  & Josep Puyol-Gruart (2019). Metadata Generation for Multi-Text Classification in Structured Data. Artificial Intelligence Research and Development (pp. 417-421). [BibTeX]
Antoni Perello-Moragues,  Pablo Noriega,  & Manel Poch (2019). Modelling Contingent Technology Adoption in Farming Irrigation Communities. Journal of Artificial Societies and Social Simulation, 22, 1. https://doi.org/10.18564/jasss.4100. [BibTeX]  [PDF]
Francisco Salas-Molina,  Juan A. Rodríguez-Aguilar,  David Pla-Santamaria,  & Ana García-Bernabeu (2019). On the formal foundations of cash management systems. Operational Research, 1--15. [BibTeX]  [PDF]
Inmaculada Rodriguez,  Anna Puig,  Juan A. Rodríguez-Aguilar,  Josep Lluis Arcos,  Sergi Cebrián,  Anton Bogdanovych,  Núria Morera,  Raquel Piqué,  & Antoni Palomo (2019). On the Relationship between Subjective and Objective Measures of Virtual Reality Experiences: a Case Study of a Serious Game. International Symposium on Gamification and Games for Learning (GamiLearn 2019) . CEUR-WS.org. [BibTeX]
Francisco Salas-Molina,  Juan A. Rodríguez-Aguilar,  & David Pla-Santamaria (2019). On the use of multiple criteria distance indexes to find robust cash management policies. INFOR: Information Systems and Operational Research, 57, 345-360. https://doi.org/10.1080/03155986.2017.1282291. [BibTeX]
Marc Serramia,  Maite López-Sánchez,  Juan A. Rodríguez-Aguilar,  & Patricio Escobar (2019). Optimising Participatory Budget Allocation: The Decidim Use Case. Artificial Intelligence Research and Development (pp. 193-202). IOS Press. https://doi.org/10.3233/FAIA190124. [BibTeX]
Juan Carlos Teze,  Antoni Perelló-Moragues,  Lluís Godo,  & Pablo Noriega (2019). Practical reasoning using values: an argumentative approach based on a hierarchy of values. Annals of Mathematics and Artificial Intelligence, 293-319. https://doi.org/10.1007/s10472-019-09660-8. [BibTeX]  [PDF]
Ewa Andrejczuk,  Filippo Bistaffa,  Christian Blum,  Juan A. Rodríguez-Aguilar,  & Carles Sierra (2019). Synergistic team composition: A computational approach to foster diversity in teams. Knowledge-Based Systems, 182. https://doi.org/10.1016/j.knosys.2019.06.007. [BibTeX]
Dave de Jonge,  Tim Baarslag,  Reyhan Aydoğan,  Catholijn Jonker,  Katsuhide Fujita,  & Takayuki Ito (2019). The Challenge of Negotiation in the Game of Diplomacy. Marin Lujak (Ed.), Agreement Technologies 2018, Revised Selected Papers (pp. 100-114). Springer International Publishing. [BibTeX]
Carles Sierra,  Nardine Osman,  Pablo Noriega,  Jordi Sabater-Mir,  & Antoni Perello-Moragues (2019). Value alignment: A formal approach. Responsible Artificial Intelligence Agents Workshop (RAIA) in AAMAS 2019 . [BibTeX]  [PDF]
2018
Manfred Eppe,  Ewen Maclean,  Roberto Confalonieri,  Oliver Kutz,  Marco Schorlemmer,  Enric Plaza,  & Kai-Uwe Kühnberger (2018). A computational framework for conceptual blending. Artificial Intelligence, 256, 105-129. [BibTeX]
Filippo Bistaffa,  & Alessandro Farinelli (2018). A COP Model For Graph-Constrained Coalition Formation. Journal of Artificial Intelligence Research, 62, 133-153. https://doi.org/10.1613/jair.1.11205. [BibTeX]
Filippo Bistaffa,  & Alessandro Farinelli (2018). A COP Model for Graph-Constrained Coalition Formation (Extended Abstract). International Joint Conference on Artificial Intelligence (IJCAI-ECAI 2018) (pp. 5553-5557). AAAI Press. https://doi.org/10.24963/ijcai.2018/783. [BibTeX]
  • Council of the European Union Report: Presidency conclusions on the charter of fundamental rights in the context of artificial intelligence and digital change (Report)
  • IIIA's workshop on "Value-Driven Adaptive Norms", presented at the Future Tech Week 2019, on 26 September 2019. (YouTube, Slides)
  • Barcelona declaration for the proper development and usage of artificial intelligence in Europe (Declaration, Videos)