Ethics and AI

The research theme on ethics and AI aims to address a number of the ethical challenges raised by AI, and to provide the first building blocks towards the development of AI systems that adhere to our values and requirements. We propose novel computational methods and tools, underpinned by multidisciplinary research, that help humans and machines understand each other's dynamic goals while strictly abiding by the values that inspire our societies. 

Contact: Nardine Osman


In the past few years, many initiatives have arisen to address the issues of ethics and AI. Some were led by major technology companies, others by leading scientists in the relevant fields, from philosophy to AI. Among these is the IIIA-led Barcelona declaration for the proper development and usage of artificial intelligence in Europe. All these initiatives share a number of challenging ethical concerns, ranging from explainability, transparency, and accountability to value alignment, human control, and shared benefit.

At IIIA, we aim to address some of these ethical challenges. On the one hand, we focus on developing AI systems that adhere to our values and requirements. On the other hand, we examine the impact of AI systems on human interactions (whether with each other or with the AI system itself), to ensure we are promoting ethical interactions and collaborations. 

Our work on ethics and AI is underpinned by strong multidisciplinary research. For example, determining what system behaviour is deemed ethical and legal is key here. As such, the social sciences and humanities (from philosophy and law to the social and cognitive sciences) lie at the heart of any research on ethics and AI. We argue that these fields should not only provide insights for AI research, but should actively and collaboratively participate in directing and advancing it.

In what follows, we present the principles underpinning IIIA's work on ethics and AI.

AI systems must be driven by people’s needs and values, and evolve with those evolving needs and values. 

This ensures AI systems strive for our shared benefit while adhering to our values.

The governance of AI must be democratised.

This gives people control over their AI systems, so they have a say in how their technology should or should not behave. This demands not only novel democratic, interactive initiatives, but also careful input from the fields of both ethics and law to help assess the dynamics between what people want, what is ethical, and what is legal.

A humanist perspective is necessary for ethical human-machine collaboration. 

It is important to nurture a humanist perspective that situates AI within the larger human phenomenon we call 'intelligence', as it arises from the shared interrelations and interactions of humans, computational systems, and the environment.

A sample of our research interests is presented below.

Agreement Technologies

Giving humans a say in how their technologies function implies allowing them to collectively agree on such issues. Research areas like argumentation, negotiation, trust and reputation, computational social choice, and semantic alignment all provide means to support peers (humans and software agents) in collaboratively reaching agreements. We argue that agreements should be value-driven; as such, introducing values into these areas of agreement technologies is of utmost interest.
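As a toy illustration of value-driven agreement, the sketch below combines a basic computational social choice rule (approval voting) with value-alignment weights. All names, weights, and the scoring rule are hypothetical, invented for this example; it is not a description of any specific IIIA system.

```python
# Hypothetical sketch: value-driven norm selection via weighted approval voting.
# Each norm's raw approval count is scaled by how strongly it promotes the
# community's values, so a popular but poorly aligned norm can lose.

def select_norm(candidate_norms, votes, value_weights):
    """Pick the norm with the highest value-weighted support.

    candidate_norms: list of norm identifiers
    votes: dict mapping each voter to the set of norms they approve of
    value_weights: dict mapping each norm to a value-alignment score in [0, 1]
    """
    scores = {}
    for norm in candidate_norms:
        support = sum(1 for approved in votes.values() if norm in approved)
        scores[norm] = support * value_weights.get(norm, 0.0)
    return max(scores, key=scores.get), scores

norms = ["n1_moderate_posts", "n2_ban_reposts"]
votes = {"alice": {"n1_moderate_posts"},
         "bob": {"n1_moderate_posts", "n2_ban_reposts"},
         "carol": {"n2_ban_reposts"}}
weights = {"n1_moderate_posts": 0.9, "n2_ban_reposts": 0.4}

best, scores = select_norm(norms, votes, weights)
print(best)  # both norms have 2 approvals, but n1 is better value-aligned
```

Here both norms tie on raw support, and the value weights break the tie; richer aggregation rules (e.g. rankings over value systems) follow the same pattern.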

Learning & Reasoning

Learning when a system is not fulfilling its goals can help signal a need for change. Learning which norms better adhere to certain values, or which norms best suit a given community of peers, can support the decision-making process of how a system must change or evolve, as well as what direction this change should take. We envision learning and reasoning mechanisms that support humans' decision processes.
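A minimal sketch of this idea, under invented names and thresholds: estimate each norm's fit by averaging observed goal-satisfaction feedback, and flag a need for change when the active norm underperforms. This is a deliberately naive stand-in for the learning mechanisms envisioned above.

```python
from collections import defaultdict

class NormFitLearner:
    """Track how well each norm serves a community's goals (toy model)."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold          # below this, signal a need for change
        self.totals = defaultdict(float)    # sum of feedback per norm
        self.counts = defaultdict(int)      # number of observations per norm

    def observe(self, norm, satisfaction):
        """Record feedback in [0, 1] on how well goals were met under `norm`."""
        self.totals[norm] += satisfaction
        self.counts[norm] += 1

    def fit(self, norm):
        """Average observed satisfaction for `norm` (0.0 if unseen)."""
        return self.totals[norm] / self.counts[norm] if self.counts[norm] else 0.0

    def needs_change(self, active_norm):
        return self.fit(active_norm) < self.threshold

    def best_norm(self):
        return max(self.counts, key=self.fit)

learner = NormFitLearner()
for s in (0.2, 0.3, 0.4):
    learner.observe("strict_moderation", s)
for s in (0.8, 0.9):
    learner.observe("community_review", s)

print(learner.needs_change("strict_moderation"))  # average fit 0.3 < 0.5
print(learner.best_norm())
```

The output of such a learner would feed a human decision process, not replace it: it signals that change is needed and suggests a direction, leaving the choice to the community.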

Natural Language Processing

Having humans collectively agree on the needs and values that drive and govern their technologies implies: 1) having humans discuss and argue about these needs and values, and 2) having the system engage in the humans' agreement process and understand the final decisions. As we can expect humans neither to be proficient in the formal languages used for specifying needs and values nor to hold their discussions in a formal language, natural language processing becomes key for humans and machines to understand each other.
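As a toy illustration of the gap this bridges, the sketch below maps a natural-language norm proposal to a formal, deontic-style triple with a single regular expression. Real systems would need full NLP pipelines; the pattern, the deontic labels, and the sentence schema here are all hypothetical.

```python
import re

# Toy translation from a natural-language norm proposal to a formal triple
# (modality, role, action). Note the alternation order: "must not" is tried
# before "must", otherwise "must" would match first and swallow the negation.
PATTERN = re.compile(r"(?P<role>\w+) (?P<deontic>must not|must|may) (?P<action>.+)")

DEONTIC = {"must": "OBLIGED", "must not": "FORBIDDEN", "may": "PERMITTED"}

def parse_norm(sentence):
    """Return (modality, role, action), or None if the sentence doesn't fit."""
    m = PATTERN.match(sentence.lower().rstrip("."))
    if not m:
        return None
    return (DEONTIC[m.group("deontic")], m.group("role"), m.group("action"))

print(parse_norm("Moderators must review flagged posts."))
print(parse_norm("Users must not spam."))
```

Even this trivial grammar shows the two directions involved: humans keep arguing in natural language, while the system obtains a machine-checkable representation of the decision.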

Norms & Normative Systems

Behaviour is what ensures needs are fulfilled and values are adhered to, and norms are what govern behaviour. Normative systems have traditionally been used in multiagent systems to mediate behaviour. We follow in those footsteps and propose using normative systems as a means of mediating behaviour. We are especially interested in the relation between values and norms, and in ensuring (and sometimes formally verifying) that a normative system adheres to predefined values.

Ethics

Defining values formally is a challenge in its own right. Embedding values in AI systems and verifying a system's adherence to a set of values is an even bigger challenge. All of this requires careful, close-knit collaboration with the field of ethics. Furthermore, while human control has been argued for as one of the principles of ethical AI, we cannot ignore the fact that humans may indeed reach "wrongful agreements", that is, unethical or illegal agreements. A careful analysis is therefore required to address such issues. Understanding the dynamics between what is required by humans, what is deemed ethical, and what is legal may be key to developing "ethical" AI systems.

Legal Studies

Legal norms are used in AI and law to support reasoning about legal statements, and legal systems can be implemented as normative systems. As such, collaboration with the field of legal studies is imperative. Furthermore, assessing the consequences and implications of giving the machine executive and judiciary powers (especially where the system automatically adapts and evolves) requires careful, close-knit collaboration with legal scholars. And as with the field of ethics, the legal perspective on dealing with "wrongful agreements" is also necessary.

Pilar Dellunde
Adjunct Scientist
Phone Ext. 431850

Ramon Lopez de Mantaras
Research Professor
Phone Ext. 431828

Maite López-Sánchez
Tenured University Lecturer
Phone Ext. 431821

Pablo Noriega
Tenured Scientist
Phone Ext. 431829

Nardine Osman
Tenured Scientist
Phone Ext. 431826

Juan A. Rodríguez-Aguilar
Research Professor
Phone Ext. 431861

Marco Schorlemmer
Tenured Scientist
Phone Ext. 431858

Carles Sierra
Research Professor
Phone Ext. 431801

In Press
Filippo Bistaffa,  Georgios Chalkiadakis,  & Alessandro Farinelli (In Press). Efficient Coalition Structure Generation via Approximately Equivalent Induced Subgraph Games. IEEE Transactions on Cybernetics. https://doi.org/10.1109/TCYB.2020.3040622. [BibTeX]  [PDF]
Nardine Osman,  Ronald Chenu-Abente,  Qiang Shen,  Carles Sierra,  & Fausto Giunchiglia (In Press). Empowering Users in Online Open Communities. SN Computer Science. [BibTeX]  [PDF]
Marc Serramia,  Maite López-Sánchez,  & Juan A. Rodríguez-Aguilar (In Press). Value-aligned AI: Lessons learnt from value-aligned norm selection. Digital Society. [BibTeX]  [PDF]
2022
Nieves Montes,  Nardine Osman,  & Carles Sierra (2022). A computational model of Ostrom's Institutional Analysis and Development framework. Artificial Intelligence, 311, 103756. https://doi.org/10.1016/j.artint.2022.103756. [BibTeX]  [PDF]
Tomas Trescak,  Roger Lera-Leri,  Filippo Bistaffa,  & Juan A. Rodríguez-Aguilar (2022). Agent-Assisted Life-Long Education and Learning. Proceedings of the 21st International Conference on Autonomous Agents and MultiAgent Systems . International Foundation for Autonomous Agents and Multiagent Systems. [BibTeX]  [PDF]
Athina Georgara,  Juan A. Rodríguez-Aguilar,  & Carles Sierra (2022). Allocating teams to tasks: an anytime heuristic competence-based approach. Dorothea Baumeister, & Jörg Rothe (Eds.), Multi-Agent Systems - 19th European Conference, EUMAS 2022, Düsseldorf, Germany, September 14-16, 2022, Revised Selected Papers . Springer International Publishing. [BibTeX]  [PDF]
Dave de Jonge (2022). An Analysis of the Linear Bilateral ANAC Domains Using the MiCRO Benchmark Strategy. IJCAI 2022, Vienna, Austria . [BibTeX]  [PDF]
Athina Georgara,  Juan A. Rodríguez-Aguilar,  Carles Sierra,  Ornella Mich,  Raman Kazhamiakin,  Alessio P. Approsio,  & Jean-Christophe Pazzaglia (2022). An Anytime Heuristic Algorithm for Allocating Many Teams to Many Tasks. Proceedings of the 21st International Conference on Autonomous Agents and MultiAgent Systems . International Foundation for Autonomous Agents and Multiagent Systems. [BibTeX]  [PDF]
Athina Georgara,  Juan A. Rodríguez-Aguilar,  & Carles Sierra (2022). Building Contrastive Explanations for Multi-Agent Team Formation. Proceedings of the 21st International Conference on Autonomous Agents and MultiAgent Systems . International Foundation for Autonomous Agents and Multiagent Systems. [BibTeX]  [PDF]
Jesús Vega,  M. T. Ceballos,  B. Cobo,  F. J. Carrera,  Pere Garcia Calvés,  & Josep Puyol-Gruart (2022). Event Detection and Reconstruction Using Neural Networks in TES Devices: a Case Study for Athena/X-IFU. Publications of the Astronomical Society of the Pacific, 134, 024504. https://doi.org/10.1088/1538-3873/ac5159. [BibTeX]  [PDF]
Dave de Jonge,  & Dongmo Zhang (2022). GDL as a Unifying Domain Description Language for Declarative Automated Negotiation. Piotr Faliszewski, Viviana Mascardi, Catherine Pelachaud, & Matthew E. Taylor (Eds.), 21st International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2022, Auckland, New Zealand, May 9-13, 2022 (pp. 1935--1937). International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS). https://doi.org/10.5555/3535850.3536158. [BibTeX]  [PDF]
Manel Rodríguez Soto,  Marc Serramia,  Maite López-Sánchez,  & Juan A. Rodríguez-Aguilar (2022). Instilling moral value alignment by means of multi-objective reinforcement learning. Ethics and Information Technology, 24. https://doi.org/10.1007/s10676-022-09635-0. [BibTeX]  [PDF]
Dave de Jonge,  Filippo Bistaffa,  & Jordi Levy (2022). Multi-Objective Vehicle Routing with Automated Negotiation. Applied Intelligence. https://doi.org/10.1007/s10489-022-03329-2. [BibTeX]  [PDF]
Roger Lera-Leri,  Filippo Bistaffa,  Marc Serramia,  Maite López-Sánchez,  & Juan A. Rodríguez-Aguilar (2022). Towards Pluralistic Value Alignment: Aggregating Value Systems through ℓₚ-Regression. Proceedings of the 21st International Conference on Autonomous Agents and MultiAgent Systems . International Foundation for Autonomous Agents and Multiagent Systems. [BibTeX]  [PDF]
Errikos Streviniotis,  Athina Georgara,  & Georgios Chalkiadakis (2022). ε−MC nets: A Compact Representation Scheme for Large Cooperative Game Settings. 15th International Conference on Knowledge Science, Engineering and Management (KSEM 2022), Singapore, August 6-8, 2022 . Springer-Verlag. [BibTeX]  [PDF]
2021
Dimitra Bourou,  Marco Schorlemmer,  & Enric Plaza (2021). A Cognitively-Inspired Model for Making Sense of Hasse Diagrams. Proc. of the 23rd International Conference of the Catalan Association for Artificial Intelligence (CCIA 2021), October 20-22, Lleida, Catalonia, Spain . [BibTeX]
Filippo Bistaffa,  Christian Blum,  Jesús Cerquides,  Alessandro Farinelli,  & Juan A. Rodríguez-Aguilar (2021). A Computational Approach to Quantify the Benefits of Ridesharing for Policy Makers and Travellers. IEEE Transactions on Intelligent Transportation Systems, 22, 119-130. https://doi.org/10.1109/TITS.2019.2954982. [BibTeX]  [PDF]
Filippo Bistaffa (2021). A Concise Function Representation for Faster Exact MPE and Constrained Optimisation in Graphical Models. CoRR, abs/2108.03899. https://arxiv.org/abs/2108.03899. [BibTeX]  [PDF]
Dave de Jonge,  Filippo Bistaffa,  & Jordi Levy (2021). A Heuristic Algorithm for Multi-Agent Vehicle Routing with Automated Negotiation. Frank Dignum, Alessio Lomuscio, Ulle Endriss, & Ann Nowé (Eds.), AAMAS '21: 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, United Kingdom, May 3-7, 2021 (pp. 404--412). ACM. https://dl.acm.org/doi/10.5555/3463952.3464004. [BibTeX]  [PDF]
Jaume Agustí-Cullell,  & Marco Schorlemmer (2021). A Humanist Perspective on Artificial Intelligence. Comprendre, 23, 99--125. [BibTeX]
Carles Sierra (2021). AI's Responsible Agency. [BibTeX]
Ángeles Manjarrés,  Celia Fernández-Aller,  Maite López-Sánchez,  Juan A. Rodríguez-Aguilar,  & Manuel Sierra Castañer (2021). Artificial Intelligence for a Fair, Just, and Equitable World. IEEE Technology and Society Magazine, 40, 19-24. https://doi.org/10.1109/MTS.2021.3056292. [BibTeX]  [PDF]
Marco Schorlemmer,  & Enric Plaza (2021). A Uniform Model of Computational Conceptual Blending. Cognitive Systems Research, 65, 118--137. https://doi.org/10.1016/j.cogsys.2020.10.003. [BibTeX]  [PDF]
Nieves Montes,  Nardine Osman,  & Carles Sierra (2021). Enabling Game-Theoretical Analysis of Social Rules. IOS Press. https://doi.org/10.3233/FAIA210120. [BibTeX]  [PDF]
Pablo Noriega,  & Txetxu Ausín (2021). Ethical, Legal, Economic and Social Implications. Sara Degli Esposti, & Carles Sierra (Eds.), White Paper on Artificial Intelligence, Robotics and Data Science (pp 120-141). Consejo Superior de Investigaciones Científicas (España). [BibTeX]  [PDF]
Pablo Noriega,  Harko Verhagen,  Julian Padget,  & Mark d'Inverno (2021). Ethical Online AI Systems Through Conscientious Design. IEEE Internet Computing, 25, 58-64. https://doi.org/10.1109/MIC.2021.3098324. [BibTeX]  [PDF]
Dave de Jonge,  & Dongmo Zhang (2021). GDL as a unifying domain description language for declarative automated negotiation. Autonomous Agents and Multi-Agent Systems, 35. https://doi.org/10.1007/s10458-020-09491-6. [BibTeX]  [PDF]
Manel Rodríguez Soto,  Maite López-Sánchez,  & Juan A. Rodríguez-Aguilar (2021). Guaranteeing the Learning of Ethical Behaviour through Multi-Objective Reinforcement Learning. Adaptive and Learning Agents Workshop at AAMAS 2021 (ALA 2021). [BibTeX]  [PDF]
Dimitra Bourou,  Marco Schorlemmer,  & Enric Plaza (2021). Image Schemas and Conceptual Blending in Diagrammatic Reasoning: the Case of Hasse Diagrams. Amrita Basu, Gem Stapleton, Sven Linker, Catherine Legg, Emmanuel Manalo, & Petrucio Viana (Eds.), Diagrammatic Representation and Inference. 12th International Conference, Diagrams 2021, Virtual, September 28–30, 2021, Proceedings (pp. 297-314). [BibTeX]
Maite Lopez-Sanchez,  Marc Serramia,  & Juan A Rodríguez-Aguilar (2021). Improving on-line debates by aggregating citizen support. Artificial Intelligence Research and Development. IOS Press. [BibTeX]  [PDF]
Thiago Freitas Dos Santos,  Nardine Osman,  & Marco Schorlemmer (2021). Learning for Detecting Norm Violation in Online Communities. International Workshop on Coordination, Organizations, Institutions, Norms and Ethics for Governance of Multi-Agent Systems (COINE), co-located with AAMAS 2021 . https://arxiv.org/abs/2104.14911. [BibTeX]
Antoni Perello-Moragues,  Manel Poch,  David Sauri,  Lucia Alexandra Popartan,  & Pablo Noriega (2021). Modelling Domestic Water Use in Metropolitan Areas Using Socio-Cognitive Agents. Water, 13. https://doi.org/10.3390/w13081024. [BibTeX]  [PDF]
Dimitra Bourou,  Marco Schorlemmer,  & Enric Plaza (2021). Modelling the Sense-Making of Diagrams Using Image Schemas. Proc. of the 43rd Annual Meeting of the Cognitive Science Society (CogSci 2021), 26--29 July 2021, Vienna, Austria (pp. 1105-1111). [BibTeX]  [PDF]
Manel Rodríguez Soto,  Maite López-Sánchez,  & Juan A. Rodríguez-Aguilar (2021). Multi-Objective Reinforcement Learning for Designing Ethical Environments. Proceedings of the 30th International Joint Conference on Artificial Intelligence, (IJCAI-21) (pp. 545-551). [BibTeX]  [PDF]
Marc Serramia,  Maite López-Sánchez,  Stefano Moretti,  & Juan A. Rodríguez-Aguilar (2021). On the dominant set selection problem and its application to value alignment. Autonomous Agents and Multi-agent Systems, 35. [BibTeX]  [PDF]
Francisco Salas-Molina,  Juan A. Rodríguez-Aguilar,  David Pla-Santamaria,  & Ana García-Bernabeu (2021). On the formal foundations of cash management systems. Operational Research, 1081--1095. [BibTeX]  [PDF]
Antoni Perello-Moragues,  Pablo Noriega,  Lucia Alexandra Popartan,  & Manel Poch (2021). On Three Ethical Aspects Involved in Using Agent-Based Social Simulation for Policy-Making. Petra Ahrweiler, & Martin Neumann (Eds.), Advances in Social Simulation (pp. 415--427). Springer International Publishing. [BibTeX]  [PDF]
Josep Puyol-Gruart,  Pere Garcia Calvés,  Jesús Vega,  Maria Teresa Ceballos,  Bea Cobo,  & Francisco J. Carrera (2021). Pulse Identification Using SVM. Artificial Intelligence Research and Development, 339 (pp. 221--224). [BibTeX]  [PDF]
Ignacio D. Lopez-Miguel,  Borja Fernández Adiego,  Jean-Charles Tournier,  Enrique Blanco Viñuela,  & Juan A. Rodriguez-Aguilar (2021). Simplification of Numeric Variables for PLC Model Checking. Proceedings of the 19th ACM-IEEE International Conference on Formal Methods and Models for System Design (pp. 10–20). Association for Computing Machinery. https://doi.org/10.1145/3487212.3487334. [BibTeX]
Jesús Cerquides,  Juan A. Rodríguez-Aguilar,  Rémi Emonet,  & Gauthier Picard (2021). Solving Highly Cyclic Distributed Optimization Problems Without Busting the Bank: A Decimation-based Approach. Logic Journal of the IGPL, 29, 72-95. https://doi.org/10.1093/jigpal/jzaa069. [BibTeX]
  • Council of the European Union Report: Presidency conclusions on the charter of fundamental rights in the context of artificial intelligence and digital change (Report)
  • IIIA's workshop on "Value-Driven Adaptive Norms", presented @ the Future Tech Week 2019, on 26 September 2019. (YouTube, Slides)
  • Barcelona declaration for the proper development and usage of artificial intelligence in Europe (Declaration, Videos)