The Artificial Intelligence Research Institute (IIIA) offers up to 6 scholarships for the introduction to a research career, in the context of CSIC's JAE Intro ICU 2025 Programme.
TRAINING PLANS AND MENTORS
IIIA-01: Theory of Multi-Objective Multi-Agent Reinforcement Learning.
Mentor: Dr. Manel Rodríguez Soto
The purpose of this project is to investigate the formal frontiers of multi-objective reinforcement learning (MORL), an area with multiple applications such as Large Language Models (e.g., ChatGPT, DeepSeek), video games, and value alignment. The project will extend recent theoretical work by IIIA-CSIC on multi-objective reinforcement learning published at the world-renowned NeurIPS conference. We will put a strong focus on evaluating MORL algorithms with our novel theoretical tools. This evaluation will proceed on two fronts. First, we will develop novel metrics to assess the quality of the policies learned by these algorithms. Second, we will design and deploy novel reinforcement learning algorithms specifically crafted to compute these metrics. The results of this research will put in the spotlight the inconsistencies of current multi-objective reinforcement learning algorithms. To guarantee that empirical results are formally significant, we will focus on algorithms with proven convergence guarantees.
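To give applicants a flavour of the setting, the following is a minimal sketch of the core idea behind scalarised multi-objective learning: a reward is a vector (e.g., task performance and a normative, value-alignment objective), and a weight vector selects one trade-off to optimise. All names, rewards, and weights here are illustrative, not part of the project.

```python
import random

def scalarise(rewards, weights):
    """Linear scalarisation: collapse a reward vector into a scalar."""
    return sum(r * w for r, w in zip(rewards, weights))

def epsilon_greedy_morl(arms, weights, episodes=2000, eps=0.1, seed=0):
    """Tabular epsilon-greedy learner on a multi-objective bandit.

    `arms` maps each action to its (deterministic) reward vector;
    the agent learns scalarised value estimates for a fixed weight
    vector, i.e. one point on the trade-off (Pareto) front.
    """
    rng = random.Random(seed)
    q = {a: 0.0 for a in arms}      # scalarised value estimates
    n = {a: 0 for a in arms}        # visit counts
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.choice(list(arms))
        else:
            a = max(q, key=q.get)
        r = scalarise(arms[a], weights)
        n[a] += 1
        q[a] += (r - q[a]) / n[a]   # incremental mean update
    return max(q, key=q.get)

# Two objectives: task reward vs. a normative (alignment) reward.
arms = {"safe": (0.4, 1.0), "greedy": (1.0, 0.0)}
print(epsilon_greedy_morl(arms, weights=(0.5, 0.5)))  # prefers "safe"
```

Changing the weight vector changes the preferred action, which is precisely why principled metrics are needed to compare MORL algorithms across trade-offs.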
IIIA-02: Robustness and Resilience of AI Systems.
Mentor: Dr. Daniel Gibert Llauradó
The primary objective of this project is to thoroughly examine the vulnerabilities and potential attack vectors present in AI systems deployed across diverse sectors such as cybersecurity and healthcare. The overarching goal is to gain a comprehensive understanding of the current limitations inherent in these systems and leverage these insights to fortify AI systems, ensuring robustness and resilience. The appointed candidate will be tasked with exploring the weaknesses of these AI systems, looking for vulnerabilities and developing attack methods designed to deliberately deceive and induce errors in them. Additionally, the candidate will explore an array of defense mechanisms aimed at mitigating the impact of adversarial attacks and failure modes in AI systems. This will encompass the investigation and implementation of diverse strategies, including but not limited to smoothing-based techniques and the integration of adversarial training to bolster the systems’ resistance to adversarial attacks.
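As an illustration of the kind of attack studied, here is a minimal fast-gradient-sign-style perturbation against a toy logistic classifier; the model, weights, and epsilon are invented for the example and stand in for the deep models the project targets.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that x belongs to the positive class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Fast-gradient-sign-style perturbation for a logistic model.

    For logistic loss, the gradient w.r.t. the input is (p - y) * w,
    so each feature is nudged by eps in the direction that increases
    the loss, i.e. towards a misclassification.
    """
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

# Toy model and a correctly classified positive example.
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1
x_adv = fgsm(w, b, x, y, eps=0.9)
print(predict(w, b, x), predict(w, b, x_adv))
```

The clean input is classified as positive while the perturbed one flips below 0.5; adversarial training, one of the defenses mentioned above, retrains the model on exactly such perturbed inputs.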
IIIA-03: Predicting the cost of computing explanations with Graph Neural Networks.
Mentor: Dr. Filippo Bistaffa
Over the past decades, there has been an increasing interest in developing trustworthy human-centred AI, i.e., an AI capable of providing explanations to users, preferably in the form of a real-time dialogue. Unfortunately, computing some types of explanations might be computationally costly (especially for AI systems explaining the outcome of hard combinatorial problems, such as the problem of finding the best path in a map or the problem of optimising the processes in a supply chain), hence disrupting the interaction with the user. Therefore, it would be desirable to predict the runtime of computing explanations to identify the explanations that can be provided in real-time, for a better interaction with the user.
Along these lines, the following question arises: how can we predict the cost of computing an explanation? This is a rather intricate issue because the cost of computing an explanation varies significantly depending on the problem instance, even for instances of the same size, as well as on the question posed by a user.
Ultimately, this project aims to develop a methodology for predicting the cost of computing an explanation with cutting-edge technologies in Deep Learning and Optimisation with great potential in today’s AI landscape. More specifically, we aim to employ Graph Neural Networks for predicting the cost of an explanation, and CPLEX, a state-of-the-art optimisation solver, to solve the underlying optimisation problems. To evaluate our methodology, we will consider two real-world problems widely studied in the optimisation literature (namely, the Winner Determination Problem for Combinatorial Auctions and the Resource-Constrained Project Scheduling Problem).
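The pipeline can be sketched in miniature: encode a problem instance as a graph, run a few rounds of message passing to obtain an embedding, and feed the embedding to a regression head that predicts runtime. Everything below is an untrained, illustrative skeleton (the features, adjacency, and head weights are made up); a real GNN learns the aggregation and head weights from data.

```python
def message_pass(adj, feats, rounds=2):
    """Untrained mean-aggregation message passing.

    Each round replaces a node's feature vector with the average of
    its own and its neighbours' features -- the core operation a GNN
    would learn weights for. `adj` is an adjacency list.
    """
    for _ in range(rounds):
        new = {}
        for v, fv in feats.items():
            neigh = [feats[u] for u in adj[v]] + [fv]
            new[v] = [sum(col) / len(neigh) for col in zip(*neigh)]
        feats = new
    return feats

def graph_embedding(adj, feats, rounds=2):
    """Readout: mean over node features after message passing."""
    fs = list(message_pass(adj, feats, rounds).values())
    return [sum(col) / len(fs) for col in zip(*fs)]

# Tiny instance graph; nodes carry [degree, constraint-tightness] features.
adj = {0: [1, 2], 1: [0], 2: [0]}
feats = {0: [2.0, 0.5], 1: [1.0, 0.9], 2: [1.0, 0.1]}
emb = graph_embedding(adj, feats)
# A regression head (here a fixed linear map) turns the embedding into
# a predicted runtime; in the real project its weights are learned.
runtime = 0.3 * emb[0] + 1.7 * emb[1]
print(runtime)
```

The prediction then acts as a filter: explanations whose predicted cost exceeds a real-time budget can be deferred or approximated rather than stalling the dialogue.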
IIIA-04: Neurosymbolic AI: from Theory to Applications.
Mentor: Dr. Vicent Costa
Neurosymbolic artificial intelligence (AI) is a recent field in AI that aims to merge knowledge-based symbolic approaches with neural network-based methods. It is primarily driven by application-level considerations, such as explainability and interpretability, and seeks to combine the strengths of both approaches while overcoming their respective limitations. Qualitative Reasoning (QR) is a research area within AI that focuses on automating reasoning about continuous aspects of the physical world, like time and space, to support problem-solving and planning using qualitative rather than quantitative information. The main goal of this project is to use and integrate principles and techniques from neurosymbolic AI to design hybrid systems for QR. The application domains will be aligned with the mentor's previous work, particularly focusing on challenges related to individuals with disabilities and on qualitative reasoning about colours, lengths, or angles. Ideal candidates for this fellowship should have excellent programming skills, a strong foundation in logic and theoretical computer science, and a keen awareness of the ethical aspects of AI system design.
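A minimal taste of the symbolic side of QR: qualitative relations such as "shorter than" are composed through a table rather than computed from numbers. The point-algebra example below (relations <, =, >) is a standard textbook construction, used here purely for illustration.

```python
# Composition table for the point algebra {<, =, >}: given relation r1
# between A and B, and r2 between B and C, which relations may hold
# between A and C?
COMP = {
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): {"<", "=", ">"},
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): {"<", "=", ">"}, (">", "="): {">"}, (">", ">"): {">"},
}

def compose(rels_ab, rels_bc):
    """Compose two (possibly disjunctive) qualitative relations."""
    out = set()
    for r1 in rels_ab:
        for r2 in rels_bc:
            out |= COMP[(r1, r2)]
    return out

# "A is shorter than B" and "B is shorter than C" entails A < C.
print(compose({"<"}, {"<"}))  # {'<'}
```

In a neurosymbolic system, a neural component could extract such relations from perception (e.g., estimated lengths or colours), while a symbolic reasoner like this one propagates their consequences.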
IIIA-05: LLM-based architectures for interactive systems.
Mentor: Dr. Raquel Ros
The research project aims to design, develop, and evaluate the integration of LLMs in a cognitive robotics architecture within a human-robot interaction task in a domestic environment. More specifically, the goal is to provide robots with common-sense capabilities to fluidly interact with humans by integrating sub-symbolic and symbolic representations and reasoning.
The LLM is expected to handle dialogue with the user, initiate conversations, and extract the user's intent—i.e., what they expect the robot to do—while integrating the symbolic and sub-symbolic representation of the world. The extracted goal is then processed by the decision-making subsystem to define the sequence of actions needed to achieve it.
Key Components of the Project:
Cognitive Architecture Design: Design and develop the robot architecture based on the ROS 2 framework (an open-source robotics framework), integrating existing perceptual pipelines (vision and voice) as well as control and navigation subsystems.
LLM-based context and dialogue integration: Design the LLM input (prompt) to extract the user’s intention and high-level steps required to achieve the goal, considering static and dynamic knowledge, interaction history, and the current state of the world.
Evaluation in a Domestic-Like Robotics Lab: The approach will be evaluated in a controlled robotics lab environment at IIIA. A user will interact with the robot, requesting a set of tasks. The evaluation will assess the extent to which the robot can handle underspecified natural-language instructions.
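The second component above, prompt construction, can be sketched as a function that grounds the LLM in the robot's symbolic world state and the interaction history. The field names, facts, and output schema below are illustrative assumptions, not the project's actual design.

```python
def build_prompt(world_state, history, utterance):
    """Assemble an LLM prompt that grounds intent extraction in the
    robot's symbolic world state and the interaction history.

    All field names and the JSON output schema are illustrative.
    """
    facts = "\n".join(f"- {f}" for f in world_state)
    turns = "\n".join(f"{who}: {text}" for who, text in history)
    return (
        "You are the dialogue module of a domestic service robot.\n"
        "Known world state:\n" + facts + "\n"
        "Conversation so far:\n" + turns + "\n"
        f"User: {utterance}\n"
        'Reply with a JSON object {"intent": ..., "steps": [...]}.'
    )

prompt = build_prompt(
    world_state=["cup on kitchen_table", "robot at living_room"],
    history=[("User", "Hi robot"), ("Robot", "Hello! How can I help?")],
    utterance="Could you bring me my cup?",
)
print(prompt)
```

The structured reply (intent plus high-level steps) is what the decision-making subsystem would then expand into an executable action sequence.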
IIIA-06: Machine Learning in Animal Health.
Mentor: Dr. Eva Armengol Voltas
This project seeks to develop a machine-learning-based system to improve the diagnosis and monitoring of equine health. Data will be collected from diverse sources, such as medical images, veterinary records, and biometric sensors, with the goal of identifying patterns and predicting possible conditions. Using supervised and unsupervised learning algorithms, a model will be trained to detect anomalies and provide preliminary diagnoses. A tool will be developed to support faster and more accurate decisions, optimising treatments and reducing costs. In addition, data security and privacy will be guaranteed through the use of appropriate protocols.
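As a minimal example of the unsupervised side of this project, a z-score detector can flag anomalous biometric readings; the heart-rate values and threshold below are invented for illustration, and real deployments would use far richer models.

```python
import math

def zscore_anomalies(values, threshold=3.0):
    """Flag readings whose z-score exceeds the threshold.

    A minimal unsupervised baseline for biometric time series such
    as heart rate.
    """
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(var)
    if std == 0:
        return []  # constant series: nothing stands out
    return [i for i, v in enumerate(values)
            if abs(v - mean) / std > threshold]

# Resting equine heart rates (beats per minute) with one outlier.
rates = [34, 36, 35, 33, 37, 36, 80, 35, 34, 36]
print(zscore_anomalies(rates, threshold=2.0))  # [6]
```

Flagged readings would be surfaced to the veterinarian as candidates for a preliminary diagnosis rather than acted on automatically.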