The Artificial Intelligence Research Institute (IIIA) offers up to 9 scholarships for the introduction to a research career, in the context of CSIC's JAE Intro ICU 2024 Programme.
Training plans and mentors:
IIIA-01: Robustness and Resilience of AI Systems.
Mentor: Dr. Felip Manyà (felip@iiia.csic.es)
The primary objective of this project is to thoroughly examine the vulnerabilities and potential attack vectors present in AI systems deployed across diverse sectors such as cybersecurity, healthcare, and agriculture. The overarching goal is to gain a comprehensive understanding of the current limitations inherent in these systems and to leverage these insights to fortify them, ensuring robustness and resilience. The appointed candidate will explore the weaknesses of these AI systems, looking for vulnerabilities and developing attack methods designed to deliberately deceive them and induce errors. Additionally, the candidate will explore an array of defense mechanisms aimed at mitigating the impact of adversarial attacks and failure modes in AI systems. This will encompass the investigation and implementation of diverse strategies, including but not limited to smoothing-based techniques and adversarial training, to bolster the systems' resistance against adversarial attacks.
Requirements: Proficiency in Python. Knowledge of Artificial Intelligence and Machine Learning algorithms. Experience with machine-learning libraries such as TensorFlow and PyTorch.
Preferred skills (not mandatory): Knowledge of AI systems deployed in domains such as cybersecurity, healthcare, and agriculture. Understanding of adversarial machine learning.
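To illustrate the kind of attack the project studies, the sketch below crafts a Fast Gradient Sign Method (FGSM) perturbation against a toy logistic-regression classifier. This is a minimal, self-contained example for orientation only; the model, weights, and data are illustrative assumptions, not the systems the project will target.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logistic_loss(w, b, x, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm(w, b, x, y, eps):
    # FGSM: for a logistic model, the gradient of the loss w.r.t. the
    # input is (p - y) * w; step by eps in the direction of its sign.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

# A small perturbation of a correctly classified input raises the loss.
w, b = [2.0, -1.0], 0.1
x, y = [1.0, 0.5], 1
x_adv = fgsm(w, b, x, y, eps=0.3)
print(logistic_loss(w, b, x, y) < logistic_loss(w, b, x_adv, y))  # True
```

Adversarial training, one of the defenses mentioned above, would feed such perturbed examples back into the training loop.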
IIIA-02: Machine Learning for Prediction of Epilepsy Crisis from Electroencephalograms
Mentor: Dr. Jesús Cerquides (cerquide@iiia.csic.es)
This research project aims to employ machine learning methodologies to predict epilepsy crises by analyzing publicly available electroencephalogram (EEG) data. Recognizing the challenges associated with the timely prediction of epilepsy seizures, the study focuses on utilizing existing datasets to develop a predictive model. By leveraging machine learning algorithms, particularly deep learning models, the research will involve preprocessing publicly accessible EEG data, extracting relevant features, and implementing advanced classification techniques. The objective is to create a reliable and accurate predictive tool capable of discerning patterns and subtle changes in EEG signals preceding epileptic seizures.
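A typical first step in such a pipeline is to segment the continuous EEG recording into fixed-length windows and compute simple per-window features before classification. The sketch below shows that step only; the window length, step size, and features are illustrative assumptions, not the project's actual pipeline.

```python
def windows(signal, size, step):
    # Slide a fixed-length window over a single EEG channel.
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def features(window):
    # Two toy per-window features: mean amplitude and variance.
    n = len(window)
    mean = sum(window) / n
    var = sum((v - mean) ** 2 for v in window) / n
    return [mean, var]

signal = [0.1, 0.3, -0.2, 0.5, 0.0, -0.4, 0.2, 0.6]
X = [features(w) for w in windows(signal, size=4, step=2)]
print(len(X))  # 3 overlapping windows, each described by 2 features
```

The resulting feature matrix would then be fed to a classifier (e.g., a deep model) trained to separate pre-seizure from normal segments.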
IIIA-03: AI-Based Verification of AI Systems.
Mentors: Dr. Filippo Bistaffa (filippo.bistaffa@iiia.csic.es), Dr. Juan Antonio Rodríguez-Aguilar (jar@iiia.csic.es)
In the last few years, Artificial Intelligence (AI) has achieved notable momentum, enabling the achievement of impressive results. The deployment of AI is expanding in our lives and a broad range of industries, such as education, healthcare, finance, logistics, and law enforcement. Alongside such growth, there has been an increasing interest in developing human-centred AI that is trustworthy and that considers the ethical values in human society. Along these lines, the European Union has established guidelines to define its vision of an ethical and trustworthy AI. These guidelines indicate that AI systems should respect the current laws, ethical principles, and moral values. More recently, the European Parliament took a further step towards AI regulation: the use of AI in the EU will be regulated by the AI Act, the world’s first comprehensive AI law. At this point, a major question arises: how to verify that a given system abides by some regulation (e.g., the AI Act)? This is a rather intricate issue because it requires that regulators are capable of thoroughly inspecting an AI system to verify whether its behaviour abides by the regulation in force. So far, AI has focused on safety issues from a developer’s perspective. Thus, research on safe AI (particularly on safe learning) has focused on guaranteeing an AI system's safe learning and behaviour. The goal is to prevent an AI system from reaching undesirable states when deployed in the world. Here we consider a different approach represented by the need for considering a third party during AI development: the regulator. The regulator dictates the rules that an AI must comply with before deployment. Therefore, verifying AI systems with respect to regulations becomes a novel task for which regulators require support. 
Along these lines, the purpose of this project is to develop a methodology for verifying AI with the aid of AI, namely the AI-based verification of AI systems, by tackling novel scientific challenges: How can the behaviour of an AI system be inspected? How can that behaviour be verified with respect to a regulation? We will consider the case study of autonomous driving, arguably the most natural and important one in the context of the verification of AI systems. We will consider the taxonomy of ethical values for autonomous driving proposed by Caballero et al., and, later in the project, the EU AI Act or any other relevant regulation.
IIIA-04: Neurosymbolic AI: from Theory to Applications.
Mentors: Dr. Vicent Costa (vicent@iiia.csic.es), Dr. Pilar Dellunde (pilar@iiia.csic.es)
Neurosymbolic artificial intelligence (AI) is a recent domain in AI that seeks to merge the knowledge-based symbolic approach with neural network-based methods. It is mainly motivated by application-level considerations (e.g., explainability and interpretability) and algorithmic-level considerations (e.g., long-term planning and analogy), and it intends to combine the strengths of both approaches while overcoming their respective drawbacks. The main goal of this project is to integrate principles and aspects of both approaches and to design hybrid systems in this emerging field of AI. The application domains will be related to the mentors' previous work, especially to issues concerning people with different kinds of disability (e.g., evaluation of the quality of life of people with mental distress). The ideal candidates for this fellowship have excellent programming skills and knowledge of logic and theoretical computer science, and are concerned with the ethical aspects of AI systems design.
IIIA-05: New Variants of the MiCRO Negotiation Strategy.
Mentor: Dr. Dave de Jonge (davedejonge@iiia.csic.es)
BACKGROUND: The topic of automated negotiation deals with the question of how autonomous software agents can negotiate with each other. Specifically, it deals with scenarios in which two or more agents need to solve a problem together even though they have conflicting interests, so the agents need to compromise and find a solution that is acceptable to everyone. In order to come to an agreement, the agents may propose solutions to one another, and each agent may accept or reject the proposals it receives from the other agents. A typical example is the case of a buyer and a seller bargaining over the price of a car: while the seller aims to sell the car for the highest possible price, he still needs to make sure the price is low enough for the buyer to accept the deal. Recently, an extremely simple new negotiation algorithm called MiCRO, introduced by Dr. Dave de Jonge, was shown to outperform almost all existing state-of-the-art negotiation algorithms, even though MiCRO is much simpler than those other algorithms. Unfortunately, however, MiCRO is only applicable to negotiations between at most two agents, and only to problems for which the number of possible solutions is relatively small (less than a million). To deal with these limitations, Dr. de Jonge has proposed some ideas on how MiCRO could be generalized to negotiations among more than two agents, and to negotiations with a larger number of possible solutions (several millions).
GOALS OF THIS PROJECT: The goal of this project is for the student to implement these ideas (in Java or Python), perform experiments, and determine how well these new variants of MiCRO perform against state-of-the-art negotiation algorithms, and under which parameter settings. Based on the results of those experiments, the student could perhaps even find ways to improve MiCRO further.
Optionally, the task can be made more challenging by trying to implement an even more advanced algorithm that is applicable to astronomically large test cases (e.g., with 10^100 possible solutions). This would require the use of more complex search techniques, such as genetic algorithms or tree search.
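As a rough flavour of the kind of agent involved, the sketch below implements one plausible reading of MiCRO's core idea, conceding only in step with the opponent: each agent ranks all deals by its own utility, proposes them in decreasing order, moves down its list only when the opponent has matched its number of unique proposals, and accepts any offer at least as good as the worst deal it has itself proposed. The acceptance and concession details here are simplifying assumptions, not Dr. de Jonge's reference implementation.

```python
class MicroAgent:
    """Simplified sketch of a MiCRO-style negotiator (assumed details)."""

    def __init__(self, ranked_bids):
        self.ranked = ranked_bids   # all possible deals, best-for-me first
        self.proposed = []          # bids I have offered so far, in order
        self.received = set()       # unique bids the opponent has offered

    def receive(self, bid):
        self.received.add(bid)

    def accepts(self, bid):
        # Accept anything at least as good (for me) as the worst bid
        # I have already been willing to propose.
        if not self.proposed:
            return False
        return self.ranked.index(bid) <= self.ranked.index(self.proposed[-1])

    def propose(self):
        # Concede (move to the next bid in my ranking) only if the
        # opponent has made at least as many unique proposals as I have;
        # otherwise repeat my last offer.
        if (len(self.proposed) <= len(self.received)
                and len(self.proposed) < len(self.ranked)):
            self.proposed.append(self.ranked[len(self.proposed)])
        return self.proposed[-1]

# Two such agents with opposing preferences over three deals.
a = MicroAgent(["high", "mid", "low"])
b = MicroAgent(["low", "mid", "high"])
deal = None
for _ in range(10):                      # alternating offers
    for me, other in ((a, b), (b, a)):
        bid = me.propose()
        other.receive(bid)
        if other.accepts(bid):
            deal = bid
            break
    if deal:
        break
print(deal)  # "mid": the agents meet in the middle
```

Because both agents concede at the same rate, they converge on the compromise deal; generalizing this lock-step concession to more than two agents is precisely the open question the project addresses.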
IIIA-06: Application of Graph Neural Networks (GNN) to the Constraint Satisfiability Problem (SAT) and Optimization Problem (MaxSAT).
Mentor: Dr. Jordi Levy (levy@iiia.csic.es)
Background: Neural networks have had a tremendous impact on recent AI advances. The architecture of these networks depends on the type of data being analyzed (convolutional neural networks, for instance, are tailored to images), and several models have recently been proposed for graph analysis (GNN). The problem of finding a solution that satisfies all constraints (SAT), or that maximizes the number of satisfied constraints (MaxSAT), is a classical area of research in AI. Over the past 20 years, significant advances have been made in the efficiency of techniques used for these problems, including clause learning, restarts, variable-selection heuristics, and clause deletion. However, radically new ideas are necessary to continue this progression. Recent works have analyzed attempts to use GNN to predict whether a SAT instance is satisfiable, with inconclusive results.
Objective: The objective is to apply graph neural networks to improve variable-selection and other heuristics in SAT and MaxSAT problems.
Methodology: The methodology will begin by implementing (or reusing) basic algorithms for GNN and SAT. In the latter case, starting with local-search algorithms that require less computational effort will be the initial step. Additionally, implementing known approximate algorithms like survey or belief propagation for SAT may provide a set of cases for GNN learning algorithms. Using small examples for learning and larger SAT instances for testing, the technique's efficiency will be compared with existing ad-hoc heuristics.
Candidates: Candidates should possess strong Python programming skills and a solid grasp of theoretical models. A degree (and a master's) in mathematics, physics, or computer science is desirable. Prior knowledge of GNN or SAT techniques is not mandatory, but a proactive disposition toward learning is essential.
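To give a flavour of how a SAT instance becomes input for a GNN: a CNF formula is commonly encoded as a bipartite graph with one node per literal and one node per clause, with edges linking each clause to the literals it contains. The sketch below builds that encoding from DIMACS-style clause lists; it is one common choice in the literature, and the project may well use a different one.

```python
def literal_clause_graph(clauses):
    """Encode a CNF formula (lists of signed integer literals) as a
    bipartite graph: literal nodes on one side, clause nodes on the other."""
    edges = []
    literals = set()
    for c_idx, clause in enumerate(clauses):
        for lit in clause:
            literals.add(lit)
            edges.append((lit, c_idx))  # literal participates in clause
    return sorted(literals), edges

# (x1 OR NOT x2) AND (x2 OR x3): variables as ints, negation as minus.
lits, edges = literal_clause_graph([[1, -2], [2, 3]])
print(lits)   # [-2, 1, 2, 3]
print(edges)  # [(1, 0), (-2, 0), (2, 1), (3, 1)]
```

A GNN would then pass messages along these edges, and the resulting per-literal embeddings could score variables for a branching or local-search heuristic.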
IIIA-07: Advanced AI for Immersive Training Simulations.
Mentor: Dr. Jordi Sabater-Mir (jsabater@iiia.csic.es)
The project focuses on the development and implementation of realistic Non-Player Characters (NPCs) within simulated environments for training purposes. The objective is to enhance the immersive quality of training simulations by populating them with NPCs that exhibit lifelike behaviours and responses. The work involves designing and programming NPCs with advanced artificial intelligence algorithms, including Large Language Models (LLMs) and planning, enabling them to adapt dynamically to changing scenarios, interact convincingly with trainees, and simulate a wide range of human-like behaviours. The goal is to push the boundaries of immersive training simulations, providing trainees with more realistic and challenging scenarios that better prepare them for real-world situations.
Key Components of the Project:
1. Lifelike Behaviours: Develop NPCs with sophisticated AI algorithms to simulate human-like behaviours, enabling them to adapt dynamically to changing scenarios.
2. Interactive Realism: Use Large Language Models (LLMs) to enhance NPC interactions with trainees, fostering convincing and engaging dialogues.
3. Dynamic Adaptation: Create NPCs capable of dynamically adjusting to varying situations, providing a more challenging and realistic training experience.
The candidate will work with researchers at the IIIA-CSIC in the context of a partnership with the "Escola de Bombers de la Generalitat de Catalunya". Courses in Artificial Intelligence, advanced knowledge of Python programming, and knowledge of the Unreal and/or Unity video game development environments will be highly valued.
IIIA-08: Analogical Inference in a Large Language Model.
Mentor: Dr. Enric Plaza (enric@iiia.csic.es)
Project ALLM: Analogical Inference in a Large Language Model
Background: Foundational models, specifically large language models (LLMs), provide a variety of capabilities for developing AI systems. However, they come short of supporting 'general' problem solving (e.g., non-linear planning or design). Some capabilities of LLMs approximate a limited form of analogical inference, especially as metaphors, i.e., linguistic analogy generation and understanding. However, the formation of new analogies to solve impasses in full-fledged problem-solving processes remains beyond their current reach. Here at the IIIA, we have developed a formal model of analogy and concept blending. Since our model is implementation-independent, it can in principle be integrated into an LLM.
Assumptions underlying this project: Foundational models, specifically LLMs, provide, among other aspects, a foundation of representation of commonsense knowledge, which is considered a prerequisite of human-like analogy formation and reasoning.
Goals of the project: Familiarization with state-of-the-art approaches to extract and manipulate 'concept' patterns from the latent-variable representations of an LLM. Analyze an LLM's existing capabilities of analogical inference within our formal framework; identify the range and limitations of existing analogical inference in an LLM and develop a taxonomy of the classes of analogy performed and those beyond an LLM's capabilities. Optionally, study how analogy formation can be improved inside an LLM to cover more classes of analogy.
IIIA-09: Explainable AI by Way of Embodied Cognition.
Mentor: Dr. Marco Schorlemmer (marco@iiia.csic.es)
Machine-learning systems based on deep neural networks are currently pattern-matching black boxes that make it difficult for both developers and users to understand when a particular set-up of a neural network is going to be successfully trained and deployed in a trustworthy and robust manner. This project aims to make deep-learning architectures more transparent to developers and users alike by increasing their degree of explainability by design, endowing them with built-in concepts that are currently lacking and that may help to reveal their underlying assumptions and behaviour. We will draw from the insights of contemporary cognitive science on embodied cognition, which claims that human conceptualisation and understanding are largely grounded in our bodily experience and the interactions we establish with the environment at a sensorimotor level. We will explore how, by taking this perspective of cognition as a reference, we can contribute to one of the fundamental ethical objectives of AI for the coming years, namely the objective of explainability. At IIIA-CSIC we have developed mathematical and computational models of embodied cognition, applying them to mathematical conceptualisation, diagrammatic reasoning and musical creativity. For this particular project, we will team up with researchers from UAB's Philosophy Department with expertise in embodied and enactive approaches to cognition. This is a highly interdisciplinary project, bringing together techniques from cognitive linguistics, computer science, mathematics, and philosophy.