The schedule of the presentations is as follows:
June 25
9:30 |
Stephanie Malvicini Characterization and quantification of misinformation in social platforms content Misinformation is becoming a major issue in today's society, especially as users are exposed to information daily, on every social platform, at all times. Misinformation can arise from errors or be spread intentionally, and it can affect people's lives by altering their decisions and beliefs. In recent years, substantial effort has been made to identify, analyze, and prevent some of these phenomena. Still, most efforts focus on a single phenomenon, such as fake news or polarization, within a specific area of influence, using tools that do not exploit all the characteristics of the phenomena. Our work aims to create generic models to understand, quantify, and measure misinformation on social platforms. By combining state-of-the-art natural language processing, machine learning, argumentation, and knowledge-based models, among others, we seek to provide a holistic approach. Additionally, we aim to create tools to simulate the flow of misinformation and user interactions in the context of social platforms. |
10:00 |
Bjoern Komander Automated Monitoring of Online Communities This thesis investigates the complex dynamics governing the evolution and stability of online communities. Recognizing the challenges these platforms face, we leverage advancements in temporal and dynamic Graph Neural Networks (GNNs) to analyze the large-scale, time-varying relational data inherent in community interactions. The research involves developing and validating methods, including the use of synthetic network models, to ensure GNNs effectively capture fundamental network processes like growth and temporal shifts. We direct these techniques towards understanding critical community phenomena, such as evolving user activity patterns, shifts in network structure, and the overall factors influencing community health and longevity over time. The ultimate aim is to develop robust computational tools capable of monitoring and potentially predicting key dynamic changes within online communities, contributing to a deeper understanding of their lifecycle and sustainability. |
10:30 |
Coffee |
11:00 |
Alba Aguilera Agent-based Modeling for Equitable Policy-Making Artificial intelligence has the potential to play a crucial role in informing societal decision-making. AI-based simulations, in particular, can help anticipate the impact of legal norms in diverse environments in a non-invasive way. My research focuses on the development of agent-based models (ABMs) for policy-making in contexts of social inequity. This work is grounded in the Capability Approach (CA), a widely recognized framework for analyzing, promoting, and assessing human well-being, development, and social justice. By combining the CA with reinforcement learning and multi-agent systems techniques (both in the agents' decision-making and in the structure of the simulation), this research aims to provide a robust modeling framework that serves as a basis for developing policy simulation tools. |
11:30 |
Roger Lera Leri Explainability for optimisation-based decision support systems In recent years, there has been increasing interest in developing human-centred Artificial Intelligence (AI) systems that are trustworthy, meaning that they have to be ethical, lawful, and robust. Within this new vision of AI, there is a strong consensus on requiring explainability in AI algorithms, i.e. the capacity to provide explanations of the decisions taken by such algorithms. Hence, the goal of this PhD thesis is to develop decision support systems that recommend optimal solutions for different real-world problems, and to develop a general framework for explainable AI that provides explanations of the decisions taken by our approaches. We aim at formalising such problems as convex optimisation problems, enabling the use of commercial off-the-shelf solvers, and using real-world data instances to evaluate our approaches. |
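To make the convex-optimisation framing concrete, here is a minimal sketch of the kind of formulation an off-the-shelf solver can handle, written with the CVXPY modelling library; the resource-allocation setting, the need scores, and all variable names are illustrative assumptions, not the thesis's actual model.

```python
# Minimal sketch (assumptions: CVXPY as the off-the-shelf solver interface;
# the allocation problem itself is a hypothetical stand-in).
import cvxpy as cp
import numpy as np

need = np.array([0.8, 0.5, 0.9, 0.3])   # hypothetical need scores, one per applicant
x = cp.Variable(len(need), nonneg=True)  # fraction of the budget per applicant

# Maximise a concave, need-weighted utility subject to a unit budget.
problem = cp.Problem(cp.Maximize(need @ cp.sqrt(x)), [cp.sum(x) <= 1])
problem.solve()
print(x.value)  # the optimal allocation that an explanation would have to justify
```

Because the problem is convex, the solver returns a certifiably optimal allocation, which is exactly the setting in which explanations of a decision can be grounded in the optimisation model itself.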
12:00 |
Salvador Pullido Design and Evaluation of a Reinforcement Learning (RL) Model to Improve Deliberative Quality Online This project proposes a specific approach using neural network architectures for natural language processing (NLP), such as Transformers, together with reinforcement learning (RL) algorithms to improve deliberative quality in online discussions. The inclusion of the Deliberative Quality Index (DQI) and the design of a customized objective function will allow us to obtain a highly specific model adapted to the challenges of evaluating deliberative quality. This approach has the potential to significantly advance the understanding and application of reinforcement learning techniques in the context of digital democracy and online citizen participation. With this research, we seek to delve deeper into NLP, focusing on improving deliberative quality in online discussions. A variety of NLP neural network architectures will be explored with the aim of developing a reinforcement learning model tailored to this purpose, defining objective and loss functions that guide the model towards specific improvements in the dimensions of the DQI. In addition, the model will be validated experimentally on labeled or unlabeled data sets, demonstrating its ability to adapt and improve quality progressively over time. |
12:30 |
Shuxian Pan Automated Extraction of Online Community Norms To develop cooperative multi-agent systems effectively, we aim to create an architecture that facilitates the agents' dynamic adoption of conventions. It extends an existing agent model's action-selection architecture with a component that applies Natural Language Processing techniques with a domain-specific knowledge base. This component embeds conventions into agent interaction strategies to improve the predictability of other agents' actions. At the same time, Natural Language Processing techniques allow users to introduce conventions or provide their domain-specific knowledge to the multi-agent system in a more user-friendly way. |
13:00 |
Lunch |
14:30 |
Laura Rodriguez Cima Value-Driven Negotiation in Multi-Agent Systems: Embedding Social Preferences Within Utility Functions This thesis presents a self-contained, value-driven negotiation framework in which each autonomous agent internalizes its own notion of fairness or other values directly within its utility function, removing dependence on external fairness benchmarks such as Nash or Kalai points. We formalize this dual-component utility model, combining an individual-utility term with a social-utility term that dynamically aggregates estimates of opponents' individual utilities. To support genuine value alignment, we introduce a lightweight feedback language embedded in the Alternating Offers Protocol, enabling agents to refine one another's value-weight models during negotiation. Implemented using the NegMAS platform, our approach is validated through a simple bilateral use case of two students dividing a prize. Experiments under various configurations show how embedded social commitments influence negotiation dynamics, such as the final agreement and convergence speed measured by the number of rounds to agreement. Finally, we outline a roadmap for scaling to more complex multilateral scenarios. |
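One way to make the dual-component utility concrete is the following illustrative formalisation; the mixing weight lambda_i, the aggregation weights w_ij, and the estimates u-hat_j are assumed notation, not necessarily the thesis's own:

```latex
% Agent i's utility for an outcome o combines an individual term with a
% social term that aggregates estimates \hat{u}_j of opponents' utilities.
U_i(o) = (1 - \lambda_i)\, u_i(o) + \lambda_i \sum_{j \neq i} w_{ij}\, \hat{u}_j(o),
\qquad \lambda_i \in [0, 1], \quad \sum_{j \neq i} w_{ij} = 1
```

Under this reading, lambda_i encodes how strongly agent i internalises its social values, and the feedback language serves to refine the estimates of the opponents' utilities during negotiation.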
15:00 |
Arnau Mayoral Macau Automated design of ethical environments using multi-agent reinforcement learning Reinforcement learning (RL) algorithms have seen many applications, from game playing to autonomous driving and conversational agents, demonstrating a remarkable ability to learn complex tasks. However, while these algorithms may excel at optimizing a given reward function, ensuring that they behave according to human moral values while doing so is a significant challenge. In my thesis, we are working on an algorithm to automatically design ethical RL environments with large state spaces in such a way that the optimal behavior for the agents to learn is value-aligned. We do so by adding an extra objective, called the ethical objective, with its own system of rewards that incentivizes ethical actions and punishes unethical behavior. Then, our algorithm computes a so-called ethical weight, which is used to linearly combine the individual task of the environment with the ethical objective. With this linear combination, we transform a multi-objective problem into a single-objective problem, where agents learn with just one reward function. Our contribution is a method to compute the minimal weight needed to make the ethical objective prevail over the individual objective, so that when agents learn using the combined reward function, all agents learn to do their task while abiding by the moral values encoded in the ethical objective. |
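In symbols, the linear scalarisation described above can be sketched as follows (the notation R_0, R_e, and w is assumed for illustration):

```latex
% Individual reward R_0 and ethical reward R_e combined with weight w > 0:
R(s, a) = R_0(s, a) + w \, R_e(s, a)
% The algorithm seeks the minimal w^* such that every policy that is optimal
% for the combined reward R is also optimal for the ethical objective R_e.
```

Any weight above the minimal one also yields value-aligned optima, but the minimal weight avoids distorting the individual task more than necessary.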
15:30 |
Jaume Reixach Leveraging Machine Learning for Metaheuristics Metaheuristics are general frameworks designed to find high-quality solutions to combinatorial optimization problems. This thesis primarily focuses on integrating machine learning techniques into existing metaheuristics, with the goal of improving their performance. Two main paradigms are explored in this context. The first paradigm involves online learning methods. So far, this has consisted of implementing two learning mechanisms within the Construct, Merge, Solve and Adapt (CMSA) metaheuristic, enhancing its performance. One draws inspiration from the classic multi-armed bandit problem in Reinforcement Learning, while the other is based on Deep Learning, allowing a higher degree of detail in the learning process. The second paradigm enhances metaheuristics through offline learning. This approach has led to the development of an evolutionary-based framework, which has been applied to learning the heuristic function of a tree search method and to improving the performance of a genetic algorithm by learning high-quality individuals that are used as guidance. |
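As an illustration of the first, online-learning paradigm, here is a minimal bandit-style sketch in the spirit of the multi-armed-bandit mechanism mentioned above; the arms, the reward definition, and all names are assumptions, not the thesis's actual CMSA integration.

```python
import math
import random

# Minimal UCB1 sketch: each "arm" is a construction heuristic; the reward is
# whether the solution built with it improved the incumbent (illustrative).
class UCB1:
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms
        self.t = 0

    def select(self):
        self.t += 1
        for a in range(len(self.counts)):   # play each arm once first
            if self.counts[a] == 0:
                return a
        ucb = [self.values[a] + math.sqrt(2 * math.log(self.t) / self.counts[a])
               for a in range(len(self.counts))]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Usage inside a construct-and-solve loop (stand-in for the real solver call):
bandit = UCB1(n_arms=3)
best = float("inf")
for _ in range(100):
    arm = bandit.select()
    cost = random.uniform(0, 1)             # hypothetical solution cost
    bandit.update(arm, reward=1.0 if cost < best else 0.0)
    best = min(best, cost)
print("arm pull counts:", bandit.counts)
```

The same skeleton adapts to any construct phase: the bandit gradually concentrates pulls on the heuristics that most often improve the incumbent solution.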
16:00 |
Marta Domenech Data and politics: An approximation to data performativity This industrial doctoral thesis explores the evolving conceptualization of data within the Big Data framework, which has largely been shaped by technologically deterministic perspectives. This approach often detaches data from its origin, neglecting its social, environmental, and political consequences. The disconnect between the creation and analysis of data, along with the alienated narratives emerging from it, influences how individuals and institutions engage with and interpret data. With media discourse increasingly focused on the impact of AI, which is itself a complex system of data, the need to explore the intersections between technology, society, and politics becomes ever more pressing. In this context, understanding data as a political artifact through the lens of Science and Technology Studies (STS) is essential to gain a deeper insight into the historical, cultural, and social implications of data. The primary objective of this research is to examine how data operates within the STS framework, with a particular focus on its performative capacity, that is, its potential to effect political change. |
16:30 |
Raul Espejo Boix Neurosymbolic Rule Extraction for Explainable AI Explainability is the extent to which a model or a prediction is human-understandable. The difficulty of understanding the outputs of Deep Neural Networks (DNNs) has emphasized the relevance of explainability in machine learning. Symbolic approaches to AI prioritize human-understandable knowledge, while subsymbolic AI provides high accuracy with data-driven methods. Neurosymbolic AI seeks to integrate DNNs into symbolic methods, standing at the crossroads of both perspectives. This thesis approaches the Neurosymbolic field from both perspectives: on the one hand, leveraging the link between vector quantization and machine learning for Rule Extraction, following recent developments in the field; on the other hand, linking quantization to granulation in order to encode the output of Neural Networks into Formal Languages with well-established logics. |
June 26
9:30 |
Jairo Lefebre Argumentation-based multi-agent recommender system In recent years, there has been a growing emphasis on the development of Artificial Intelligence (AI) systems that prioritize human-centric design, emphasizing trustworthiness, ethics, legality, and robustness. Central to this evolving paradigm is the demand for explainability within AI algorithms, whereby these systems can articulate the rationale behind their decisions. Consequently, the primary objective of this research is to design and implement a multi-agent recommender system that not only offers proper recommendations for various real-world scenarios, but also integrates an argumentation framework to define the communication protocol between agents. Our aim is to formalize how to explain the final recommendation by using all the recommendations provided by the individual agents within the context of argumentation theory, while employing authentic datasets to validate our approaches. |
10:00 |
Rocco Ballester Harnessing Quantum Computing for Advancements in Federated Learning and Optimization The rapid advancements in quantum computing are opening up new possibilities in areas like machine learning and optimization. Quantum computers, with their unique capabilities, can potentially revolutionize how we solve intricate computational problems. This could lead to significant improvements in the efficiency and effectiveness of machine learning algorithms and optimization techniques, enabling breakthroughs in various scientific and industrial applications. This industrial PhD program aims to investigate the integration of quantum computing techniques into federated learning, with a particular emphasis on discovering innovative methods to optimize federated learning processes. Furthermore, the research will explore the use of quantum annealers for optimization tasks, providing novel solutions to difficult optimization problems. |
10:30 |
Break |
11:00 |
Lluis Subirana When does a belief become knowledge? The objective of this thesis is to advance the understanding of the formal distinction between the fundamental notions of belief and knowledge, the latter being intuitively stronger than the former. Subsequently, our aim is also to understand to what extent, and according to which rules, a belief can be converted into knowledge by gaining what we can call an 'epistemic value'. Methodologically speaking, we will base our investigations on two pillars: the Lockean thesis (Foley) on one side, and a Bayesian/probabilistic version of revisions (Alchourrón, Gärdenfors and Makinson) and updates (Katsuno and Mendelzon) on the other. Intuitively, while the Lockean thesis defines an agent's beliefs in terms of subjective probability values, belief change theories describe how a belief can be converted into knowledge by subsequent steps of revisions and updates. |
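For reference, the Lockean thesis is standardly formalised as a threshold condition on subjective probability (this is the textbook statement; the threshold symbol t is conventional):

```latex
% An agent believes \varphi exactly when its credence in \varphi meets a
% threshold t strictly above one half:
B\varphi \iff P(\varphi) \geq t, \qquad t \in \left(\tfrac{1}{2}, 1\right]
```

The thesis's question can then be read as asking which operations on P (revisions and updates) raise a belief's epistemic standing until it qualifies as knowledge.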
11:30 |
Delia Coluccino Tools and Strategies for Citizens' Participation in Digitally-Accelerated Urban Ecosystems My PhD thesis investigates how deliberative democracy could be updated to account for increasingly digitized urban environments. As cities adopt smart technologies like Digital Twins (DTs) to optimize services and infrastructure, questions about democratic participation and about who gets to influence urban decision-making emerge forcefully. The research explores the risks tied to technocratic governance and the decline of civic engagement, especially as decisions become more data-driven. At the same time, it acknowledges the potential of digital tools to open up access, particularly for marginalized communities, if thoughtfully designed and implemented. A key focus is on how to move beyond one-off participation events, toward forms of engagement that are sustainable, inclusive, and responsive to the complex mix of urban stakeholders. The thesis also addresses how digital acceleration intersects with social vulnerability, proposing strategies to make participation more equitable. Finally, the work grounds these reflections in a case study of Bologna's DT project, aiming to identify models and practices for citizen involvement in the governance of smart cities. |
12:00 |
Elifnaz Yangin Inference systems for MaxSAT The Boolean Satisfiability problem (SAT) is the paradigmatic NP-complete problem: the problem of deciding whether a Boolean propositional formula can be satisfied. MaxSAT is an optimization version of SAT that aims to find an assignment that maximizes the number of satisfied formulas in a given multiset of propositional formulas. These problems are significant because many practical problems can be reduced to them and solved using an off-the-shelf SAT or MaxSAT solver. While SAT is employed to solve decision problems, MaxSAT is utilized to solve optimization problems. Despite the potential of MaxSAT resolution in solving combinatorial optimization problems, it has not yet been thoroughly explored from a practical perspective. Our aim is to fill this gap, as well as to study new inference systems for MaxSAT and MinSAT, where MinSAT is the dual problem of MaxSAT whose goal is to find an assignment that minimizes the number of satisfied clauses in a given multiset. |
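A tiny worked example of the MaxSAT and MinSAT objectives (the clause multiset is illustrative):

```latex
\Phi = \{\, x,\ \lnot x,\ x \lor y,\ \lnot y \,\}
% No assignment satisfies all four clauses, since x and \lnot x conflict, but
% x = \top,\ y = \bot satisfies \{x,\ x \lor y,\ \lnot y\}: the MaxSAT optimum
% is 3 satisfied clauses (equivalently, 1 falsified). Dually, every assignment
% satisfies exactly one of \{x, \lnot x\} and at least one of
% \{x \lor y,\ \lnot y\}, so the MinSAT optimum is 2 satisfied clauses.
```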
12:30 |
Daniel Pardo AI-Powered cheese manufacturing: a digital future for dairy Milk's high nutritional value and versatility make it a key raw material for products such as yogurt, cheese, butter, milk powder, and cream, all of which extend its shelf life. Its composition can vary owing to factors such as the animals' diet, stage of lactation, genetic background, and health and welfare. This variability determines how the milk will be used, as well as the performance and quality of the products obtained. Cheese, the most important derivative both for direct consumption and as an ingredient, relies on three critical stages: coagulation, syneresis, and ripening. Coagulation sets the course for the rest of the process: the exact moment the curd is cut determines whey syneresis, solids retention, and ultimately the final texture and yield. The objective of this thesis is to develop an artificial intelligence-based predictive model for the key variables in cheese production, improving consistency and industrial efficiency. |
13:00 |
Lunch |
14:30 |
Marina Prat Ethnographic study of robotics in orthopedic surgery The introduction of surgical robots in the operating room brings about significant changes in hospitals' daily practices. In the field of orthopedic surgery and traumatology, the uptake of robot-assisted surgery has increased in the last decade, especially in knee arthroplasties. In these procedures, robots enable surgeons to plan intraoperatively and aid in the execution of more personalized prosthesis placement. Although this increase in precision and accuracy during surgery is highly promising, scientific evidence for the superiority of robotics over conventional methods is limited. New technologies have a direct impact on the daily practices of operating rooms and hospital organization, and they transform the skills of the professionals involved in these interventions, as well as patients' experiences, in ways that are hard to quantify or have not yet been identified. Through ethnography, this thesis aims to understand how the introduction of robots transforms the field of orthopedic surgery by delving into the knee surgery department of Hospital Clínic Barcelona, where one of these robots was adopted five years ago. Informed by a Science and Technology Studies (STS) framework, the ethnography consists of semi-structured interviews and participant observations in the hospital, focusing on how bodies, spaces, and practices in the hospital change for both patients and professionals. |
15:00 |
Valeria Giustarini Algebraic Constructions in Nonclassical Logics This thesis focuses on the mathematical study of nonclassical logics, which are meant to represent and study different kinds of reasoning. For example, they make it possible to deal with incomplete, partial, or inconsistent information. In particular, we are interested in the framework of substructural logics, which are a broad class of nonclassical logics that includes many of the most well-known systems; e.g., many-valued logics (which allow partial truth, or degrees of truth), intuitionistic logic (interpreting mathematical constructivism), some paraconsistent logics (where the logical system can handle contradictory information), and relevance logics (which emphasize meaningful connections between premises and conclusions in deductions). All these logics can be represented by means of algebraic structures called residuated lattices, which provide a common, unifying semantical framework. Semantical methods in general, and algebraic ones in particular, have proven effective in the study of logic, as they allow one to study syntactical properties from the algebraic perspective. These methods have indeed played a central role in the development of substructural logics throughout the last century. Residuated lattices have been widely studied, and are known to have a rich and deep theory. However, large classes of these structures still lack an effective algebraic description, hindering the understanding of the corresponding logical systems. To help bridge this gap, the first part of this thesis is dedicated to the development of novel algebraic constructions for the description of those algebras that lack an effective characterization. A key method is to develop constructions that obtain new structures from known ones. To this end, we will draw on techniques belonging to the mathematical domains of universal algebra and topology, which have proven extremely fruitful in the study of residuated lattices and substructural logics. The second part of the thesis focuses on applying these constructions to the study of syntactical properties of substructural logics, which can be relevant in applications to model checking or automated proof systems (e.g., the interpolation property, or unification problems). This line of work will allow us to push the frontiers of our structural understanding of residuated lattices and deepen our semantic insight into substructural logics. |
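Since residuated lattices are the unifying semantics mentioned above, it may help to recall their defining law (this is the standard definition; the notation is the usual one, not necessarily the thesis's):

```latex
% A residuated lattice is a lattice-ordered monoid
% (A, \wedge, \vee, \cdot, \backslash, /, 1) whose monoid operation has two
% residuals, characterised by the residuation law:
x \cdot y \leq z \;\iff\; y \leq x \backslash z \;\iff\; x \leq z / y
\qquad \text{for all } x, y, z \in A.
```

Each substructural logic mentioned in the abstract corresponds to a class of these structures obtained by imposing further equations (e.g., commutativity of the monoid operation).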
15:30 |
Alejandra Lopez de Aberasturi Gomez Multiagent Models of Human Teamwork Productivity Humans possess innate collaborative capacities. However, effective teamwork often remains challenging. Social psychology and game theory provide valuable insights into the factors influencing voluntary collaboration in teamwork settings. This thesis delves into the feasibility of collaboration within teams of self-interested agents who engage in teamwork without the obligation to contribute. By integrating insights from aggregative game theory, reinforcement learning and social psychology, we seek to model realistic teamwork settings and to design a framework that learns approximations of realistic teamwork strategies. This research can contribute to the development of evidence-based approaches for designing teamwork settings that favour cooperation in teams and societies of humans and/or machines where cooperation is not enforced. |
16:00 |
Víctor Bermejo Gil Between Humans, Machines This doctoral research investigates human-robot interaction in the context of socially assistive robots (SARs) used in healthcare and caregiving. These technologies are increasingly deployed to support vulnerable populations and alleviate staff shortages, yet their integration raises ethical and social concerns, including dehumanization, privacy, and technological dependency. Grounded in Science and Technology Studies (STS) and empirical philosophy, the project explores how SARs are designed, implemented, and experienced in real-world settings. It focuses on how agency is distributed among robots, users, caregivers, and developers, how decisions about robot functions are made, and how standardization shapes social interactions. A central aspect of the research is the critical examination of the standards and conceptual frameworks that guide the design of SARs and their embedded AI systems. These frameworks not only define technical capabilities but also reflect sociocultural values and assumptions that influence how robots are perceived and used. |
16:30 |
Ion Mikel Liberal Proof complexity beyond Resolution The Boolean satisfiability problem (SAT) consists in determining whether a propositional CNF formula can be satisfied within the framework of classical logic. Although it has purely logical motivations of its own, SAT is frequently used to model a wide range of problems in computer science and industry. In the optimization variant of SAT, known as MaxSAT, the goal is not only to determine satisfiability but also to maximize the number of satisfied clauses (or equivalently, minimize the number of falsified ones). In practice, several approaches are employed to solve MaxSAT and its variants, most notably Branch-and-Bound and SAT-based algorithms. Among SAT-based methods, the most competitive are "core-guided" algorithms, which extract unsatisfiable subformulas using SAT solvers to guide the search toward an optimal solution; examples include the Fu-Malik and OLL algorithms. In this context, we propose CSat, a novel SAT-based algorithm that, rather than being guided by cores, attempts to infer them. Additionally, we introduce the Comparator Calculus, a formal proof system that leverages a propositional proof system as an oracle and is designed to model SAT-based reasoning in MaxSAT solvers. |
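To ground the SAT-based family of algorithms, here is a minimal, illustrative MaxSAT loop built on a SAT oracle: it relaxes each soft clause with a selector variable and searches for the smallest number of falsified softs. It assumes the PySAT package and is a generic sketch, not the CSat algorithm or Fu-Malik.

```python
# Illustrative SAT-based MaxSAT (assumption: PySAT, pip install python-sat).
from pysat.card import CardEnc, EncType
from pysat.solvers import Solver

def maxsat_linear(hard, soft, n_vars):
    """Minimum number of falsified soft clauses (all weights 1)."""
    selectors, relaxed, top = [], [c[:] for c in hard], n_vars
    for clause in soft:
        top += 1
        selectors.append(top)
        relaxed.append(clause + [top])   # selector true <=> clause may be falsified
    for k in range(len(soft) + 1):
        if k == 0:
            clauses = relaxed + [[-s] for s in selectors]  # no clause falsified
        else:
            # CNF encoding of "at most k selectors are true"
            card = CardEnc.atmost(lits=selectors, bound=k,
                                  encoding=EncType.seqcounter, top_id=top)
            clauses = relaxed + card.clauses
        with Solver(name="g3", bootstrap_with=clauses) as s:
            if s.solve():
                return k
    return len(soft)

# The multiset {x1, -x1, x1 v x2, -x2}: the optimum is 1 falsified clause.
print(maxsat_linear(hard=[], soft=[[1], [-1], [1, 2], [-2]], n_vars=2))
```

Core-guided solvers such as Fu-Malik avoid this linear sweep by relaxing only the clauses appearing in extracted unsatisfiable cores, which is what makes them competitive in practice.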
July 10
10:00 |
David Gomez Guillen Improving simulation model calibration for Cost-Effectiveness Analysis via Bayesian methods The use of mathematical simulation models of diseases in economic evaluation is an essential and common tool in medicine, aimed at guiding decision-making in healthcare. Cost-effectiveness analyses are a type of economic evaluation that assesses the balance between the health benefits and the economic sustainability of different health interventions. One critical aspect of these models is the accurate representation of the disease's natural history, which requires a set of parameters such as probabilities and disease burden rates. While these parameters can be obtained from the scientific literature, they often need calibration to fit the model's expected outcomes. However, the calibration process can be computationally expensive, and traditional optimization methods often rely on relatively simple heuristics that may not even guarantee feasible solutions. This thesis explores the application of Bayesian optimization to improve the calibration process, leveraging domain-specific knowledge and exploiting structural properties to efficiently handle multiple constraints in high-dimensional functions, using a sequential block decomposition coupled with data-driven embeddings. |
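As a minimal sketch of Bayesian-optimization-based calibration, the loop below fits two parameters of a toy stand-in for a disease model to target outcomes, using the gp_minimize routine from scikit-optimize; the model, the parameters, and the targets are all hypothetical, and the thesis's block decomposition and embeddings are not shown.

```python
# Illustrative calibration via Bayesian optimization (assumption:
# scikit-optimize as the BO engine; the disease model is a toy stand-in).
import numpy as np
from skopt import gp_minimize

TARGETS = np.array([0.12, 0.05])   # hypothetical observed outcomes

def simulate(params):
    """Stand-in for an expensive natural-history simulation model."""
    p_progress, p_death = params
    return np.array([p_progress * (1 - p_death), p_progress * p_death])

def calibration_loss(params):
    # squared distance between simulated and target outcomes
    return float(np.sum((simulate(params) - TARGETS) ** 2))

result = gp_minimize(
    calibration_loss,
    dimensions=[(0.0, 1.0), (0.0, 1.0)],   # bounds for the two probabilities
    n_calls=40,
    random_state=0,
)
print(result.x, result.fun)   # calibrated parameters and residual error
```

The appeal of this approach is sample efficiency: the surrogate model concentrates the expensive simulation calls where they are most informative, which matters when each model run is costly.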
11:30 |
Guillem Rodríguez Corominas Algorithms and techniques for computer-assisted anastylosis Fragmented image reassembly reconstructs 2D images from torn or broken pieces, with applications in archaeology, art restoration, and forensics. This thesis focuses on computerized anastylosis, a process that automates the reconstruction of fragmented artifacts, addressing challenges such as missing pieces, erosion, color degradation, and irregular fragment shapes. We begin by developing a realistic instance generator that fragments real-world paintings into varied shapes (both concave and convex) and sizes, enabling rigorous benchmarking. To extract meaningful features from degraded fragments, we introduce "SoftBinReduce," a soft-binning reduction technique that accelerates color quantization while preserving fidelity. Complementing this, an accelerated k-means++ initialization algorithm leverages the triangle inequality and norm-based filters to reduce unnecessary distance calculations, yielding significant speedups over existing methods. Building on these foundational steps, we propose a graph-based pairwise filtering approach that fuses texture and color descriptors to prune unlikely neighbor candidates, enhancing robustness to erosion and discoloration. Recognizing the limitations of supervised matching in settings lacking annotated data, we outline plans for unsupervised or semi-supervised matching methods tolerant of incomplete contours and color loss. For the global reassembly phase, we advocate incremental metaheuristics, specifically Ant Colony Optimization (ACO) and GRASP, over deterministic or genetic approaches, allowing the algorithm to recover from early placement errors and converge on globally consistent reconstructions. All components will be integrated into a C++ library. |
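As an illustration of the norm-based filtering idea, here is a minimal sketch of k-means++ seeding in which the reverse triangle inequality supplies a cheap lower bound that lets many exact distance computations be skipped; the function names and the exact filtering strategy are assumptions, not the thesis's algorithm (which targets a C++ library; Python is used here for brevity).

```python
import numpy as np

def kmeanspp_filtered(X, k, seed=0):
    """k-means++ seeding with a norm-based distance filter (illustrative)."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(X, axis=1)                 # precomputed point norms
    centers = [X[rng.integers(len(X))]]
    d_near = np.linalg.norm(X - centers[0], axis=1)   # distance to nearest center
    skipped = 0
    for _ in range(1, k):
        # D^2 sampling: next center chosen proportionally to d_near**2
        probs = d_near ** 2 / np.sum(d_near ** 2)
        c = X[rng.choice(len(X), p=probs)]
        centers.append(c)
        # Reverse triangle inequality: ||x - c|| >= | ||x|| - ||c|| |, so points
        # whose lower bound already exceeds d_near cannot improve and are skipped.
        lb = np.abs(norms - np.linalg.norm(c))
        mask = lb < d_near
        skipped += int(np.count_nonzero(~mask))
        d_near[mask] = np.minimum(d_near[mask],
                                  np.linalg.norm(X[mask] - c, axis=1))
    return np.array(centers), skipped

X = np.random.default_rng(1).normal(size=(1000, 3))
centers, skipped = kmeanspp_filtered(X, k=8)
print(centers.shape, "exact distance computations skipped:", skipped)
```

The filter is exact, not approximate: a skipped point's nearest-center distance provably cannot change, so the seeding produced is identical to plain k-means++ while the savings grow with how well separated the data is.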