Out of the many contributions to the field of AI by Hopfield and Hinton, I will present the ideas of the Hopfield network and the Boltzmann machine, the fundamental models for which they were awarded the Nobel Prize.
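As a primer, here is a minimal sketch of the two models in their standard textbook form (not the speaker's material): a Hopfield network stores patterns with a Hebbian rule and recalls them by energy-descending updates, while a Boltzmann machine makes the same update stochastic, as noted in the closing comment.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: W = (1/N) * sum of outer products, zero diagonal."""
    N = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / N
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=100):
    """Asynchronous sign updates; each flip lowers the energy
    E(s) = -1/2 * s^T W s, so the state settles into a stored attractor."""
    s = state.copy()
    for _ in range(steps):
        i = np.random.randint(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# A Boltzmann machine replaces the hard threshold by a stochastic rule,
# P(s_i = +1) = 1 / (1 + exp(-2 * beta * W[i] @ s)), sampling a Gibbs
# distribution over states instead of descending to a fixed point.

patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
W = train_hopfield(patterns)
noisy = np.array([1, -1, 1, -1, -1, -1])   # corrupted copy of pattern 0
print(recall(W, noisy))
```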
Jesus Cerquides is a researcher at the Artificial Intelligence Research Institute (IIIA) of the Spanish National Research Council (CSIC). His previous positions include assistant professorships at the University of Barcelona and Pablo de Olavide University, Chief Technology Officer and board member at Intelligent Software Components S.A., and Associate Director at Union Bank of Switzerland (UBS AG). His research interests include probabilistic and causal machine learning, multi-agent systems, and democratic and AI-enhanced citizen science. He regularly serves as a senior PC member at major AI conferences such as IJCAI and AAAI. He is currently an associate editor of the Artificial Intelligence Journal.
The Tsetlin machine is a new universal artificial intelligence (AI) method that learns simple logical rules to understand complex things, similar to how an infant uses logic to learn about the world. Being logical, the rules are understandable to humans. Yet, unlike other intrinsically explainable techniques, Tsetlin machines are drop-in replacements for neural networks, supporting classification, convolution, regression, reinforcement learning, auto-encoding, language models, and natural language processing. They are further ideally suited to cutting-edge, low-cost hardware, enabling nanoscale intelligence, ultra-low energy consumption, energy harvesting, unrivalled inference speed, and competitive accuracy. In this seminar, I cover the basics and recent advances of Tsetlin machines, including inference and learning, advanced architectures, and applications.
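To make the rule-based flavour concrete, here is a toy sketch of the inference side only: hand-written conjunctive clauses voting for and against a class, which is how a trained Tsetlin machine classifies. During learning, the inclusion of each literal in a clause is governed by a Tsetlin automaton; the clauses below are illustrative, not learned.

```python
def clause(literals):
    """A clause is a conjunction of literals; each literal is
    (feature_index, expected_value). It fires only if all are satisfied."""
    return lambda x: all(x[i] == v for i, v in literals)

# Toy XOR classifier: positive clauses vote +1, negative clauses vote -1.
positive = [clause([(0, 1), (1, 0)]), clause([(0, 0), (1, 1)])]
negative = [clause([(0, 1), (1, 1)]), clause([(0, 0), (1, 0)])]

def predict(x):
    votes = sum(c(x) for c in positive) - sum(c(x) for c in negative)
    return 1 if votes >= 0 else 0

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, predict(x))   # prints the XOR pattern: 0 1 1 0
```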
Prof. Ole-Christoffer Granmo is the Founding Director of the Centre for Artificial Intelligence Research (CAIR) at the University of Agder, Norway. He obtained his master’s degree in 1999 and his PhD degree in 2004, both from the University of Oslo, Norway. In 2018, he created the Tsetlin machine, for which he was awarded the AI research paper of the decade by the Norwegian Artificial Intelligence Consortium (NORA) in 2022. Dr. Granmo has authored 180+ refereed papers with eight paper awards in machine learning, encompassing learning automata, bandit algorithms, Tsetlin machines, Bayesian reasoning, reinforcement learning, and computational linguistics. He has further coordinated 7+ research projects and graduated 55+ master's students and nine PhD students. Dr. Granmo is also a co-founder of NORA. Apart from his academic endeavors, he co-founded Anzyz Technologies AS and Tsense Intelligent Healthcare AS, and is an advisor at Literal Labs.
This seminar sheds light on the role of diversity in AI research and development and on how AI systems become discriminatory. The types of bias will be discussed, including designer bias, which is often overlooked. A framework is then presented to help us all conduct discrimination-free AI research.
This seminar is part of the “Women in Tech” series of conferences, initiated last year by the Consulate General of the Federal Republic of Germany, whose objective is to raise the visibility of female experts in research, science, and technology and to promote networking between German experts and their counterparts in Barcelona.
You can register now to attend (in person or online) at this link. The Zoom link will be sent a few days before the seminar.
Program:
About the lecturer
Dr. Kinga Schumacher studied computer science in Mannheim and completed her PhD in the field of artificial intelligence at the University of Potsdam. She is a senior researcher and deputy head of the Cognitive Assistance Systems research group at the German Research Center for Artificial Intelligence (DFKI). Her research focuses on the areas of "diversity-aware AI" and human-machine interaction. She is also involved in mapping the AI landscape with respect to AI methods and the capabilities of AI systems; this work feeds into the regulatory activities of Germany and the EU.
https://www.dfki.de/en/web/about-us/employee/person/kisc01
https://de.linkedin.com/in/dr-kinga-schumacher
Evaluation and Committees for the ALLIES program.
ALLIES is a postdoctoral training program in artificial intelligence led by the Spanish National Research Council, coordinated through the AIHUB Connection, and co-funded by the European Union. The aim is to recruit 17 postdoctoral researchers keen to conduct interdisciplinary and intersectoral research in Artificial Intelligence aligned with the Sustainable Development Goals. The seminar describes the organization of the selection process, the composition of the committees, and its four phases.
Seminar Evaluation Project ALLIES COFUND 101126626 HORIZON-MSCA
Participants: Lissette Lemus, Executive Coordinator of the AIHUB Connection and coordinator of the ALLIES Program, with the following guests:
The impressive new capabilities of systems created using deep learning are encouraging engineers to apply these techniques in safety-critical applications such as medicine, aeronautics, and self-driving cars. This talk will discuss the ways that machine learning methodologies are changing to operate in safety-critical systems. These changes include (a) building high-fidelity simulators for the domain, (b) adversarial collection of training data to ensure coverage of the so-called Operational Design Domain (ODD) and, specifically, the hazardous regions within the ODD, (c) methods for verifying that the fitted models generalize well, and (d) methods for estimating the probability of harms in normal operation. There are many research challenges to achieving these goals. But we must do more, because traditional safety engineering only addresses known hazards. We must design our systems to detect novel hazards as well. We adopt Leveson’s view of safety as an ongoing hierarchical control problem in which controls are put in place to stabilize the system against disturbances. Disturbances include novel hazards but also management changes such as budget cuts, staff turnover, novel regulations, and so on. Traditionally, it has been the human operators and managers who have provided these stabilizing controls. Are there ways that AI methods, such as novelty detection, near-miss detection, diagnosis and repair, can be applied to help the human organization manage these disturbances and maintain system safety?
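As one concrete (and deliberately simple) instantiation of the novelty-detection idea raised at the end of the description, a distance-based detector can flag inputs that fall outside the data seen during normal operation; this sketch uses scikit-learn's Local Outlier Factor and stands in for no particular method from the talk.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 2))   # "normal operation" data

# novelty=True switches LOF from outlier detection on the training set
# to scoring genuinely unseen points against the training distribution.
detector = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(X_train)

X_new = np.array([[0.1, -0.2],   # close to the training distribution
                  [6.0, 6.0]])   # a "novel hazard" far outside it
print(detector.predict(X_new))   # +1 = familiar, -1 = novel
```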
Dr. Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; PhD Stanford University 1984) is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. Dietterich is one of the pioneers of the field of Machine Learning and has authored more than 220 refereed publications and two books. He is the recipient of the 2024 IJCAI Award for Research Excellence. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability.
Dietterich has devoted many years of service to the research community and was recently given the ACML Distinguished Contribution and the AAAI Distinguished Service awards. He is a former President of the Association for the Advancement of Artificial Intelligence and the founding president of the International Machine Learning Society. Other major roles include Executive Editor of the journal Machine Learning, co-founder of the Journal for Machine Learning Research, and program chair of AAAI 1990 and NIPS 2000. He currently oversees the Computer Science categories at arXiv.
Dietterich spent a sabbatical year at the IIIA in 1998-99.
This seminar explores the intersection of game theory and computer science in the study of teamwork. Traditional approaches in these fields perceive teams either as cooperative units with shared goals or as groups of self-interested agents where cooperation is necessary for individual success. However, these models fall short in capturing the nuances of human teamwork, where collaboration is not always in everyone's interest and binding agreements are often unattainable.
A novel game-theoretical model will be introduced, making collaboration in teamwork optional, allowing for the assessment of team outcomes, and letting players decide their level of engagement. Additionally, a multiagent multi-armed bandit (MA-MAB) framework will be presented, where agents empirically learn strategic behaviour approximating the Nash Equilibrium. The analysis will demonstrate how agents exhibit human-like behaviour patterns, considering the impact of team heterogeneity, task typology, and assessment difficulty on their strategies.
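To illustrate the MA-MAB framing (with a payoff table that is purely illustrative, not taken from the talk), each agent can treat its engagement levels as bandit arms and learn from empirical payoffs. In the social-dilemma-style game below, shirking dominates, so the learned values steer both agents towards the shirking Nash equilibrium, mirroring the point that collaboration is not always in everyone's interest.

```python
import random

# Toy 2-player game: each agent picks an engagement level
# (arm 0 = shirk, arm 1 = collaborate). Payoffs are illustrative.
def payoff(a, b):
    table = {(0, 0): (1, 1), (0, 1): (3, 0), (1, 0): (0, 3), (1, 1): (2, 2)}
    return table[(a, b)]

Q = [[0.0, 0.0], [0.0, 0.0]]   # per-agent empirical action values
n = [[0, 0], [0, 0]]
eps = 0.1                      # epsilon-greedy exploration rate

def choose(agent):
    if random.random() < eps:
        return random.randrange(2)
    return max(range(2), key=lambda a: Q[agent][a])

for _ in range(20000):
    a0, a1 = choose(0), choose(1)
    r = payoff(a0, a1)
    for agent, act in ((0, a0), (1, a1)):
        n[agent][act] += 1
        Q[agent][act] += (r[agent] - Q[agent][act]) / n[agent][act]

print(Q)   # both agents learn to prefer arm 0: the Nash equilibrium
```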
Alejandra López de Aberasturi is a PhD student in Artificial Intelligence at IIIA-CSIC, with research interests in Reinforcement Learning (RL) and the application of agent-based models to tackle human challenges such as resolving moral dilemmas. She holds a Master's degree in Artificial Intelligence from the Polytechnic University of Catalonia and a Master's degree in Physics and Mathematics applied to biology from the University of Granada. She also has a Bachelor's degree in Physics from the University of Granada.
PROGRAM:
In this seminar/round table we will be joined by two heavyweights of AI in our country: Carme Torras and Ramon López de Mántaras. Moderated by Jordi Sabater-Mir, they will put generative AIs "on trial". In a relaxed conversation, we will try to lay out the positive aspects of these technologies, as well as those that are not so positive.
Carme Torras (www.iri.upc.edu/people/torras) is a mathematician, holds a PhD in computer science, and is a research professor at the Institut de Robòtica i Informàtica Industrial (CSIC-UPC), where she leads a research group in assistive robotics. She has received both the Catalan and the Spanish National Research Prizes for her contributions to intelligent and social robotics. Committed to the promotion of technoethics, her novels La mutació sentimental and Enxarxats (both Ictineu Award winners) and her short-story collection Estimades màquines address ethical dilemmas raised by robotic assistants, AI, and social networks. She is vice-president of the CSIC Ethics Committee and a member of the Observatory of Ethics in Artificial Intelligence of Catalonia. @CarmeTorras_ROB
Ramon López de Mántaras i Badia is a computer scientist and physicist, an emeritus research professor of the Spanish National Research Council (CSIC), founder and former director of the Artificial Intelligence Research Institute, and an adjunct professor at the University of Technology Sydney and Western Sydney University. In November 2016 he was appointed a member of the Science and Technology Section of the Institut d'Estudis Catalans, and in 2018 he received the Julio Rey Pastor National Prize from the Ministerio de Ciencia, Innovación y Universidades. He currently devotes much of his time to science communication specialized in AI.
This seminar will be held in Catalan, although the slides will be in English.
One of the main problems in Explainable AI (XAI) consists in explaining, given a model M and an entity e, the prediction M(e) in such a way that the user can understand and interpret the algorithm's result. One of the most popular proposals for creating such explanations consists in ranking the features of e in terms of their relevance to the prediction of M: the better-ranked features are those more influential on the final result of the model. Many of these feature attribution techniques are conceptually based on the theory of cooperative games, and this holds especially for the SHAP-score, inspired by Shapley values. Exact computation of the latter is extremely challenging, but it has been proven that for certain families of simpler models (such as decision trees) they can be computed in polynomial time. Nonetheless, there still exists a caveat for practical applications: their computation relies on knowledge of the underlying distribution of the feature space, which is usually unknown. Even after (boldly) assuming feature independence and sampling the distribution from the training data set, there will still be some uncertainty related to statistical deviations and noise. In this talk, I'll present different problems related to reasoning around this uncertainty, alongside (hardness) complexity results.
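For concreteness, here is a brute-force sketch of the Shapley value of each feature under the feature-independence assumption discussed above, with background samples standing in for the unknown distribution; its exponential cost in the number of features is precisely why the polynomial-time results for restricted model families matter.

```python
import itertools, math
import numpy as np

def shap_values(model, x, background):
    """Exact Shapley values for a small feature set, estimating
    v(S) = E[model(z)] with z_S fixed to x_S and the remaining features
    drawn from background data (the feature-independence assumption)."""
    d = len(x)
    def v(S):
        z = background.copy()
        z[:, list(S)] = x[list(S)]
        return model(z).mean()
    phi = np.zeros(d)
    for i in range(d):
        rest = [j for j in range(d) if j != i]
        for k in range(d):
            for S in itertools.combinations(rest, k):
                w = math.factorial(k) * math.factorial(d - k - 1) / math.factorial(d)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi

# Toy linear model; the Shapley values recover each feature's contribution.
model = lambda X: X @ np.array([2.0, -1.0, 0.5])
bg = np.random.default_rng(1).normal(size=(256, 3))
print(shap_values(model, np.array([1.0, 1.0, 1.0]), bg))
```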
Santiago Cifuentes holds a Licentiate degree in Computer Science from the University of Buenos Aires and is currently completing his PhD in Computer Science at the same institution under the supervision of Dr. Santiago Figueira and Dr. Ariel Bendersky. Over the past three years, he has been actively engaged in research projects related to Knowledge Representation and Reasoning, mainly in the presence of uncertainty and in the context of graph database models. He has already made contributions to this field with publications in JAIR, IJAR, and AMW. Over the last year he has started to inquire into the field of Explainable AI, and is especially interested in studying the tractability frontier for different explainability proposals.
His PhD is related to the foundations of Quantum Computing. His main research interest in this area is quantum complexity theory, and he is currently working as an intern in the Quantum Algorithms Research Team at the Technology Innovation Institute of Abu Dhabi. He is also interested in understanding the relation between Quantum Physics and randomness from a computer scientist's point of view (i.e., through martingale and Kolmogorov complexity notions).
Trust is a fundamental aspect of human relationships. It is based on an interplay of different concepts, such as reliance, confidence, and belief in the reliability and integrity of a person or system. Trustworthiness, therefore, encompasses qualities such as honesty, consistency, and competence, which convey a sense of security and certainty in every interaction.
In the interaction between humans and artificial intelligence, trust plays a central role in shaping the dynamics between individuals and AI systems. In this context, trust concerns the reliability of AI in terms of expected performance, the transparency of its processes, and the ethical considerations that guide its decision-making. Just as trust is crucial in interpersonal relationships, it is equally important in fostering positive interactions between humans and AI.
In this talk, I will present the concepts of trust and trustworthiness in combination with AI and showcase some research projects on human-AI interaction with regard to diverse topics such as sustainability, education, and medicine.
In the study of symbolic reasoning, there are various formalisms to represent and model the dynamics of knowledge, among which AGM stands out, introducing belief contraction and revision operators. These operators model the basic decision-making of an agent upon receiving a new observation. When revision is applied, the agent believes in the new observation, or equivalently, disbelieves its negation. Meanwhile, contraction makes the new observation uncertain: if originally the agent consistently believed in the observation, then neither it nor its negation is believed after the contraction. Revision can thus be seen as an operator seeking certainty (either belief or disbelief), and contraction as an operator that pursues doubt (an unsettled state).
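In standard AGM notation (K a belief set, Cn classical consequence), the contrast just described can be summarized as follows:

```latex
% Success for revision (certainty) and for contraction (doubt):
\varphi \in K \ast \varphi
\qquad\qquad
\varphi \notin \mathrm{Cn}(K \div \varphi) \ \text{ whenever } \varphi \notin \mathrm{Cn}(\emptyset)
% The two operators are interdefinable:
K \ast \varphi = (K \div \lnot\varphi) + \varphi \quad \text{(Levi identity)}
\qquad
K \div \varphi = K \cap (K \ast \lnot\varphi) \quad \text{(Harper identity)}
```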
In this presentation, we introduce a new family of operators for belief change called moderated revision. This proposal aims to incorporate a feature of balance between certainty and complete doubt. We propose this as a more adequate and general model for belief change, where revision and contraction represent extreme situations. It encompasses a diverse family tree with over 20 subfamilies, including known operators, some adaptations, and novel properties. Among them, wise and imprudent operator families emerge, reflecting responses based on the strength of the new observation: wise operators tend to doubt strong affirmations, while imprudent operators tend towards unwarranted certainty.
Daniel Grimaldi holds a Licentiate degree in Mathematical Sciences from the University of Buenos Aires and is currently completing his PhD in Computer Science at the same institution under the supervision of Prof. Dr. Vanina Martinez and Prof. Dr. Ricardo Rodriguez. Over the past four years, he has been actively engaged in research with the Logic, Language, and Computability Research Group, in addition to participating in seminars with the Buenos Aires Logic Group. This interdisciplinary experience has equipped him with a versatile skill set ideal for research in Belief Change Theory and Knowledge Representation and Reasoning. He has already made contributions to the field, with publications in the IJCAI and KR conferences, as well as in the IJAR journal. Currently, he serves as a full-time teaching assistant in the Department of Computer Science and as a researcher in training at the Institute of Computer Sciences of UBA/CONICET.
We use the previous results to characterize finitely generated projective algebras in the two varieties, which turn out to be exactly the finitely presented algebras. From the point of view of the associated logics, via Ghilardi's algebraic approach to unification problems [3], this implies that their unification type is (strongly) unitary: there is always a best solution to a unification problem, and it is represented algebraically by the identity homomorphism; this parallels the case of product algebras and DLMV-algebras studied in [1]. The study of unification problems is strongly connected to the study of admissible rules (or, in the algebraic setting, admissible quasiequations); a rule is said to be admissible in a logic if every substitution that makes the premises a theorem of the logic also makes the conclusion a theorem of the logic.
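In symbols, the admissibility notion just recalled reads:

```latex
\Gamma / \varphi \ \text{ is admissible in } L
\quad\Longleftrightarrow\quad
\text{for every substitution } \sigma : \
\big(\vdash_L \sigma(\gamma) \ \text{for all } \gamma \in \Gamma\big)
\ \Longrightarrow\ \vdash_L \sigma(\varphi).
```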
As a consequence of our results, we get that the logics associated to both product hoops and DLW-hoops are structurally complete, i.e. the admissibility of rules coincides with their derivability; using results in [2], we can actually conclude that the two logics are universally complete, that is, admissibility coincides with derivability also for multiple-conclusion rules.
References
[1] Aglianò, P., Ugolini, S.: Projectivity and unification in substructural logics of generalized rotations. International Journal of Approximate Reasoning 153, 172–192 (2023).
[2] Aglianò, P., Ugolini, S.: Structural and universal completeness in algebra and logic. Submitted, (2023). arXiv:2309.14151
[3] Ghilardi S.: Unification through projectivity, J. Logic Comput. 7, 733–752, (1997).
The main aim of this part is to deepen the understanding of the variety of product algebras P and the variety DLMV generated by perfect MV-algebras, investigating in particular the role of the falsum constant 0. As one of the main outcomes of this work, we go back from hoops to the corresponding 0-bounded varieties, and we exhibit the free functor from the varieties of hoops of interest to the corresponding 0-bounded varieties. In other words, we show a construction that freely adds the falsum constant 0: starting from a product hoop (or a DLW-hoop), we obtain the product algebra (DLMV-algebra) freely generated by it.
The construction for DLW-hoops is shown to coincide with the MV-closure introduced in [1].
References
[1] Abad, M., Castaño, D., Varela, J.: MV-closures of Wajsberg hoops and applications. Algebra Universalis 64, 213–230 (2010).
Lattice-ordered abelian groups, or abelian l-groups in what follows, are categorically equivalent to two classes of 0-bounded hoops that are relevant in the realm of the equivalent algebraic semantics of many-valued logics: liftings of cancellative hoops and perfect MV-algebras. The former generate the variety of product algebras, and the latter the subvariety of MV-algebras generated by perfect MV-algebras, that we shall call DLMV. In this seminar we focus on these two varieties and their relation to the structures obtained by forgetting the falsum constant 0, i.e., product hoops and DLW-hoops.
A first main result is a characterization of the free algebras over an arbitrary set of generators in the two varieties of product and DLW-hoops; the latter are obtained as particular subreducts of the corresponding free algebras in the 0-bounded varieties. More precisely, we obtain a representation in terms of weak Boolean products of which we characterize the factors. This kind of description for 0-bounded residuated lattices is present in the literature, but we are not aware of analogous results for varieties of residuated structures with just the constant 1.
We observe that in a variety that is the equivalent algebraic semantics of a logic, free (finitely generated) algebras are isomorphic to the Lindenbaum-Tarski algebras of formulas of the logic; thus their study is important from both the perspective of algebra and logic.
Giga is a collaborative project between UNICEF and the ITU, focusing on bringing internet connectivity to educational institutions. A critical need for the team is accurate data on school locations. Traditional methods of collecting this data are resource-intensive and challenging, especially in remote areas. To address these challenges, we are advancing techniques that combine high-resolution satellite imagery with computer vision to enhance data gathering. While these methods have shown success in specific countries, such as Sudan, further advancements are needed to create scalable solutions.
Dr. Dohyung Kim serves as the Data Science Lead for the Giga initiative at UNICEF, where he spearheads the integration of machine learning (ML) and earth observation data. He holds a PhD in Geographical Sciences from the University of Maryland, specializing in remote sensing for global forest monitoring. Prior to his role at UNICEF, Dr. Kim contributed his expertise as a postdoctoral researcher at NASA and has also worked with various international organizations, including the United Nations Environment Programme (UNEP) and the World Bank.
During this talk, I will recount my personal experience as a researcher in the field of AI, highlighting scientific and applied research projects, the background of my university's graduate programs, and ways in which institutional collaborations could be established. I will also analyze the evolution of Artificial Intelligence in Chile and its current situation, paying special attention to the challenges, such as regulation, and the opportunities before us in this constantly changing field.
Carola Andrea Figueroa Flores is an Assistant Professor A in the Department of Computer Science and Information Technologies at the Universidad del Bío-Bío. She has played an active role as a researcher, director, and co-investigator in various applied interdisciplinary projects funded by Chile's National Research and Development Agency (ANID). Her academic background includes a PhD in Computer Science from the Autonomous University of Barcelona, a Master's in Computer Science from the University of Concepción, as well as a Bachelor's degree in Computer Science and a Civil Engineering degree in Informatics from the Universidad del Bío-Bío. Her research focuses on applying Artificial Intelligence to various social challenges, covering areas such as health, agriculture, the environment, transport, and finance, using techniques such as machine learning, computer vision, natural language processing, data science, and pattern recognition. The results of her research have been published in leading international scientific journals. She has also participated in numerous national and international conferences, as well as in panels and talks to communicate the impact and opportunities of AI to the community and hospitals of the Bío-Bío and Ñuble regions. She is also part of important initiatives related to forest fire prevention and control in the Bío-Bío Region, as well as the working group on Artificial Intelligence in the Senate and the Library of the National Congress, actively contributing to the development of public policy in this field.
Federated Learning (FL) is a recent technique that emerged to handle the huge amount of training data needed by machine learning algorithms and to address privacy concerns in such models. Simultaneously, the field of Quantum Computing (QC) has grown exponentially, and quantum properties such as entanglement and superposition have proven more efficient for certain machine learning tasks, giving rise to the field known as Quantum Machine Learning (QML). Thus, a handful of articles have recently studied a possible Quantum Federated Learning (QFL) framework. This paper presents an exhaustive survey of this topic and aims to fill the gap in the literature in a comprehensive way. Moreover, it offers an original taxonomy of the field and proposes future challenges and remarks.
Product logic is, together with Łukasiewicz logic and Gödel logic, one of the fundamental logics in Hájek’s framework of fuzzy logics arising from a continuous t-norm. In algebraic terms, this means that product algebras are one of the most relevant subvarieties of BL-algebras. From the algebraic perspective, representations of product algebras have mostly highlighted their connection with cancellative hoops; however, not much is known about the relation between product algebras and the variety of residuated lattices that constitutes their 0-free subreducts: product hoops. This contribution focuses on two main results: 1) we prove that product hoops coincide with the class of maximal filters of product algebras, seen as residuated lattices; 2) we show a construction that, given any product hoop H, obtains a product algebra freely generated by H; in terms of the corresponding algebraic categories, we exhibit the free functor, i.e., the left adjoint to the forgetful functor from product algebras to product hoops.
In this talk I will present the results of my undergraduate thesis on Belief Revision. I will present the weak ensconcement: a non-prioritized, order-based constructive framework for building a contraction operator. This operator characterizes an interesting family of shielded base contractions. In turn, this characterization induces a class of AGM contractions satisfying certain postulates. Finally, I will show a connection between the class of contractions given by the weak ensconcement and other kinds of belief base contraction operators. In doing so, I will also point out a flaw I discovered in the original theorems that link epistemic entrenchment with ensconcement (which are well established in the literature), and then I'll provide two possible solutions.
Alejandro Joaquin Mercado: "I am 23 years old and I recently obtained my undergraduate degree in Computer Science at the University of Buenos Aires. Last year I won a scholarship to learn about Belief Revision and write my final thesis on that subject under the supervision of Prof. Ricardo Rodriguez. In the course of this research I obtained intriguing results, which we submitted as a paper to KR, where it was accepted for the conference's main track. Currently, I'm looking to start a PhD. My current interest is in Safe and Explainable AI. I'm curious about mixing my formal background in logic and formal proofs with machine learning algorithms in order to build more explainable models."
The Barcelona Supercomputing Center (BSC) has recently started a new program on Computational Social Sciences that aims to create and provide support to new projects combining social science research with the analysis of large amounts of data or with artificial intelligence, which usually requires high-performance computing. This talk will present the new program and its objectives.
Mercè Crosas is a researcher at the Barcelona Supercomputing Center (BSC) and an expert in data science, data management, open data, and FAIR data (Findable, Accessible, Interoperable, Reusable). Since early 2023, Crosas has been the Head of the Computational Social Sciences Program at the BSC, a new program that aims to facilitate the use of data and computing in social science and humanities research, and to create and support computational studies in these areas.
Before this position, Crosas was the Secretary of Open Government at the Generalitat de Catalunya from 2021 to 2022, a high-ranking government position responsible for open data, transparency, and citizen participation. Most of her professional career has been spent at Harvard University, eventually as Chief Data Science and Technology Officer at the Institute for Quantitative Social Sciences and University Research Data Management Officer. She has also worked on the development of data systems in biotechnology companies and has conducted research and built scientific software in astrophysics at the Harvard-Smithsonian Center for Astrophysics. Crosas holds a doctorate in Astrophysics from Rice University and a degree in Physics from the University of Barcelona.
We introduce and investigate a family of non-monotonic consequence relations which are motivated by the goal of capturing important patterns of scientific inference.
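The defining pattern such relations must accommodate is the failure of monotonicity, in its classic form (an illustration of the phenomenon, not an example from the talk):

```latex
\mathrm{bird} \mathrel{|\!\sim} \mathrm{flies}
\qquad\text{yet}\qquad
\mathrm{bird} \wedge \mathrm{penguin} \mathrel{|\!\sim} \lnot\mathrm{flies}
```

a combination that is impossible for a monotonic consequence relation, where Γ ⊢ ψ implies Γ ∪ {χ} ⊢ ψ.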
Esther Anna Corsi: I am a postdoc in the Logic Group of the Department of Philosophy of the University of Milan. I obtained my PhD in Computer Science at TU Wien, in the Theory and Logic Group of the Institute of Computer Languages, under the supervision of Chris Fermüller.
The challenge is to propose approaches that can solve the spatial tests used to measure human intelligence. On the one hand, we can apply these approaches in smart systems (e.g., computer games, robots) so that they can improve their spatial thinking. On the other hand, we can use these approaches to help improve humans' spatial thinking by providing them with useful feedback. I will present and discuss two cases: (1) the cube rotation test and (2) the paper folding-and-punching test.
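One possible encoding for the cube rotation test (a generic illustration, not necessarily the representation used in the talk) treats the cube's faces as unit normals and a mental rotation as a matrix acting on all of them at once:

```python
import numpy as np

# The cube's six faces as outward unit normals.
faces = {"top": (0, 0, 1), "bottom": (0, 0, -1), "front": (0, 1, 0),
         "back": (0, -1, 0), "right": (1, 0, 0), "left": (-1, 0, 0)}

Rx90 = np.array([[1, 0, 0],      # 90-degree rotation about the x axis
                 [0, 0, -1],
                 [0, 1, 0]])

rotated = {name: tuple(int(v) for v in Rx90 @ np.array(n))
           for name, n in faces.items()}
print(rotated["top"])   # (0, -1, 0): the old top face now points "back"
```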
Zoe Falomir Llansola: Currently, I am a Ramón y Cajal fellow at Universitat Jaume I (UJI), Castellón, Spain. Before that, I was a postdoc researcher for 7 years at the Spatial Cognition Center at the University of Bremen, Germany, where I was principal investigator in projects bridging the sensory-semantic gap. I am a doctor engineer in computer science. I obtained a joint PhD from UJI, Spain (Dr.) and the University of Bremen (Dr.-Ing.). I also carried out research transfer to industry at Cognitive Robots SL, where I applied results of my PhD thesis to the automation of mosaic assembly, and I received the Castellón City Award for Experimental Sciences and Technology for this work. At the moment I am developing reasoning algorithms to solve spatial reasoning challenges and testing them in video games which can be used to train people's skills. We intend to transfer these applications to educational institutions in the near future. My research expertise lies in Qualitative Reasoning, Knowledge Representation techniques, Human-Machine Interaction, Machine Learning, Colour Cognition, Bioinformatics, Geographic Information Systems, and Creative and Spatial Problem Solving.
Nicolaus Copernicus was the astronomer who replaced the geocentric conception with the heliocentric one: the Sun stood at the centre, and the planets, the Earth among them, revolved around it. This year marks the 550th anniversary of his birth, a good occasion to remember him. In this spirit, I propose to review Copernicus's biography, with emphasis on his education and his caution in publishing the new planetary model. Naturally, I will dwell on his momentous scientific contribution and the impact it had on later astronomers (Kepler, Galileo) up to Newton. Finally, I will review the impact this change had on the conception of the world in people's minds.
Pedro Meseguer is a CSIC research scientist at the IIIA. He holds degrees in physics and in computer science and received his PhD from the UPC in 1992. His research has focused on constraint reasoning and heuristic search, topics on which he has published extensively. He has participated in several research projects and has carried out editorial duties at specialized journals. In parallel with his research, he has taught, supervised doctoral theses, and served the AI community. He is a EurAI Fellow.
This presentation provides an overview of paper assignment processes within the context of large conferences. We will describe the algorithms and techniques employed to compute assignments in recent editions of selected conferences, highlighting their effectiveness and their problems. Furthermore, we will discuss the various threats we aim to mitigate in this process, while also exploring the key factors contributing to reviewer satisfaction and examining instances of dissatisfaction. Finally, we will present some ideas that can help improve the assignment process in the future.
Francisco Cruz: "I had the opportunity to join the IIIA-CSIC staff back in 2000. During my years at the IIIA-CSIC I started to collaborate with different organizations, such as IJCAI in 2011, where I started with a small role. Nowadays I am fully employed by IJCAI and have two companies providing different services to large conference organizations such as IJCAI, AAAI, and ECAI. Throughout the past year, my focus has been on developing software tailored to assist conference organizers in streamlining their review processes. We aim to enhance the efficiency and effectiveness of the review process."
Real-world setups pose significant challenges for modern Deep Reinforcement Learning algorithms: agents struggle to explore high-dimensional environments and have to provably guarantee safe behaviors under partial information in order to operate in our society. In addition, multiple agents (or humans) must learn to interact while acting asynchronously through temporally extended actions. I will present our work on fostering diversified exploration and safety in real domains of interest. We tackle these problems from different angles, such as (i) using Evolutionary Algorithms as a natural way to foster diversity; (ii) leveraging Formal Verification to characterize policies' decision-making and designing novel safety metrics to optimize; (iii) designing macro-action-based algorithms to learn coordination among asynchronous agents.
Enrico Marchesini is a Postdoctoral research associate in the Khoury College of Computer Sciences at Northeastern University, advised by Christopher Amato. He completed his Ph.D. in Computer Science at the University of Verona (Italy), advised by Alessandro Farinelli. His research interests lie in topics that can foster real-world applications of Deep Reinforcement Learning. For this reason, he is designing novel algorithms for multi-agent systems while promoting efficient exploration and safety in asynchronous setups.
In this talk, we explore the emergence of collaboration in multiagent reinforcement learning (MARL) through various examples, highlighting the role of human biases such as loss aversion and their impact on collaborative dynamics. I will then focus on a recent paper co-authored by Ricard Sole and Clement Moulin-Frier (https://arxiv.org/abs/2212.02395), in which we model adaptive dynamics in complex ecosystems. Building on the Forest-Fire model, the study uses fire as an external, fluctuating force in a simulated environment. Agents must balance tree harvesting and fire avoidance, resulting in the evolution of an ecological engineering strategy that optimally maintains biomass while suppressing large fires. We will discuss the implications of these findings for AI management of complex ecosystems, emphasizing the potential benefits of incorporating MARL and collaboration strategies into environmental management and conservation efforts.
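As background for the environment used in that paper, here is a minimal sketch of the classic Drossel-Schwabl forest-fire cellular automaton; the rates, lattice size, and initialization are illustrative, and the paper's learning agents act on top of such a substrate rather than being shown here.

```python
import numpy as np

EMPTY, TREE, FIRE = 0, 1, 2
p_grow, f_lightning = 0.05, 0.001          # illustrative rates

def step(grid, rng):
    """One synchronous update of the forest-fire model."""
    new = grid.copy()
    burning_nbr = np.zeros_like(grid, dtype=bool)
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        burning_nbr |= np.roll(grid, shift, axis=axis) == FIRE
    new[grid == FIRE] = EMPTY                                  # burn out
    ignite = (grid == TREE) & (burning_nbr |
                               (rng.random(grid.shape) < f_lightning))
    new[ignite] = FIRE                                         # spread / lightning
    grow = (grid == EMPTY) & (rng.random(grid.shape) < p_grow)
    new[grow] = TREE                                           # regrowth
    return new

rng = np.random.default_rng(0)
grid = (rng.random((64, 64)) < 0.5).astype(int)   # random initial trees
for _ in range(200):
    grid = step(grid, rng)
print((grid == TREE).mean())   # surviving biomass fraction
```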
Martí Sánchez-Fibla is a tenure-track researcher at UPF and has recently been granted a research scientist position at IIIA-CSIC. He is currently leading an industrial research project under red.es (http://red.es/) with the company BMAT, and he has previously been principal investigator of the Plan Nacional INSOCO. His research focuses on the areas of Constraint Optimization (inference and search algorithms for problem solving), NeuroRobotics (cognitive architectures, sensorimotor learning, adaptability), and Complex Systems.
In this presentation, I will provide a concise overview of my current research focused on molecular design in the small data regime. Specifically, I will discuss the importance of developing reliable predictive models that effectively quantify uncertainty, which is crucial for effective molecular design. One promising technique for achieving this goal is active learning through Bayesian Optimization. This will motivate the second part of my talk, where I will introduce a novel simulation-based approach for Bayesian Optimization, which has the potential to improve the efficacy of molecular design. I will analyze the convergence issues related to this approach and present empirical evidence of its effectiveness.
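For orientation, here is a minimal sketch of the baseline this line of work builds on: Bayesian Optimization with a Gaussian-process surrogate and an expected-improvement acquisition. The one-dimensional objective is a toy stand-in for an expensive molecular-property oracle, and the speaker's simulation-based approach differs from this textbook loop.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

f = lambda x: -(x - 0.6) ** 2          # stand-in for an expensive oracle
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (4, 1))          # small-data regime: 4 initial designs
y = f(X).ravel()

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True).fit(X, y)
    cand = np.linspace(0, 1, 200).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    # Expected improvement trades off high mean against high uncertainty.
    z = (mu - y.max()) / np.maximum(sigma, 1e-9)
    ei = (mu - y.max()) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, [x_next]])
    y = np.append(y, f(x_next))

print(X[np.argmax(y)], y.max())        # best design found
```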
Roi Naveiro currently holds a tenure-track Assistant Professor position at CUNEF Universidad. He holds a BSc in Physics from the University of Salamanca, and an MSc in Theoretical Physics and a PhD in Statistics and Operations Research from the Complutense University of Madrid. His work focuses on probabilistic machine learning, Bayesian statistics, and decision theory, as well as their applications to problems related to drug discovery and materials design, among others. Naveiro has published more than 10 articles in international journals and one book. He has participated in more than five national and European research projects and has been principal investigator in three projects with industry. In addition, he actively collaborates with AItenea Biotech, a spin-off of the Spanish National Research Council focused on molecular design. He has been a visitor at Duke University and the Statistical and Applied Mathematical Sciences Institute (Durham, North Carolina, USA).
In this talk, decision-making and temporal tasks inspired by Boolean functions are analyzed, exploring the connectivity patterns, dynamics, and biological constraints of recurrent neural networks (RNNs) after training. In Computational Neuroscience, such models focus on brain regions such as the cortex and prefrontal cortex and their recurrent connections related to different cognitive tasks. Understanding the dynamics behind these models is crucial for building hypotheses about brain function and explaining experimental results.
The dynamics are analyzed through numerical simulations, and the results are classified and interpreted. The study sheds light on the multiplicity of solutions for the same tasks and the link between the spectra of linearized trained networks and the dynamics of their counterparts. The distribution of eigenvalues of the recurrent weight matrix is studied and related to the dynamics in each task. Approaches and methods based on trained networks are presented. The importance of having a software framework that facilitates testing different hypotheses and constraints is also emphasized.
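The kind of spectral analysis described can be illustrated on a random (untrained) recurrent weight matrix, whose eigenvalues fill a disk by the circular law; training reshapes this spectrum task by task. This is a generic sketch, not the speaker's code.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
N, g = 500, 1.2                                  # size and gain, illustrative
W = rng.normal(0.0, g / np.sqrt(N), (N, N))      # random recurrent weights

# In a linearized rate network tau*dx/dt = -x + W x, eigenvalues with
# Re(lambda) > 1 mark unstable (potentially oscillatory) modes.
eigvals = np.linalg.eigvals(W)
plt.scatter(eigvals.real, eigvals.imag, s=2)
plt.gca().add_patch(plt.Circle((0, 0), g, fill=False))
plt.axvline(1.0, linestyle="--")
plt.xlabel("Re(lambda)"); plt.ylabel("Im(lambda)")
plt.gca().set_aspect("equal")
plt.show()
```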
Cecilia Jarne did her PhD in physics at the IFLP and the Physics Department of the National University of La Plata, followed by a postdoc at the Buenos Aires University Physics Department (IFIBA). Her research experience is based on the analysis of large data sets, programming, and modelling, first in high-energy cosmic ray physics and then, during her postdoctoral research, analyzing bird songs and dynamics. Currently, she is an assistant researcher and professor at Universidad Nacional de Quilmes and CONICET, working on Recurrent Neural Networks and Complex Systems since 2018. During 2023 she is doing a research stay at the CFIN in Aarhus, Denmark.
Graph databases are becoming widely successful as data models that allow complex relationships among various types of data to be represented and processed effectively. Data-graphs are particular types of graph databases whose representation allows data values both in the paths and in the nodes to be treated as first-class citizens by the query language. As with any other type of data repository, data-graphs may suffer from errors and discrepancies with respect to the real-world data they intend to represent. In this talk, we explore the notion of probabilistic unclean data-graphs in order to capture the idea that the observed (unclean) data-graph is actually the noisy version of a clean one that correctly models the world, but that we only know partially. As the factors that lead to such an observation depend heavily on the application domain and may be the result of different types of clerical errors or unintended transformations of the data, we consider an epistemic probabilistic model that describes the distribution over all possible ways in which the clean (uncertain) data-graph could have been polluted. Based on this model, we study data cleaning and probabilistic query answering for this framework and present complexity results when the transformation of the data-graph is caused by either removing (subset), adding (superset), or modifying (update) nodes and edges.
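To give the flavour of the epistemic model in the simplest "subset" case (edge deletions only), here is a toy sketch; the edge universe, deletion probability, and uniform prior are illustrative assumptions, not the talk's actual model.

```python
import itertools

# Toy epistemic model: the observed data-graph arose from the clean one
# by independently deleting each edge with probability q.
universe = [("a", "b"), ("b", "c"), ("a", "c")]   # possible edges
observed = {("a", "b")}
q = 0.3                                           # deletion probability

def posterior():
    scores = {}
    for k in range(len(universe) + 1):
        for clean in itertools.combinations(universe, k):
            clean = set(clean)
            if not observed <= clean:   # deletions only: clean covers observed
                continue
            kept, dropped = len(observed), len(clean - observed)
            scores[frozenset(clean)] = (1 - q) ** kept * q ** dropped
    z = sum(scores.values())            # uniform prior, normalize
    return {g: s / z for g, s in scores.items()}

# Probabilistic query answering: how likely is edge (b, c) in the clean graph?
p_edge = sum(p for g, p in posterior().items() if ("b", "c") in g)
print("P(edge b-c in clean graph) =", round(p_edge, 3))
```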
Dr. Nina Pardal is currently a Research Associate at the Department of Computer Science of the University of Sheffield, in the UK. She obtained a PhD in Mathematics from the University of Buenos Aires, Argentina, and a PhD in Computer Science from the University Paris-Nord, France. Her research interests lie in the areas of Graph Theory, Logic and Computability, Complexity, and Knowledge and Reasoning.
Much is being said about how knowledge representation and reasoning models are supposed to play a very important part in the future development of Intelligent Systems. In this talk, I will argue that they can serve as the founding formal structure for the construction of such systems in complex (multi-agent) settings. I will show part of my research trajectory in formal modelling of knowledge dynamics, representing and reasoning with inconsistent knowledge, and how these concepts are at the core of improving Intelligent Systems' reasoning capabilities.
Dr Maria Vanina Martinez obtained her PhD at the University of Maryland, College Park, and pursued her postdoctoral studies at Oxford University in the Information Systems Group, working on Artificial Intelligence (AI) and Database Theory. Currently, she is a Ramón y Cajal Fellow at the Artificial Intelligence Research Institute (IIIA-CSIC) in Barcelona, Spain. Her research is in the area of knowledge representation and reasoning, with a focus on the formalization of knowledge dynamics, the management of inconsistency and uncertainty, and the study of the ethical and social impact of the development and use of systems based on Artificial Intelligence.
Engineers have long dealt with massive amounts of data accumulated over decades of fundamental experiments and field measurements, captured in the form of cleverly organized charts, tables, and heuristic laws. In the last few decades, our capability to generate data has increased even further with developments in (i) digital measurement techniques, including sensing technologies, (ii) computational power, (iii) faster, easier, and cheaper data transfer and storage, and (iv) post-processing tools and algorithms. On the other hand, the problems that need to be addressed today, such as food-water-energy security, pandemics and diseases, or global warming, are massive and at a completely different scale. More drastically, we have comparatively much less time to find sustainable solutions. Therefore, we need a paradigm shift in how we interpret the data we collect and solve our problems, one that can speed up our hypothesis-test cycle. In this talk, we will visit some case studies relevant to the energy problem and discuss how the expertise of AI specialists can tip the scales in our favour.
Cihan is a junior research group leader at KIT, in the Institute of Thermal Turbomachinery's Multiphase Flow & Combustion group. With his group, he works on the design and optimization of energy-intensive processes. He is also a PI at the Graduate School Computational and Data Science and the KIT Emerging Field of Health Technologies.
Over the last decade, research on autonomous vehicles (AVs) has made revolutionary progress, bringing hope of safer, more convenient, and more efficient means of transportation. Most significantly, the advance of artificial intelligence (AI), especially machine learning, allows a self-driving car to learn and adapt to complex road situations with millions of accumulated driving hours, far more than any experienced human driver can reach. However, autonomous vehicles on the roads also introduce new challenges to traffic management, especially when we allow them to travel mixed with human-driven vehicles.
New theories for better understanding the new era of transportation, and new technologies for smart roadside infrastructures and intelligent traffic control, are crucial for the development and deployment of autonomous vehicles. This presentation will discuss some of these challenges, especially the social aspects of autonomous driving, including interaction between autonomous vehicles and roadside infrastructures, mechanisms of traffic management, the price of anarchy in road networks, and automated negotiation between vehicles.
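The "price of anarchy" mentioned above is standardly illustrated by Pigou's two-road example (a textbook fact, included here for orientation): one unit of traffic travels from s to t over a road with constant latency 1 and a road with latency x equal to its traffic share. Selfish drivers all take the second road (total cost 1), while the optimum splits traffic evenly (cost 3/4):

```latex
\mathrm{PoA}
\;=\; \frac{\text{cost of the worst equilibrium}}{\text{optimal cost}}
\;=\; \frac{1}{\tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot\tfrac{1}{2}}
\;=\; \frac{1}{3/4} \;=\; \frac{4}{3}
```

which is also the worst case over all networks with affine latency functions (Roughgarden and Tardos).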
Dongmo Zhang is an Associate Professor in Computer Science and Associate Dean (Graduate Studies) in the School of Computer, Data and Mathematical Sciences at Western Sydney University. He is a leading researcher in Artificial Intelligence, working in a wide range of areas, including multi-agent systems, strategic reasoning, automated negotiation, belief revision, reasoning about action, auctions, and trading agent design. He has published around 150 papers in international journals and conferences, including top AI journals such as AIJ, AAMAS, and JAIR, and top AI conferences such as IJCAI, AAAI, and AAMAS. He has been an area chair, senior PC member, or PC member for many top AI conferences, including IJCAI, AAAI, ECAI, PRICAI, AJCAI, AAMAS, and KR. He and his research team have also received several international awards, including championships in the Trading Agent Competition and best paper awards.
The connection between substructural logics and residuated lattices is one of the most relevant results of algebraic logic. Indeed, it establishes a framework where different systems, or equivalently, classes of structures, can be both compared and studied uniformly. Among the most well-known connections between different structures in this framework surely stands Mundici’s theorem, which establishes a categorical equivalence between the algebraic category of MV-algebras and that of lattice-ordered abelian groups (abelian l-groups in what follows) with a strong order unit (an archimedean element with respect to the lattice order), with unit-preserving homomorphisms. This equivalence, connecting the equivalent algebraic semantics of infinite-valued Łukasiewicz logic (i.e., MV-algebras) with ordered groups, has been deeply investigated and also extended to more general structures.
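For reference, Mundici's equivalence is witnessed by the unit-interval functor Γ (standard formulation): for an abelian l-group G with strong unit u,

```latex
\Gamma(G, u) \;=\; \big(\,[0, u],\ \oplus,\ \lnot,\ 0\,\big),
\qquad
x \oplus y \;=\; (x + y) \wedge u,
\qquad
\lnot x \;=\; u - x
```

so that, for example, Γ(ℝ, 1) is the standard MV-algebra on [0, 1] underlying infinite-valued Łukasiewicz logic.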
Alternative algebraic approaches to Mundici’s functor have been proposed by other authors. In the present contribution we re-elaborate Rump’s work, which is inspired by Bosbach’s idea, and focuses on structures with only one implication and a constant (whereas Bosbach’s cone algebras have two implications). The key idea is to characterize which structures in this reduced signature embed in an l-group. We find conditions that are different (albeit equivalent) to the ones found by Rump, and moreover we extend some of Rump’s constructions to categorical equivalences of the algebraic categories involved.
Valeria Giustarini is a master's student at the Department of Information Engineering and Mathematics, University of Siena.
This is a specialized seminar organized by the Logic department. If you want to participate in this seminar, please contact Tommaso Flaminio <tommaso@iiia.csic.es>.
The seminar has two parts. The first will be from 10:00 to 12:00 and the second from 14:00 to 16:00.
As artificial agents become increasingly embedded in our society, we must ensure that they align with our human values, both at a level of individual interactions and system governance. However, agents must first be able to infer our values, i.e., understand how we prioritize values in different situations, both as individuals and as society. In this talk we explore how artificial agents can infer our values, while helping us reason about them. How can artificial agents understand our deepest motivations, when we are often not even aware of them?
Enrico Liscio is a PhD candidate in the Interactive Intelligence Group at TU Delft and part of the Hybrid Intelligence Centre. His research focuses on Natural Language Processing techniques to estimate human values from text. His work is part of the project to achieve high-quality online mass deliberation, creating AI-supported tools and environments aimed at transforming online conversations into more constructive and inclusive dialogues.
Properties of coercion resistance and voter verifiability refer to the existence of an appropriate strategy for the voter, the coercer, or both. One can try to specify such properties by formulae of a suitable strategic logic. However, automated verification of strategic properties is notoriously hard, and novel techniques are needed to overcome the complexity.
I will start with an overview of the relevant properties, show how they can be specified, and present some new results for model checking of strategic properties.
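Schematically, such properties can be written in an ATL-style strategic logic, where ⟨⟨A⟩⟩ reads "coalition A has a strategy to ensure"; the formulas below are simplified illustrations, not the exact formalizations from the talk:

```latex
\text{(voter verifiability)} \qquad
\langle\!\langle v \rangle\!\rangle\, \mathrm{F}\,
\big(\mathit{voted}_v \rightarrow \mathit{confirmed}_v\big)
\\[4pt]
\text{(coercion resistance)} \qquad
\lnot\,\langle\!\langle c \rangle\!\rangle\, \mathrm{F}\,
\big(\mathit{punished}_v \leftrightarrow \lnot\,\mathit{obeyed}_v\big)
```

Automated verification of such formulas is notoriously hard under imperfect information, which is what motivates the new model-checking techniques presented in the talk.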
Wojtek Jamroga is an associate professor at the Polish Academy of Sciences and a research scientist at the University of Luxembourg. His research focuses on the modeling, specification, and verification of interaction between agents. He has coauthored over 100 refereed publications and has been a Program Committee member of the most important conferences and workshops in AI and multi-agent systems. His research track record includes the Best Paper Award at the main conference on electronic voting (E-VOTE-ID) in 2016 and a Best Paper Nomination at the main multi-agent systems conference (AAMAS) in 2018.
This workshop is part of a series called "AIHUB Research Methodology Training", which offers training opportunities in research methodologies related to AI, robotics, and data science to pre- and postdoctoral researchers in training. This first workshop will explore the use of qualitative research methods in human-robot interaction research, with the aim of improving the design and understanding of the emerging socio-technical system.
Miquel Domènech is an Associate Professor of Social Psychology at the Universitat Autònoma de Barcelona. He is a founding member and coordinator of the Barcelona Science and Technology Studies Group (STS-b), a research group recognized by the Generalitat de Catalunya. His research interests fall within the field of science and technology studies, with special emphasis on topics related to the use of technology in care processes and to citizen participation in technoscientific affairs.
Núria Vallès Peris is a sociologist and researcher in the Barcelona Science and Technology Studies Group (STS-b) at the UAB, and currently a postdoctoral researcher at the Intelligent Data Science and Artificial Intelligence Research Center (IDEAI) at the UPC. Her approach is framed within science and technology studies and the philosophy of technology. Her research has focused on the ethical, political, and social controversies surrounding robotics and artificial intelligence, especially in the field of health and care. She is interested in the study of imaginaries, the design of technologies, and the processes of democratization of technoscience.
It is undeniable that more and more hard and complex procedures are being automated with the aid of artificial intelligence, leading to an era where artificial intelligence can be found in practically any system. As such, it is increasingly common for people to make decisions guided by the suggestions and recommendations of some intelligent system. As these systems support everyday decisions, they unavoidably make people curious about how they work. Thus, the need for humans to understand the rationale behind AI decisions becomes imperative.
Adequate explanations for decisions made by an intelligent system do not just help describe how the system works; they also earn users' trust. In this work we focus on a general methodology for justifying why certain teams are formed and others are not by a team formation algorithm (TFA). Specifically, we introduce an algorithm that wraps any existing TFA and builds justifications regarding the teams formed by that TFA. This is done without modifying the TFA in any way. Our algorithm offers users a collection of commonly asked questions within a team formation scenario and builds justifications as contrastive explanations. We also report on an empirical evaluation to determine the quality of the explanations provided by our algorithm.
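A hedged sketch of the wrapping idea follows: treat the TFA as a black box, search for the best outcome satisfying the constraint implied by the user's question, and contrast the scores of the two outcomes. The function names, pairs-only team structure, and brute-force foil search are all illustrative assumptions, not the algorithm from the paper.

```python
def pairings(items):
    """All ways to split an even-sized list into unordered pairs."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for j in range(len(rest)):
        for tail in pairings(rest[:j] + rest[j + 1:]):
            yield [(first, rest[j])] + tail

def justify_why_not_together(solve, score, agents, a, b):
    """Contrastive answer to 'why are a and b not teamed together?':
    compare the black-box TFA's solution with the best team structure
    that does pair a with b (brute force, so tiny instances only)."""
    actual = solve(agents)                     # the TFA, used unmodified
    rest = [x for x in agents if x not in (a, b)]
    foil = max(([(a, b)] + p for p in pairings(rest)), key=score)
    return (f"Teaming {a} with {b} scores {score(foil):.2f} at best, while "
            f"the proposed teams score {score(actual):.2f}; the TFA preferred "
            f"the higher-scoring arrangement.")

# Hypothetical usage: teams are pairs; score sums pairwise synergies.
synergy = {frozenset(p): s for p, s in {("ann", "bob"): 3.0, ("ann", "cat"): 1.0,
           ("ann", "dan"): 2.0, ("bob", "cat"): 2.5, ("bob", "dan"): 0.5,
           ("cat", "dan"): 3.5}.items()}
score = lambda teams: sum(synergy[frozenset(t)] for t in teams)
solve = lambda agents: max(pairings(list(agents)), key=score)   # stand-in TFA
print(justify_why_not_together(solve, score, ["ann", "bob", "cat", "dan"],
                               "ann", "cat"))
```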
Athina Georgara is currently a PhD candidate at the Autonomous University of Barcelona, in collaboration with the Artificial Intelligence Research Institute, under the supervision of professors Carles Sierra and Juan A. Rodríguez-Aguilar. Her PhD studies are funded by the consulting company Enzyme Advising Group, where she is employed during her studies. Athina completed her undergraduate studies and acquired a diploma degree at the School of Electrical and Computer Engineering of the Technical University of Crete, where she also acquired an MSc in Electronic and Computer Engineering under the supervision of associate professor Georgios Chalkiadakis.
Her research focuses on team formation and task allocation. She works towards automating the process of forming efficient teams and assigning them to tasks, combining findings from organisational psychology and the social sciences. Due to her prior engagement with these fields, Athina also holds an interest in Algorithmic Game Theory and Machine Learning, along with their application in Multi-agent Systems.
Dealing with the challenges of an interconnected, globalised world requires handling plurality. This is no exception when considering value-aligned intelligent systems, since the values to align with should capture this plurality. So far, most literature on value alignment has considered just a single value system. Thus, in this talk I will discuss a method for the aggregation of value systems. By exploiting recent results in the social choice literature, we formalise our aggregation problem as an optimisation problem, more concretely, as an ℓp-regression problem. Moreover, our aggregation method allows us to consider a range of ethical principles, from utilitarian (maximum utility) to egalitarian (maximum fairness).
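Schematically, writing v_1, …, v_n for the individual value systems (as preference vectors), the aggregation solves an ℓp-regression of the form below; this is a simplified rendering of the talk's formulation:

```latex
x^{*} \;=\; \arg\min_{x}\;
\Big( \sum_{i=1}^{n} \lVert x - v_i \rVert_p^{\,p} \Big)^{1/p}
```

where p = 1 yields the utilitarian aggregation (minimising total disagreement) and p → ∞ the egalitarian one (minimising the largest disagreement).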
Roger Lera finished his BSc in Physics at the University of Barcelona in 2020. He is currently a PhD student at the Artificial Intelligence Research Institute (IIIA-CSIC) in Bellaterra, Spain. His research interests are ethics & AI, Explainable AI, and combinatorial optimisation problems for real-world applications.
In this seminar, we present a class of algebras obtained by adding a normal modality to Boolean algebras of conditionals, so as to provide an algebraic setting for the logic C1 for counterfactual conditionals axiomatized by Lewis. These modal algebras, which we name “Lewis algebras”, are particular Boolean algebras with operators and, as such, admit a dual relational counterpart that we call Lewis frames. The main results show that: (1) Lewis algebras and Lewis frames provide a sound semantics for Lewis's logic C1; (2) Lewis's original sphere semantics for counterfactuals can actually be defined from Lewis frames, and hence from Lewis algebras. Finally, we will present a new logic for counterfactuals that, taking inspiration from the definition of Lewis algebras, is obtained as a modal expansion of the recently introduced logic LBC for reasoning about Boolean conditionals.
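For reference, the sphere-semantics truth condition recovered in point (2) is Lewis's standard one: with $(w) the nested system of spheres around w and [φ] the set of φ-worlds,

```latex
w \Vdash \varphi \mathbin{\Box\!\!\rightarrow} \psi
\quad\Longleftrightarrow\quad
\text{no } S \in \$(w) \text{ meets } [\varphi],
\ \text{ or }\
\exists S \in \$(w) :\ \emptyset \neq S \cap [\varphi] \subseteq [\psi]
```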
NOTE: This is a specialized seminar. If you want to attend, please contact Tommaso Flaminio (tommaso@iiia.csic.es).
I propose to peek into the possibility of using moral values as a device to harness the autonomy of artificial systems. The talk should outline the challenge of developing a theory of values that has a distinctive AI bias: its motivation, the foundational questions, the distinctive features, the potential artefacts, the methodological challenges, and the practical consequences of such a theory. Fortunately for everyone, it will not. The talk will only look into a restricted understanding of the problem of embedding values into the governance of autonomous systems. In fact, I will only pay attention to some of the obvious practical problems one needs to overcome if one intends to claim that an autonomous system is aligned with a particular set of values. Hopefully, this timid approach will reveal enough of the breadth and beauty of an artificial axiology to justify taking a closer look into it.
Pablo Noriega is a tenured scientist at the IIIA. His main research interest is the governance of open multiagent systems. This talk reflects recent collaboration with Mark d'Inverno (Goldsmiths, U. of London), Julian Padget (U. of Bath), Enric Plaza (IIIA), Harko Verhagen (Stockholm U.) and Toni Perello-Moragues.
Initially started as a project at EPFL, Switzerland, AIcrowd is a community of ~60,000 AI researchers from all over the world who come together to solve real-world problems and win cash prizes, travel grants, and co-authorships on research papers. At AIcrowd, we use competitions and benchmarks to build meaningful research communities whose members collaborate and compete to push the state of the art in Artificial Intelligence research. The long-term vision is to evolve into a giant distributed research lab that celebrates community-led research: for the community, by the community.
Sharada Mohanty is the CEO and Founder of AIcrowd, a platform for crowdsourcing Artificial Intelligence for real-world problems. His research focuses on using Artificial Intelligence for diagnosing plant diseases, teaching simulated skeletons how to walk, scheduling trains in simulated railway networks, and building AI agents that can perform complex tasks in Minecraft.
He is extremely passionate about benchmarks and building communities. He has led the design and execution of many large-scale machine learning competitions and benchmarks, such as the NeurIPS 2017: Learning to Run Challenge, NeurIPS 2018: AI for Prosthetics Challenge, NeurIPS 2018: Adversarial Vision Challenge, NeurIPS 2019: MineRL Competition, NeurIPS 2019: Disentanglement Challenge, NeurIPS 2020: Flatland Competition, NeurIPS 2020: Procgen Competition, and NeurIPS 2021: NetHack Challenge, to name a few.
During his Ph.D. at EPFL, he worked on numerous problems at the intersection of AI and health, with a strong interest in reinforcement learning. In his previous roles, he worked at the Theoretical Physics department at CERN on crowdsourcing compute for PYTHIA-powered Monte Carlo simulations, and had a brief stint at UNOSAT building GeoTag-X, a platform for crowdsourcing the analysis of media coming out of disasters to assist in disaster relief efforts. In his current role, he focuses on building better engineering tools for AI researchers and making research in AI accessible to a larger community of engineers.
With the availability of large datasets and ever-increasing computing power, there has been a growing use of data-driven Artificial Intelligence systems, which have shown their potential for successful application in diverse domains related to social platforms. However, many of these systems are not able to provide information about the rationale behind their decisions to their users. Lack of understanding of such decisions can be a major drawback, especially in critical domains such as those related to cybersecurity, of which malicious behavior in social platforms is a clear example. This phenomenon has many faces, which for instance appear in the form of bots, sock puppets, creation and dissemination of fake news, Sybil attacks, and actors hiding behind multiple identities. In this talk, we discuss HEIST (Hybrid Explainable and Interpretable Socio-Technical systems), a framework for the implementation of intelligent socio-technical systems that are explainable by design, and study an instantiation for analysis of fake news dissemination.
Dr. Maria Vanina Martinez obtained her PhD at the University of Maryland, College Park, and pursued her postdoctoral studies at Oxford University in the Information Systems Group in Artificial Intelligence (AI) and Database Theory. Currently, she is an adjunct researcher at CONICET as a member of the Institute for Research in Computer Science (ICC, UBA - CONICET) and an assistant professor at the Department of Computer Science at the University of Buenos Aires, Argentina. In 2018 she was selected by IEEE Intelligent Systems as one of the ten prominent researchers in AI to watch. In 2021 she received the National Academy of Exact, Physical and Natural Sciences Stimulus Award in the area of Engineering Sciences in Argentina. Her research is in the area of knowledge representation and reasoning, with a focus on the formalization of knowledge dynamics, the management of inconsistency and uncertainty, and the study of the ethical and social impact of the development and use of systems based on Artificial Intelligence.
She is a member of the ethics committee of the Ministry of Science and Technology, and has participated in various international events organized by, among others, UNESCO, UNIDIR, Pugwash, Sehlac (Human Security in Latin America and the Caribbean), and the Campaign to Stop Killer Robots, speaking about the benefits and challenges involved in the advancement of Artificial Intelligence.
Galaxies exhibit a wide variety of morphologies which are strongly related to their star formation histories. Having large samples of morphologically classified galaxies is fundamental to understanding their formation and evolution. In this talk, I will review my research on deep learning algorithms for the morphological classification of galaxies, which has resulted in the release of morphological catalogues for large international surveys such as SDSS, MaNGA and the Dark Energy Survey. I will describe the methodology, based on supervised learning and convolutional neural networks (CNNs). The main disadvantage of such an approach is the need for large labelled training samples, which we overcome by applying transfer learning or by 'emulating' the faint galaxy population.
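As a minimal, hypothetical illustration of the transfer-learning step (the backbone, class count, and hyperparameters below are placeholders, not the catalogues' actual pipeline): one freezes a CNN pretrained on a large generic dataset and retrains only a small classification head on the labelled galaxy sample.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical sketch: transfer learning for galaxy morphology classification.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                 # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)   # new head, e.g. early- vs late-type

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
# ...train only the head on the (small) labelled sample of galaxy cutouts...
```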
Helena Domínguez Sánchez is a research fellow astrophysicist at the Institute of Space Sciences (ICE-CSIC), trying to understand how and why the properties of galaxies have changed across the history of the Universe. In recent years, she has pioneered the use of deep learning techniques in astronomy. She did her PhD in Bologna (2009-2012) and then held several postdoc positions at UCM (Madrid), the Observatoire de Paris and the University of Pennsylvania (USA). She is currently visiting the Instituto de Astrofísica de Canarias (IAC, Tenerife) for a semester and has just accepted a tenure-track position at the Centro de Estudios de Física del Cosmos de Aragón (CEFCA, Teruel), starting September 2022.
The transistor was invented 75 years ago. In hindsight, that moment can be considered the big bang of the Information Society we live in today. The recent semiconductor crisis has shown how important chips are in our world. However, it is relatively unknown what chip-making entails. Moreover, chips come in many forms. Drawing on IMB-CNM activities, I would like to show that miniaturization and scalability make it possible not only to place chips inside computers and smartphones, but also to deploy microdevices in scenarios as demanding and as far apart as the inside of living cells and on board space missions.
Luis Fonseca has developed his scientific career at the Institute of Microelectronics of Barcelona. A physicist by training, he joined IMB-CNM in 1989 as a predoc and is today its director. His scientific interests have revolved around micro- and nanotechnologies for gas sensing and energy harvesting.
Reward is a foundation of behaviour: we move to attain valuable states. However, moving towards those states implies investing some effort and deploying motor strategies that depend heavily on the person's motivation. We performed a decision-making task in which human participants had to accumulate reward by selecting one of two reaching movements of opposite motor cost, to be performed precisely. Our results show that performance and social status were taken into consideration: participants diminished their error as a function of the partner. This was also reflected in an increased movement time between the baseline condition and any social condition. We interpret this as an adaptive trade-off between precision, reward and time. Other effects on movement amplitude became significant when the skill of the companion player was clearly unattainable, such as a reduction of the amplitude, thus escaping the traditional context of the speed-accuracy trade-off. As a context for the study of motivation and motor adaptation, we developed a model based on the optimization of movement benefits and costs. Remarkably, its predictions show that this optimization depends on the context in which the movements and choices are performed, incorporating motivation as part of its internal dynamics.
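A generic benefit/cost formulation of this kind (the functional forms below are a common choice in the motor-control literature, not necessarily the model presented in the talk) picks the movement time that maximises a temporally discounted reward minus an effort cost that grows for faster movements:

```python
import numpy as np

# Hypothetical sketch: utility of a movement of duration T as discounted
# reward minus an effort cost that increases as movements get faster.
def utility(T, reward=1.0, gamma=0.5, effort=0.2):
    return reward / (1.0 + gamma * T) - effort / T

T = np.linspace(0.2, 2.0, 500)
print(f"utility-maximising movement time: {T[np.argmax(utility(T))]:.2f} s")
```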
Ignasi Cos (Barcelona, 1973; MEng Electronics 1996 – Politecnico di Torino; MEng Telecommunications 1997 – Universitat Politècnica de Catalunya; PhD in Cognitive Science and Artificial Intelligence 2006 – University of Edinburgh). After his PhD, he trained as a postdoctoral fellow at the University of California, Berkeley, and at the University of Montreal, where he specialized in the neuroscience of motor control and decision-making. He also trained in theoretical neuroscience at the Université Pierre et Marie Curie, at the Brain and Spine Institute of Paris, and at the Universitat Pompeu Fabra. He is currently an Assistant Professor at the Faculty of Mathematics & Informatics, Universitat de Barcelona, and a member of the Institute of Mathematics (IMUB). His research focuses on developing mathematical techniques to characterize brain operation as a whole, in the context of how the brain controls movement.
Deep Neural Networks (DNNs) have achieved great success at solving numerous tasks, sometimes surpassing human performance. However, it is still not well understood how they represent data internally and what the characteristics of these representations are. In this talk we will present research that studies the internal representations of DNNs and leverages them for controlled text generation, representation learning and bias analysis.
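As a small, hypothetical example of how internal representations can be inspected (the model and layer choice below are illustrative only): a forward hook can capture a hidden layer's activations for later analysis, such as probing or bias measurements.

```python
import torch
from torchvision import models

# Hypothetical sketch: capture an intermediate layer's activations with a
# forward hook to study a network's internal representations.
model = models.resnet18(weights=None).eval()
activations = {}

def save_to(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.layer3.register_forward_hook(save_to("layer3"))
with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))
print(activations["layer3"].shape)  # torch.Size([1, 256, 14, 14])
```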
Xavier Suau holds a PhD in Computer Vision and Machine Learning from BarcelonaTech. Before that, he graduated from BarcelonaTech in Telecommunications Engineering and from Supaéro (Toulouse, France) in Aeronautics and Space Engineering. He is currently a research scientist on Apple's ML Research team, where he conducts research in representation learning and robustness. Before joining Apple, Xavier was a co-founder of the start-up Gestoos, an AI-centric company tackling human-machine interaction.
In this talk I will describe what working as a data scientist at Decathlon is like: the daily tasks we face as data scientists, the technologies used and methodologies applied, and a few interesting projects we are currently working on. If time allows, we will then jump to our latest paper, accepted with the team at King's College London, "Discovering and Interpreting Biased Concepts in Online Communities", in which I will present a data-driven method to automatically discover and help interpret biased concepts encoded in word embeddings, in the context of NLP, AI fairness and algorithmic bias.
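In the spirit of that paper (the scoring below is a generic WEAT-style association measure, not necessarily the paper's exact method), bias in word embeddings can be quantified by how much a concept's vector leans towards one attribute set versus another:

```python
import numpy as np

# Hypothetical sketch: WEAT-style association of a concept embedding with
# two attribute sets (positive -> closer to set A, negative -> set B).
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(concept, attrs_a, attrs_b):
    return (np.mean([cosine(concept, a) for a in attrs_a])
            - np.mean([cosine(concept, b) for b in attrs_b]))
```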
Xavier Ferrer Aran is a Data Scientist at Decathlon UK and a Visiting Research Associate at King's College London. He obtained his PhD in Informatics in 2017 from the Institut d'Investigació en Intel·ligència Artificial (IIIA-CSIC) and the Universitat Autonoma de Barcelona (UAB). Afterwards, he worked as a Research Associate in Digital Discrimination at the Department of Informatics at King's College London, and a year ago he joined Decathlon UK as a Data Scientist. His research interests are at the intersection of applied natural language processing, machine learning and fairness.
Ethics has, until now, consisted of a relationship of human beings among themselves and guided by themselves, under basic conditions of presence, reciprocity, discursivity and intersubjectivity, assuming in each individual the capacity to guide their own action, beyond instinct and primary interests, through the social learning of patterns of moral conduct and their application via an individual decision process based on the personal use of the faculties of feeling, reasoning, willing and reflecting, shared with all other individuals.
Computerization transforms all of these faculties and conditions of moral decision-making. We must examine to what extent, and what the main drawbacks and advantages may be in relation to what we still think of as "ethics". Or is this very notion also under revision? Science and philosophy now face the challenge of responding and pointing in some direction favourable to the interests of humanity.
Norbert Bilbeny i García is a university professor, philosopher and writer, and Professor of Ethics at the University of Barcelona. He was Dean of the Faculty of Philosophy, elected in 2011, from which position he defended a model of internationalization for Catalan research and of society-university knowledge transfer. He is currently director of the Master's in Citizenship and Human Rights, and previously directed the Master's in Immigration and Intercultural Education. His teaching career includes visiting professorships at foreign universities such as the University of Chicago, the Monterrey Institute of Technology and Higher Education, and Loyola University Chicago. He was a visiting scholar at Berkeley (School of Law), Harvard, Toronto, CNRS and Northwestern. His latest book: La enfermedad del olvido. El mal del Alzheimer y la persona. http://www.norbertbilbeny.com
The health domain has been an application area of artificial intelligence since the early years. Until very recently the use of artificial intelligence in the health domain has mostly focused on clinical data, including images, genetics, and clinical records. It has not been until recently that data-driven solutions in the health domain started to rely on patient-generated data coming from social networks, mobile and wearable devices. These include applications for classification, health outcome predictions, conversational agents, and recommender systems. This lecture will focus on the human factors and applications of artificial intelligence for empowering people living with chronic conditions. We will discuss the main technical challenges and their human factors implications for the building of actionable and trustworthy solutions that support patients, caregivers, and their clinicians.
Luís Fernández-Luque: My research focus has been on the adaptation of mobile and web technologies for patient support and public health. My scientific contributions in mobile health, which includes both mobile and wearable devices, are among the most cited and pioneering in the field, dating back to 2006. I have made substantial contributions to the creation and validation of Artificial Intelligence applications based on mobile and wearable technologies, including technologies such as deep learning and health recommender systems. My career has always been focused on the crossroads between computer science and behavioral change. I have ample experience in combining human factors research with artificial intelligence, know-how that is of crucial importance for building actionable and trustworthy solutions. My focus on human factors and data-driven applications dates back to my Ph.D. dissertation, which focused on trustworthiness aspects of information retrieval for patient education.
As Chief Scientific Officer at Adhera Health (Palo Alto, CA, USA), I oversee the implementation of our research roadmap for our digital therapeutics platform. Our evidence-based platform combines mobile technologies with artificial intelligence (recommender systems) to provide personalized patient support designed to improve the physical and mental wellbeing of people living with chronic conditions. In addition, I am a Senior Member of the IEEE Engineering in Medicine and Biology Society and Vice-President of the International Medical Informatics Association. I have over 100 publications cited in Google Scholar (https://scholar.google.com/citations?hl=en&user=N9Pdr2IAAAAJ).