Seminars

Next Seminar

Computing Shapley Values under Distributional Uncertainty

05/Mar/2024 at 12:00

Speaker: Santiago Cifuentes
Institution: University of Buenos Aires
Language: EN
Type: Hybrid

Description:

One of the main problems in Explainable AI (XAI) is to explain, given a model M and an entity e, the prediction M(e) in such a way that the user can understand and interpret the algorithm's result. One of the most popular proposals for creating such explanations is to rank the features of e by their relevance to the prediction of M: the better-ranked features are those more influential on the final result of the model. Many of these feature attribution techniques are conceptually based on the theory of cooperative games, and this holds especially for the SHAP-score, inspired by the Shapley values. Computing the latter exactly is extremely challenging, but it has been proven that for certain families of simpler models (such as decision trees) they can be found in polynomial time. Nonetheless, a caveat remains for practical applications: their computation relies on knowledge of the underlying distribution of the feature space, which is usually unknown. Even after (boldly) assuming feature independence and sampling the distribution from the training data set, there will still be some uncertainty related to statistical deviations and noise. In this talk, I'll present different problems related to reasoning about this uncertainty, alongside (hardness) complexity results.
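To make the computational object concrete: under the (bold) feature-independence assumption mentioned above, the Shapley value of a feature is a weighted average, over coalitions of the remaining features, of the change in the model's expected output when that feature is fixed to its value in e. The following brute-force sketch is purely illustrative (exponential in the number of features); the model, entity and marginals are hypothetical toy inputs, not material from the talk:

```python
from itertools import combinations
from math import factorial

def shap_scores(model, entity, marginals):
    """Exact Shapley values of each feature of `entity`, assuming the
    features are independent with the given discrete marginals.

    model: function mapping a full feature dict to a number.
    entity: dict feature -> value for the instance being explained.
    marginals: dict feature -> list of (value, probability) pairs.
    """
    features = list(entity)
    n = len(features)

    def expected_value(fixed):
        # E[M(x) | x_S = entity_S]: expand the product distribution
        # over the features not fixed by the coalition.
        free = [f for f in features if f not in fixed]
        total = 0.0

        def rec(i, assignment, prob):
            nonlocal total
            if i == len(free):
                total += prob * model({**fixed, **assignment})
                return
            for value, p in marginals[free[i]]:
                rec(i + 1, {**assignment, free[i]: value}, prob * p)

        rec(0, {}, 1.0)
        return total

    scores = {}
    for f in features:
        others = [g for g in features if g != f]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight of a coalition of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                coalition = {g: entity[g] for g in S}
                with_f = expected_value({**coalition, f: entity[f]})
                without = expected_value(coalition)
                phi += w * (with_f - without)
        scores[f] = phi
    return scores
```

For a two-feature conjunction model with uniform binary marginals and e = (1, 1), both features receive a score of 0.375, and by the efficiency property the scores sum to M(e) minus the expected prediction.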

Santiago Cifuentes holds a Licentiate degree in Computer Science from the University of Buenos Aires, and is currently completing his PhD in Computer Science at the same institution under the supervision of Dr. Santiago Figueira and Dr. Ariel Bendersky. Over the past three years, he has been actively engaged in research projects related to Knowledge Representation and Reasoning, mainly in the presence of uncertainty and in the context of graph database models. He has already made contributions to this field with publications in JAIR, IJAR and AMW. During the last year he has started to inquire into the field of Explainable AI, and is especially interested in studying the tractability frontier for different explainability proposals.

His PhD is related to the foundations of Quantum Computing. His main research interest in this area is quantum complexity theory, and he is currently working as an intern in the Quantum Algorithms Research Team at the Technology Innovation Institute of Abu Dhabi. He is also interested in understanding the relation between Quantum Physics and randomness from a computer scientist's point of view (i.e., through martingale and Kolmogorov complexity notions).


Link to the webinar (Webinar's virtual room will be accessible 15 minutes before the announced start):
https://rediris.zoom.us/j/8031759956?pwd=ZWdRZmtmUlhWSTlvNWYva0dNYm1qdz09

Meeting ID: 803 175 9956
Passcode: iiiacsic

 

 

05/Mar/2024 at 12:00

Computing Shapley Values under Distributional Uncertainty

Santiago Cifuentes - University of Buenos Aires - EN


 
07/Mar/2024 at 12:00

Moderated Revision: Modelling what lies between Certainty and Doubt in Belief Change

Daniel Grimaldi - University of Buenos Aires - EN

In the study of symbolic reasoning, there are various formalisms to represent and model the dynamics of knowledge, among which AGM stands out, introducing belief contraction and revision operators. These operators model the basic decision-making of an agent upon receiving a new observation. When revision is applied, the agent believes in the new observation, or equivalently, disbelieves its negation. Meanwhile, contraction makes the new observation uncertain: if originally the agent consistently believed in the observation, then neither it nor its negation is believed after the contraction. Revision can thus be seen as an operator seeking certainty (either belief or disbelief), and contraction as an operator that pursues doubt (an unsettled state).
In this presentation, we introduce a new family of operators for belief change called moderated revision. This proposal aims to incorporate a feature of balance between certainty and complete doubt. We propose this as a more adequate and general model for belief change, where revision and contraction represent extreme situations. It encompasses a diverse family tree with over 20 subfamilies, including known operators, some adaptations, and novel properties. Among them, wise and imprudent operator families emerge, reflecting responses based on the strength of the new observation: wise operators tend to doubt strong affirmations, while imprudent operators tend towards unwarranted certainty.
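The contrast between revision (seeking certainty) and contraction (pursuing doubt) can be illustrated with a toy possible-worlds sketch. This is a minimal full-meet-style rendering of the two classical AGM operators for finite propositional settings, written for illustration only; it does not implement the moderated-revision operators introduced in the talk:

```python
# A world is a truth assignment (here: the set of true atoms); a belief
# state is the set of worlds the agent considers possible.  The agent
# believes a sentence iff it holds in every possible world.

def believes(state, phi):
    return all(phi(w) for w in state)

def revise(state, phi, worlds):
    """Full-meet-style revision: keep the possible worlds satisfying phi;
    if none exist, accept phi by moving to all phi-worlds."""
    kept = {w for w in state if phi(w)}
    return kept if kept else {w for w in worlds if phi(w)}

def contract(state, phi, worlds):
    """Full-meet contraction: if phi is believed, add the not-phi worlds,
    so that afterwards neither phi nor its negation is believed."""
    if not believes(state, phi):
        return state
    return state | {w for w in worlds if not phi(w)}
```

Starting from an agent that believes both p and q, contracting by p leaves the agent undecided about p, while revising by not-p makes the agent believe not-p: the two extreme situations between which moderated revision interpolates.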

Daniel Grimaldi holds a Licentiate degree in Mathematical Sciences from the University of Buenos Aires and is currently completing his PhD in Computer Science at the same institution under the supervision of Prof. Dr. Vanina Martinez and Prof. Dr. Ricardo Rodriguez. Over the past four years, he has been actively engaged in research with the Logic, Language, and Computability Research Group, in addition to participating in seminars with the Buenos Aires Logic Group. This interdisciplinary experience has equipped him with a versatile skill set ideal for research in Belief Change Theory and Knowledge Representation and Reasoning. He has already made contributions to these fields, with publications in the IJCAI and KR conferences, as well as in the IJAR journal. Currently, he serves as a full-time teaching assistant in the Department of Computer Science and as a researcher in training at the Institute of Computer Sciences of UBA/CONICET.

 


 

 
12/Mar/2024 at 12:00

Trust - The foundation for human-AI interaction

Dr. Annika Bush - Research Center Trustworthy Data Science and Security, Dortmund, Germany - EN

Trust is a fundamental aspect of human relationships. It is based on an interplay of different concepts, such as reliance, confidence, and belief in the reliability and integrity of a person or system. Trustworthiness, therefore, encompasses qualities such as honesty, consistency, and competence, which convey a sense of security and certainty in every interaction.

In the interaction between humans and artificial intelligence, trust plays a central role in shaping the dynamics between individuals and artificial intelligence systems. In this context, it is about the reliability of AI in terms of expected performance, the transparency of its processes, and the ethical considerations that guide its decision-making processes. Just as trust is crucial in interpersonal relationships, it is equally important in fostering positive interactions between humans and AI.

In this talk, I will present the concepts of trust and trustworthiness in combination with AI and showcase some research projects on human-AI interaction with regard to diverse topics such as sustainability, education, and medicine.


 
29/Jan/2024

FREE CONSTRUCTIONS IN HOOPS VIA l-GROUPS - Part 3

Valeria Giustarini - IIIA-CSIC - EN

We use the previous results to characterize finitely generated projective algebras in the two varieties, which turn out to be exactly the finitely presented algebras. From the point of view of the associated logics, via Ghilardi's algebraic approach to unification problems [3], this implies that their unification type is (strongly) unitary: there is always a best solution to a unification problem, and it is represented algebraically by the identity homomorphism; this parallels the case of product algebras and DLMV-algebras studied in [1]. The study of unification problems is strongly connected to the study of admissible rules (or, in the algebraic setting, admissible quasiequations); a rule is said to be admissible in a logic if every substitution that makes the premises a theorem of the logic also makes the conclusion a theorem of the logic.


As a consequence of our results, we get that the logics associated to both product hoops and DLW-hoops are structurally complete, i.e. the admissibility of rules coincides with their derivability; using results in [2], we can actually conclude that the two logics are universally complete, that is, admissibility coincides with derivability also for multiple-conclusion rules.


References
[1] Aglianò, P., Ugolini, S.: Projectivity and unification in substructural logics of generalized rotations. International Journal of Approximate Reasoning 153, 172–192 (2023).
[2] Aglianò, P., Ugolini, S.: Structural and universal completeness in algebra and logic. Submitted (2023). arXiv:2309.14151
[3] Ghilardi, S.: Unification through projectivity. J. Logic Comput. 7, 733–752 (1997).


 
10/Jan/2024

FREE CONSTRUCTIONS IN HOOPS VIA l-GROUPS - Part 2

Valeria Giustarini - IIIA-CSIC - EN

The main aim of this part is to deepen the understanding of the variety of product algebras P and the variety DLMV generated by perfect MV-algebras, investigating in particular the role of the falsum constant 0. As one of the main outcomes of this work, we go back from hoops to the corresponding 0-bounded varieties, and we exhibit the free functor from the varieties of hoops of interest to the corresponding 0-bounded varieties. In other words, we show a construction that freely adds the falsum constant 0: starting from a product hoop (or a DLW-hoop) we obtain the product algebra (DLMV-algebra) freely generated by it.
The construction for DLW-hoops is shown to coincide with the MV-closure introduced in [1].

References
[1] Abad, M., Castano, D., Varela, J.: MV-closures of Wajsberg hoops and applications. Algebra Universalis 64, 213–230 (2010).


 
18/Dec/2023

FREE CONSTRUCTIONS IN HOOPS VIA l-GROUPS - Part 1

Valeria Giustarini - IIIA-CSIC - EN

Lattice-ordered abelian groups, or abelian l-groups in what follows, are categorically equivalent to two classes of 0-bounded hoops that are relevant in the realm of the equivalent algebraic semantics of many-valued logics: liftings of cancellative hoops and perfect MV-algebras. The former generate the variety of product algebras, and the latter the subvariety of MV-algebras generated by perfect MV-algebras, that we shall call DLMV. In this seminar we focus on these two varieties and their relation to the structures obtained by forgetting the falsum constant 0, i.e., product hoops and DLW-hoops.


A first main result is a characterization of the free algebras over an arbitrary set of generators in the two varieties of product and DLW-hoops; the latter are obtained as particular subreducts of the corresponding free algebras in the 0-bounded varieties. More precisely, we obtain a representation in terms of weak Boolean products of which we characterize the factors. This kind of description for 0-bounded residuated lattices is present in the literature, but we are not aware of analogous results for varieties of residuated structures with just the constant 1.


We observe that in a variety that is the equivalent algebraic semantics of a logic, free (finitely generated) algebras are isomorphic to the Lindenbaum-Tarski algebras of formulas of the logic; thus their study is important from both the perspective of algebra and logic.


 
05/Dec/2023

The Giga initiative at UNICEF

Dohyung Kim, PhD - Giga initiative, UNICEF - EN

Giga is a collaborative project between UNICEF and the ITU, focusing on bringing internet connectivity to educational institutions. A critical need for the team is accurate data on school locations. Traditional methods of collecting this data are resource-intensive and challenging, especially in remote areas. To address these challenges, we are advancing techniques that combine high-resolution satellite imagery with computer vision to enhance data gathering. While these methods have shown success in specific countries, such as Sudan, further advancements are needed to create scalable solutions.

Dr. Dohyung Kim serves as the Data Science Lead for the Giga initiative at UNICEF, where he spearheads the integration of machine learning (ML) and earth observation data. He holds a PhD in Geographical Sciences from the University of Maryland, specializing in remote sensing for global forest monitoring. Prior to his role at UNICEF, Dr. Kim contributed his expertise as a postdoctoral researcher at NASA and has also worked with various international organizations, including the United Nations Environment Programme (UNEP) and the World Bank.


 
16/Nov/2023

Exploring Artificial Intelligence: My Journey as a Researcher in Chile, Awarded Projects, and the Current State of AI

Carola Andrea Figueroa Flores - Universidad del Bío-Bío - SP

During this talk, I will recount my personal experience as a researcher in the field of AI, highlighting scientific and applied research projects, background on my university's graduate programs, and ways in which institutional collaborations could be established. I will also analyze the evolution of Artificial Intelligence in Chile and its current situation, paying special attention to the challenges, such as regulation, and the opportunities before us in this constantly changing field.

Carola Andrea Figueroa Flores is an Assistant Professor A in the Department of Computer Science and Information Technologies at the Universidad del Bío-Bío. She has played an active role as a researcher, director, and co-investigator in various interdisciplinary, application-oriented projects funded by Chile's National Research and Development Agency (ANID). Her academic background includes a PhD in Computer Science from the Universidad Autónoma de Barcelona, a Master's in Computer Science from the Universidad de Concepción, and a Licentiate in Computer Science and a Civil Engineering degree in Informatics from the Universidad del Bío-Bío. Her research focuses on applying Artificial Intelligence to a range of social challenges, covering areas such as health, agriculture, the environment, transport, and finance, and using techniques such as machine learning, computer vision, natural language processing, data science, and pattern recognition. Her research results have been published in leading international scientific journals. She has also taken part in numerous national and international conferences, as well as panel discussions and talks to communicate the impact and opportunities of AI to the community and hospitals of the Bío-Bío and Ñuble regions. In addition, she is part of major initiatives on forest fire prevention and control in the Bío-Bío Region, as well as of the working group on Artificial Intelligence in the Senate and the library of the National Congress, actively contributing to public policy making in this field.


 
11/Oct/2023

Quantum federated learning (part II)

Rocco Ballester Benito
 - 
Enzyme
 - 
EN

Federated Learning (FL) is a recent technique that emerged to handle both the huge amounts of training data needed by machine learning algorithms and the privacy concerns raised by such models. Simultaneously, the field of Quantum Computing (QC) has grown exponentially, and quantum properties such as entanglement and superposition have proven more efficient for certain machine learning tasks, giving rise to the field known as Quantum Machine Learning (QML). Thus, a handful of articles have recently studied a possible Quantum Federated Learning (QFL) framework. This paper presents an exhaustive survey of the topic and aims to fill the gap in the literature in a comprehensive way. Moreover, it offers an original taxonomy of the field and outlines future challenges and remarks.

 
10/Oct/2023

Quantum federated learning (part I)

Rocco Ballester Benito
 - 
Enzyme
 - 
EN

Federated Learning (FL) is a recent technique that emerged to handle both the huge amounts of training data needed by machine learning algorithms and the privacy concerns raised by such models. Simultaneously, the field of Quantum Computing (QC) has grown exponentially, and quantum properties such as entanglement and superposition have proven more efficient for certain machine learning tasks, giving rise to the field known as Quantum Machine Learning (QML). Thus, a handful of articles have recently studied a possible Quantum Federated Learning (QFL) framework. This paper presents an exhaustive survey of the topic and aims to fill the gap in the literature in a comprehensive way. Moreover, it offers an original taxonomy of the field and outlines future challenges and remarks.

 
20/Sep/2023

Free constructions for product hoops

Valeria Giustarini
 - 
University of Siena
 - 
EN

Product logic is, together with Łukasiewicz logic and Gödel logic, one of the fundamental logics in Hájek's framework of fuzzy logics arising from a continuous t-norm. In algebraic terms, this means that product algebras are one of the most relevant subvarieties of BL-algebras. From the algebraic perspective, representations of product algebras have mostly highlighted their connection with cancellative hoops; however, not much is known about the relation between product algebras and the variety of residuated lattices that constitutes their 0-free subreducts: product hoops. This contribution focuses on two main results: 1) we prove that product hoops coincide with the class of maximal filters of product algebras, seen as residuated lattices; 2) we show a construction that, given any product hoop H, yields a product algebra freely generated by H; in terms of the corresponding algebraic categories, we exhibit the free functor, i.e. the left adjoint to the forgetful functor from product algebras to product hoops.
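In standard categorical notation (writing PA for the category of product algebras and PH for that of product hoops; these abbreviations are ours, chosen for this summary), the adjunction in result 2) reads:

    F : PH -> PA,   F ⊣ U,   Hom_PA(F(H), A) ≅ Hom_PH(H, U(A)),

where U is the forgetful functor from product algebras to product hoops and F(H) is the product algebra freely generated by the product hoop H.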

 
18/Sep/2023

Weak-Ensconcement: defining a new non-prioritized contraction operator

Alejandro Joaquin Mercado
 - 
Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales
 - 
EN

In this talk I will present the results of my undergraduate thesis on Belief Revision. I will introduce weak-ensconcement: a non-prioritized, order-based constructive framework for building a contraction operator. This operator characterizes an interesting family of shielded base contractions. In turn, this characterization induces a class of AGM contractions satisfying certain postulates. Finally, I will show a connection between the class of contractions given by the weak ensconcement and other kinds of belief base contraction operators. In doing so, I will also point out a flaw I discovered in the original theorems linking epistemic entrenchment with ensconcement (which are well established in the literature), and then provide two possible solutions.

Alejandro Joaquin Mercado: "I am 23 years old and I recently obtained my undergraduate degree in Computer Science at the University of Buenos Aires. Last year I won a scholarship to learn about Belief Revision and write my final thesis on that subject under the supervision of Prof. Ricardo Rodriguez. During this research I obtained intriguing results, which we submitted as a paper to KR, where it was accepted for the conference's main track. Currently, I am looking to start a PhD. My current interest is in Safe and Explainable AI. I am curious about combining my formal background in logic and formal proof with machine learning algorithms in order to build more explainable models".

 
27/Jun/2023

New Computational Social Sciences program at BSC

Mercè Crosas
 - 
Barcelona Supercomputing Center (BSC)
 - 
EN

The Barcelona Supercomputing Center (BSC) has recently started a new program on Computational Social Sciences that aims to create and support new projects combining social science research with the analysis of large amounts of data or with artificial intelligence, which usually requires high-performance computing. This talk will present the new program and its objectives.

Mercè Crosas is a researcher at the Barcelona Supercomputing Center (BSC) and an expert in data science, data management, open data, and FAIR data (Findable, Accessible, Interoperable, Reusable). Since early 2023, Crosas has been Head of the Computational Social Sciences Program at the BSC, a new program that aims to facilitate the use of data and computing in social science and humanities research, and to create and support computational studies in these areas.

Before this position, Crosas was the Secretary of Open Government at the Generalitat de Catalunya from 2021 to 2022, a high-ranking government position responsible for open data, transparency, and citizen participation. Most of her professional career has been spent at Harvard University, eventually as Chief Data Science and Technology Officer at the Institute for Quantitative Social Science and University Research Data Management Officer. She has also worked on the development of data systems in biotechnology companies and has conducted research and built scientific software in astrophysics at the Harvard-Smithsonian Center for Astrophysics. Crosas holds a doctorate in Astrophysics from Rice University and a degree in Physics from the University of Barcelona.

 
20/Jun/2023

A Logical System for Reasoning with Scientific Hypotheses

Esther Anna Corsi
 - 
Department of Philosophy, University of Milan, Italy
 - 
EN

We introduce and investigate a family of non-monotonic consequence relations which are motivated by the goal of capturing important patterns of scientific inference.


Esther Anna Corsi: I am a PostDoc in the Logic Group of the Department of Philosophy at the University of Milan. I obtained my PhD in Computer Science at TU Wien, in the Theory and Logic Group of the Institute of Computer Languages, under the supervision of Chris Fermüller.

 
13/Jun/2023

Videogames that reason to foster spatial skills

Dr.-Ing. Zoe Falomir Llansola
 - 
Universitat Jaume I
 - 
EN

The challenge is to propose approaches that can solve the spatial tests used to measure human intelligence. On the one hand, we can apply these approaches in smart systems (i.e. computer games, robots) so that they can improve their spatial thinking. On the other hand, we can use these approaches to help improve humans' spatial thinking by providing them with useful feedback. I will present and discuss two cases: (1) the cube rotation test and (2) the paper folding-and-punching test.

Zoe Falomir Llansola: Currently, I am a Ramón y Cajal fellow at Universitat Jaume I (UJI), Castellón, Spain. Before that, I was a postdoc researcher for 7 years at the Spatial Cognition Center at the University of Bremen, Germany, where I was principal investigator in projects bridging the sensory-semantic gap. I am a doctor engineer in computer science. I obtained my joint PhD title from UJI, Spain (Dr.) and from the University of Bremen (Dr.-Ing.). I also carried out research transfer to industry at Cognitive Robots SL, where I applied results of my PhD thesis to the automation of mosaic assembling, and I received the Castellón City Award for Experimental Sciences and Technology for this work. At the moment I am developing reasoning algorithms to solve spatial reasoning challenges and testing them in videogames that can be used to train people's skills. We intend to transfer these applications to educational institutions in the near future. My research expertise lies in Qualitative Reasoning, Knowledge Representation techniques, Human-Machine Interaction, Machine Learning, Colour Cognition, Bioinformatics, Geographic Information Systems, and Creative and Spatial Problem Solving.

 
06/Jun/2023

Nicolás Copérnico: biography, contribution, impact

Pedro Meseguer
 - 
IIIA-CSIC
 - 
SP

Nicolás Copérnico (Nicolaus Copernicus) was the astronomer who replaced the geocentric conception with the heliocentric one: the Sun was at the centre and the planets, the Earth among them, revolved around it. This year marks the 550th anniversary of his birth, a good occasion to remember him. In this spirit, I propose a review of Copernicus's biography, with emphasis on his education and his caution in publishing the new planetary model. Naturally, I will dwell on his momentous scientific contribution and its impact on later astronomers (Kepler, Galileo) up to Newton. Finally, I will review the impact this change had on the conception of the world in people's minds.

Pedro Meseguer is a CSIC research scientist at the IIIA. He holds degrees in physics and in computer science, and received his PhD from the UPC in 1992. His research has focused on constraint reasoning and heuristic search, topics on which he has published extensively. He has participated in several research projects and has carried out editorial duties for specialised journals. In parallel with his research, he has undertaken teaching, PhD supervision, and service to the AI community. He is a EurAI fellow.

 
30/May/2023

Paper assignment in large conferences: algorithms and experiences

Francisco Cruz
 - 
IJCAI, Brainful Labs
 - 
EN

This presentation provides an overview of paper assignment processes in the context of large conferences. We will review the algorithms and techniques employed to compute assignments in recent editions of selected conferences, highlighting both their effectiveness and their problems. Furthermore, we will discuss the various threats we aim to mitigate in this process, while also exploring the key factors contributing to reviewer satisfaction and looking at instances of dissatisfaction. Finally, we will present some ideas that can help the assignment process in the future.

Francisco Cruz: "I had the opportunity to join the IIIA-CSIC staff back in the year 2000. During my years at the IIIA-CSIC I started to collaborate with different organizations, such as IJCAI in 2011, where I started with a small role. Nowadays I am fully employed by IJCAI and have two companies providing different services to large conference organizations such as IJCAI, AAAI, and ECAI. Throughout the past year, my focus has been on developing software tailored to assist conference organizers in streamlining their review processes. We aim to enhance the efficiency and effectiveness of the review process."

 
23/May/2023

Challenges and Opportunities of Asynchronous Multi-Agent Reinforcement Learning

Enrico Marchesini
 - 
Northeastern University
 - 
EN

Real-world setups pose significant challenges for modern Deep Reinforcement Learning algorithms: agents struggle to explore high-dimensional environments and must provably guarantee safe behaviors under partial information to operate in our society. In addition, multiple agents (or humans) must learn to interact while acting asynchronously through temporally extended actions. I will present our work on fostering diversified exploration and safety in real domains of interest. We tackle these problems from different angles, such as (i) using Evolutionary Algorithms as a natural way to foster diversity; (ii) leveraging Formal Verification to characterize policies' decision-making and designing novel safety metrics to optimize; (iii) designing macro-action-based algorithms to learn coordination among asynchronous agents.

Enrico Marchesini is a Postdoctoral research associate in the Khoury College of Computer Sciences at Northeastern University, advised by Christopher Amato. He completed his Ph.D. in Computer Science at the University of Verona (Italy), advised by Alessandro Farinelli. His research interests lie in topics that can foster real-world applications of Deep Reinforcement Learning. For this reason, he is designing novel algorithms for multi-agent systems while promoting efficient exploration and safety in asynchronous setups.

 
16/May/2023

Cooperative Control of Environmental Extremes: A Multi-Agent Reinforcement Learning Study on Collaboration and Human Biases

Martí Sánchez-Fibla
 - 
UPF, IIIA (CSIC)
 - 
EN

In this talk, we explore the emergence of collaboration in multiagent reinforcement learning (MARL) through various examples, highlighting the role of human biases such as loss aversion and their impact on collaborative dynamics. I will then focus on a recent paper co-authored by Ricard Sole and Clement Moulin-Frier (https://arxiv.org/abs/2212.02395), in which we model adaptive dynamics in complex ecosystems. Building on the Forest-Fire model, the study uses fire as an external, fluctuating force in a simulated environment. Agents must balance tree harvesting and fire avoidance, resulting in the evolution of an ecological engineering strategy that optimally maintains biomass while suppressing large fires. We will discuss the implications of these findings for AI management of complex ecosystems, emphasizing the potential benefits of incorporating MARL and collaboration strategies into environmental management and conservation efforts.

Martí Sánchez-Fibla is a Tenure Track researcher at UPF and has recently been granted a research scientist position at IIIA, CSIC. He is currently leading an industrial research project under red.es (http://red.es/) with the company BMAT, and he was previously principal investigator of the Plan Nacional project INSOCO. His research focuses on Constraint Optimization (inference and search algorithms for problem solving), NeuroRobotics (cognitive architectures, sensorimotor learning, adaptability), and Complex Systems.

 
18/Apr/2023

Simulation-based Bayesian Optimization

Roi Naveiro
 - 
CUNEF Universidad
 - 
EN

In this presentation, I will provide a concise overview of my current research focused on molecular design in the small data regime. Specifically, I will discuss the importance of developing reliable predictive models that effectively quantify uncertainty, which is crucial for effective molecular design. One promising technique for achieving this goal is active learning through Bayesian Optimization. This will motivate the second part of my talk, where I will introduce a novel simulation-based approach for Bayesian Optimization, which has the potential to improve the efficacy of molecular design. I will analyze the convergence issues related to this approach and present empirical evidence of its effectiveness.
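As a rough illustration of the active-learning loop the abstract refers to, here is a minimal Bayesian Optimization sketch on a toy 1-D problem (the objective function, RBF length-scale, grid, and evaluation budget are all illustrative choices of ours, not taken from the talk):

```python
import numpy as np
from math import erf

# Toy 1-D objective standing in for an expensive black-box evaluation
# (e.g. a molecular property score); purely illustrative.
def objective(x):
    return -(x - 0.6) ** 2 + 0.1 * np.sin(20 * x)

def rbf(a, b, length=0.1):
    # Squared-exponential kernel between two 1-D point sets.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_query, jitter=1e-6):
    # Zero-mean Gaussian-process posterior mean and std. dev. on a grid.
    K_inv = np.linalg.inv(rbf(x_obs, x_obs) + jitter * np.eye(len(x_obs)))
    Ks = rbf(x_obs, x_query)
    mu = Ks.T @ K_inv @ y_obs
    var = 1.0 - np.einsum('ij,jk,ki->i', Ks.T, K_inv, Ks)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    # EI acquisition: expected amount by which each candidate beats `best`.
    z = (mu - best) / sigma
    Phi = 0.5 * (1.0 + np.array([erf(v / np.sqrt(2.0)) for v in z]))
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (mu - best) * Phi + sigma * phi

grid = np.linspace(0.0, 1.0, 201)
x_obs = np.array([0.05, 0.5, 0.95])   # small initial design
y_obs = objective(x_obs)
for _ in range(10):                   # 10 further "expensive" evaluations
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y_obs.max()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best_x = x_obs[np.argmax(y_obs)]
```

The surrogate's posterior uncertainty is exactly what drives the acquisition step here, which is the role that reliable uncertainty quantification plays in the design loop described above.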

Roi Naveiro currently holds a Tenure Track Assistant Professor position at CUNEF Universidad. He holds a BSc in Physics from the University of Salamanca, an MSc in Theoretical Physics, and a PhD in Statistics and Operations Research from the Complutense University of Madrid. His work focuses on probabilistic machine learning, Bayesian statistics and decision theory, as well as their applications to problems in drug discovery and materials design, among others. Naveiro has published more than 10 articles in international journals and one book. He has participated in more than five national and European research projects, and has been principal investigator in three projects with industry. In addition, he actively collaborates with AItenea Biotech, a spin-off of the Spanish National Research Council focused on molecular design. He has been a visitor at Duke University and the Statistical and Applied Mathematical Sciences Institute (Durham, North Carolina, USA).

 


 
30/Mar/2023

Analysis of Decision-Making Tasks in Recurrent Neural Networks

Cecilia Gisele Jarne
 - 
[A] Dep. de Ciencia y Tecnología de la Univ. Nacional de Quilmes - Bernal, Buenos Aires, Argentina. [B] CONICET: Consejo Nacional de Investigaciones Científicas y Técnicas, Buenos Aires, Argentina. [C] Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark.
 - 
EN

In this talk, decision-making and temporal tasks inspired by Boolean functions are analyzed, exploring the connectivity patterns, dynamics, and biological constraints of recurrent neural networks (RNNs) after training. In Computational Neuroscience, such models focus on brain regions such as the cortex and prefrontal cortex, whose recurrent connections are associated with different cognitive tasks. Understanding the dynamics behind these models is crucial for building hypotheses about brain function and explaining experimental results.

The dynamics are analyzed through numerical simulations, and the results are classified and interpreted. The study sheds light on the multiplicity of solutions for the same task and on the link between the spectra of linearized trained networks and the dynamics of their nonlinear counterparts. The distribution of eigenvalues of the recurrent weight matrix was studied and related to the dynamics in each task. Approaches and methods based on trained networks are presented, and the importance of a software framework that facilitates testing different hypotheses and constraints is emphasized.
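The kind of spectral analysis described above can be illustrated numerically. A hedged sketch (the network size N and gain g below are invented, not taken from the talk): for a Gaussian random recurrent matrix with entry variance g²/N, the eigenvalues approximately fill a disk of radius g in the complex plane (the circular law), and eigenvalues with real part greater than 1 correspond to modes that destabilize the trivial fixed point of the linearized rate dynamics dx/dt = -x + Wx.

```python
import numpy as np

rng = np.random.default_rng(0)
N, g = 500, 1.5                               # illustrative network size and gain
W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))

eigs = np.linalg.eigvals(W)                   # spectrum of the recurrent weights
spectral_radius = np.abs(eigs).max()          # ~ g by the circular law
n_unstable = int(np.sum(eigs.real > 1.0))     # modes destabilizing x* = 0
```

Trained networks deviate from this random baseline, and relating those deviations to task dynamics is the kind of analysis the talk describes.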

Cecilia Jarne did her PhD in Physics at the IFLP and the Physics Department of the National University of La Plata, and a postdoc at the University of Buenos Aires Physics Department (IFIBA). Her research experience is based on the analysis of large data sets, programming, and modelling, first in high-energy cosmic-ray physics and then, during her postdoctoral research, analyzing bird songs and their dynamics. She has been an assistant researcher and Professor at Universidad Nacional de Quilmes and CONICET, working on Recurrent Neural Networks and Complex Systems, since 2018. During 2023 she is on a research stay at the CFIN in Aarhus, Denmark.


 
28/Mar/2023

An epistemic approach to model uncertainty in data-graphs

Nina Pardal
 - 
Department of Computer Science, University of Sheffield, UK
 - 
EN

Graph databases are becoming widely successful as data models that make it possible to effectively represent and process complex relationships among various types of data. Data-graphs are particular types of graph databases whose representation allows data values both in the paths and in the nodes to be treated as first-class citizens by the query language. As with any other type of data repository, data-graphs may suffer from errors and discrepancies with respect to the real-world data they intend to represent. In this talk, we explore the notion of probabilistic unclean data-graphs in order to capture the idea that the observed (unclean) data-graph is actually a noisy version of a clean one that correctly models the world but of which we have only partial knowledge. As the factors that produce such an observation depend heavily on the application domain and may be the result of different types of clerical errors or unintended transformations of the data, we consider an epistemic probabilistic model that describes the distribution over all possible ways in which the clean (uncertain) data-graph could have been polluted. Based on this model, we study data cleaning and probabilistic query answering for this framework and present complexity results when the transformation of the data-graph is caused by removing (subset), adding (superset), or modifying (update) nodes and edges.
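As a toy illustration of the three transformation types (the entity names are invented; the talk's formal model works over data-graphs with values on nodes and paths, simplified here to a set of labelled edges):

```python
# A tiny "clean" data-graph as a set of (source, label, target) edges.
clean = {("alice", "follows", "bob"), ("bob", "follows", "carol")}

# subset: the observed graph lost an edge of the clean one.
subset_obs = clean - {("bob", "follows", "carol")}

# superset: a spurious edge was added to the clean graph.
superset_obs = clean | {("carol", "follows", "alice")}

# update: an existing edge was modified (here, its target endpoint).
update_obs = (clean - {("bob", "follows", "carol")}) | {("bob", "follows", "alice")}

# The subset/superset observations are comparable to the clean graph.
assert subset_obs < clean and clean < superset_obs
```

The epistemic model in the talk puts a probability distribution over pollutions of this kind, and query answering is then studied with respect to that distribution.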

Dr. Nina Pardal is currently a Research Associate at the Department of Computer Science of the University of Sheffield, in the UK. She obtained a PhD in Mathematics from the University of Buenos Aires, Argentina, and a PhD in Computer Science from the University Paris-Nord, France. Her research interests lie in the areas of Graph Theory, Logic and Computability, Complexity, and Knowledge and Reasoning.

 


 

 
21/Mar/2023

Back to the future: symbolic reasoning for intelligent agents

Maria Vanina Martinez
 - 
IIIA-CSIC
 - 
EN

Much is being said about the important part that knowledge representation and reasoning models are expected to play in the future development of Intelligent Systems. In this talk, I will argue that they can serve as the foundational formal structure for the construction of such systems in complex (multi-agent) settings. I will present part of my research trajectory in the formal modelling of knowledge dynamics and in representing and reasoning with inconsistent knowledge, and show how these concepts are at the core of improving Intelligent Systems' reasoning capabilities.

Dr Maria Vanina Martinez obtained her PhD at the University of Maryland, College Park, and pursued her postdoctoral studies at Oxford University in the Information Systems Group, working in Artificial Intelligence (AI) and Database Theory. Currently, she is a Ramón y Cajal Fellow at the Artificial Intelligence Research Institute (IIIA-CSIC) in Barcelona, Spain. Her research is in the area of knowledge representation and reasoning, with a focus on the formalization of knowledge dynamics, the management of inconsistency and uncertainty, and the study of the ethical and social impact of the development and use of systems based on Artificial Intelligence.


 
29/Nov/2022

Paradigm Shift in the Engineering Design Cycle

Dr. Cihan Ates
 - 
Karlsruhe Institute of Technology (KIT), Germany
 - 
EN

Engineers have long been dealing with massive amounts of data accumulated over decades of fundamental experiments and field measurements, distilled into cleverly organized charts, tables and heuristic laws. In the last few decades, our capability to generate data has increased even further with developments in (i) digital measurement techniques, including sensing technologies, (ii) computational power, (iii) faster, easier and cheaper data transfer and storage, and (iv) post-processing tools and algorithms. On the other hand, the problems that need to be addressed today, such as food-water-energy security, pandemics and diseases, or global warming, are massive and at a completely different scale. More drastically, we have comparably much less time to find sustainable solutions. Therefore, we need a paradigm shift in how we interpret the data we collect and solve our problems, one that can speed up our hypothesis-test cycle. In this talk, we will visit some case studies relevant to the energy problem and discuss how the expertise of AI specialists can tip the scales in our favour.

Cihan is a junior research group leader at KIT, in the Multiphase Flow & Combustion group of the Institute of Thermal Turbo Machinery. With his group, he works on the design and optimization of energy-intensive processes. He is also a PI at the Graduate School Computational and Data Science and in the KIT Emerging Field of Health Technologies.


 
24/Nov/2022

Smart Traffic Control for the Era of Autonomous Driving

Associate Professor Dongmo Zhang
 - 
Western Sydney University, Australia
 - 
EN

Over the last decade, research on autonomous vehicles (AVs) has made revolutionary progress, bringing hope of safer, more convenient, and more efficient means of transportation. Most significantly, advances in artificial intelligence (AI), especially machine learning, allow a self-driving car to learn and adapt to complex road situations from millions of accumulated driving hours, far more than any experienced human driver can reach. However, autonomous vehicles on roads also introduce new challenges to traffic management, especially when they travel mixed with human-driven vehicles.

New theories for better understanding the new era of transportation, and new technologies for smart roadside infrastructures and intelligent traffic control, are crucial for the development and deployment of autonomous vehicles. This presentation will discuss some of these challenges, especially the social aspects of autonomous driving, including the interaction between autonomous vehicles and roadside infrastructures, mechanisms of traffic management, the price of anarchy in road networks, and automated negotiation between vehicles.
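The "price of anarchy" mentioned above can be made concrete with Pigou's classic two-road example (a standard textbook instance, not taken from the talk): one unit of traffic chooses between a fixed-cost road of cost 1 and a congestible road whose per-driver cost equals the fraction x of traffic using it. Selfish drivers all take the congestible road, while the social optimum splits the traffic.

```python
import numpy as np

# Total travel cost when a fraction x of one unit of traffic
# takes the congestible road (cost x) and 1-x takes the fixed road (cost 1).
def total_cost(x):
    return x * x + (1 - x) * 1.0

# Selfish equilibrium: the congestible road costs at most 1, so all traffic
# takes it (x = 1) and the total cost is 1.
eq_cost = total_cost(1.0)

# Social optimum: minimize the total cost over all possible splits.
xs = np.linspace(0.0, 1.0, 10001)
opt_cost = total_cost(xs).min()        # 0.75, attained at x = 0.5

price_of_anarchy = eq_cost / opt_cost  # 4/3, the worst case for linear costs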

Dongmo Zhang is an Associate Professor in Computer Science and Associate Dean of Graduate Studies in the School of Computer, Data and Mathematical Sciences at Western Sydney University. He is a leading researcher in Artificial Intelligence, working in a wide range of areas, including multi-agent systems, strategic reasoning, automated negotiation, belief revision, reasoning about action, auctions, and trading agent design. He has published around 150 papers in international journals and conferences, including top AI journals such as AIJ, JAAMAS and JAIR, and top AI conferences such as IJCAI, AAAI and AAMAS. He has been an area chair, senior PC member or PC member for many top AI conferences, including IJCAI, AAAI, ECAI, PRICAI, AJCAI, AAMAS and KR. He and his research team have also received several international awards, including championships of the Trading Agent Competition and best paper awards.


 
22/Nov/2022

From implicative reducts to Mundici’s functor

Valeria Giustarini
 - 
University of Siena
 - 
EN

The connection between substructural logics and residuated lattices is one of the most relevant results of algebraic logic. Indeed, it establishes a framework in which different systems, or equivalently, classes of structures, can be both compared and studied uniformly. Among the best-known connections between structures in this framework surely stands Mundici’s theorem, which establishes a categorical equivalence between the category of MV-algebras and that of lattice-ordered abelian groups (abelian l-groups in what follows) with a strong order unit (an archimedean element with respect to the lattice order) and unit-preserving homomorphisms. This equivalence, connecting the equivalent algebraic semantics of infinite-valued Łukasiewicz logic (i.e., MV-algebras) with ordered groups, has been deeply investigated and also extended to more general structures.
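For orientation, the functor in Mundici's theorem can be written explicitly: given an abelian l-group G with strong unit u, the unit interval [0, u] carries an MV-algebra structure (a standard formulation, stated here for reference):

```latex
\Gamma(G,u) \;=\; \bigl([0,u],\ \oplus,\ \neg,\ 0\bigr),
\qquad
x \oplus y = (x + y) \wedge u,
\qquad
\neg x = u - x .
```

For example, Γ(ℝ, 1) yields the standard MV-algebra on [0, 1], the algebraic semantics of infinite-valued Łukasiewicz logic.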

Alternative algebraic approaches to Mundici’s functor have been proposed by other authors. In the present contribution we re-elaborate Rump’s work, which is inspired by Bosbach’s idea, and focuses on structures with only one implication and a constant (whereas Bosbach’s cone algebras have two implications). The key idea is to characterize which structures in this reduced signature embed in an l-group. We find conditions that are different (albeit equivalent) to the ones found by Rump, and moreover we extend some of Rump’s constructions to categorical equivalences of the algebraic categories involved.

Valeria Giustarini is a Master Student at the Department of Information Engineering and Mathematics, University of Siena.

This is a specialized seminar organized by the Logic department. If you want to participate in this seminar, please contact Tommaso Flaminio <tommaso@iiia.csic.es>.

The seminar has two parts. The first will be from 10:00 to 12:00 and the second from 14:00 to 16:00.


 
22/Nov/2022

Value alignment in human-AI societies

Enrico Liscio
 - 
TU Delft
 - 
EN

As artificial agents become increasingly embedded in our society, we must ensure that they align with our human values, both at the level of individual interactions and at that of system governance. However, agents must first be able to infer our values, i.e., understand how we prioritize values in different situations, both as individuals and as a society. In this talk we explore how artificial agents can infer our values while helping us reason about them. How can artificial agents understand our deepest motivations, when we are often not even aware of them?

Enrico Liscio is a PhD candidate in the Interactive Intelligence Group at TU Delft and part of the Hybrid Intelligence Centre. His research focuses on Natural Language Processing techniques to estimate human values from text. His work is part of the project to achieve high-quality online mass deliberation, creating AI-supported tools and environments aimed at transforming online conversations into more constructive and inclusive dialogues.


 
15/Nov/2022

How to Save Democracy (or Towards Model Checking of E-Voting Protocols in Alternating-time Temporal Logic)

Wojtek Jamroga
 - 
University of Luxembourg and Polish Academy of Sciences
 - 
EN

Properties of coercion resistance and voter verifiability refer to the existence of an appropriate strategy for the voter, the coercer, or both. One can try to specify such properties by formulae of a suitable strategic logic. However, automated verification of strategic properties is notoriously hard, and novel techniques are needed to overcome the complexity.

I will start with an overview of the relevant properties, show how they can be specified, and present some new results for model checking of strategic properties.

Wojtek Jamroga is an associate professor at the Polish Academy of Sciences and a research scientist at the University of Luxembourg. His research focuses on the modeling, specification and verification of interaction between agents. He has coauthored over 100 refereed publications, and has been a Program Committee member of the most important conferences and workshops in AI and multi-agent systems. His track record includes the Best Paper Award at the main conference on electronic voting (E-VOTE-ID) in 2016 and a Best Paper Nomination at the main multi-agent systems conference (AAMAS) in 2018.


 
21/Sep/2022

Qualitative methods for studying human-robot interaction

Miquel Domènech and Núria Vallès
 - 
IRI – Instituto de Robótica e Informática Industrial
 - 
EN

This workshop is part of a series called «AIHUB Research Methodology Training», which provides training opportunities in research methodologies related to AI, robotics and data science for pre- and postdoctoral researchers in training. This first workshop will explore the use of qualitative research methods in the study of human-robot interaction, with the aim of improving the design and understanding of the emerging socio-technical system.

Miquel Domènech is an Associate Professor of Social Psychology at the Universitat Autònoma de Barcelona. He is a founding member and coordinator of the Barcelona Science and Technology Studies Group (STS-b), a research group recognized by the Generalitat de Catalunya. His research interests lie in the field of science and technology studies, with particular emphasis on topics related to the use of technology in care processes and to citizen participation in technoscientific affairs.

Núria Vallès Peris is a sociologist and a researcher in the Barcelona Science and Technology Studies Group (STS-b) at the UAB, currently a postdoctoral researcher at the Intelligent Data Science and Artificial Intelligence Research Center (IDEAI) of the UPC. Her approach is framed within science and technology studies and the philosophy of technology. Her research has focused on the ethical, political and social controversies around robotics and artificial intelligence, especially in the field of health and care. She is interested in the study of imaginaries, the design of technologies, and processes of democratization of technoscience.

 
14/Jun/2022

Building Contrastive Explanations for Multi-Agent Team Formation

Athina Georgara
 - 
IIIA-CSIC
 - 
EN

It is undeniable that more and more hard and complex procedures are being automated with the aid of artificial intelligence, leading to an era in which AI can be found in practically any system. As such, it is increasingly common for people to make decisions guided by the suggestions and recommendations of some intelligent system. As these systems support everyday decisions, they unavoidably make people curious about their functionality. Thus, the need for humans to understand the rationale behind AI decisions becomes imperative.

Adequate explanations for decisions made by an intelligent system do not just help describe how the system works; they also earn users’ trust. In this work we focus on a general methodology for justifying why certain teams are formed, and others are not, by a team formation algorithm (TFA). Specifically, we introduce an algorithm that wraps any existing TFA and builds justifications regarding the teams it forms, without modifying the TFA in any way. Our algorithm offers users a collection of commonly asked questions within a team formation scenario and builds justifications as contrastive explanations. We also report on an empirical evaluation to determine the quality of the explanations provided by our algorithm.
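The wrapper idea can be sketched in a few lines. Everything below is illustrative (a toy objective, a trivial exhaustive TFA, and a single "why not this team?" question); the talk's actual algorithm is TFA-agnostic and handles a much richer catalogue of questions:

```python
# Minimal sketch of a contrastive-explanation wrapper around a team
# formation algorithm (TFA). All names and the objective are toy
# placeholders, not the algorithm presented in the talk.

from itertools import combinations

def team_score(team, skills, required):
    # Toy TFA objective: how many required skills the team covers.
    covered = set()
    for member in team:
        covered |= skills[member]
    return len(covered & required)

def best_team(agents, skills, required, size):
    # A trivial TFA: exhaustively pick the highest-scoring team.
    return max(combinations(agents, size),
               key=lambda t: team_score(t, skills, required))

def why_not(alternative, chosen, skills, required):
    # Contrastive justification: compare the objective values of the
    # chosen team and the user's proposed alternative.
    s_chosen = team_score(chosen, skills, required)
    s_alt = team_score(alternative, skills, required)
    if s_alt < s_chosen:
        return (f"{alternative} covers {s_alt} required skills, "
                f"while {chosen} covers {s_chosen}.")
    return f"{alternative} is as good as {chosen} under this objective."

skills = {"ann": {"ml", "ux"}, "bob": {"db"}, "eve": {"ml", "db"}}
required = {"ml", "db", "ux"}
chosen = best_team(list(skills), skills, required, size=2)
print(why_not(("bob", "eve"), chosen, skills, required))
```

Note that `why_not` only queries the TFA's objective; it never inspects or modifies the TFA's internals, which mirrors the wrapper design described in the abstract.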

Athina Georgara is currently a PhD candidate at the Autonomous University of Barcelona in collaboration with the Artificial Intelligence Research Institute, under the supervision of professors Carles Sierra and Juan A. Rodríguez-Aguilar. Her PhD studies are funded by the consulting company Enzyme Advising Group, where she is employed during her studies. Athina completed her undergraduate studies and acquired a diploma degree at the School of Electrical and Computer Engineering of the Technical University of Crete, where she also obtained an M.Sc. in Electronic and Computer Engineering under the supervision of associate professor Georgios Chalkiadakis.

Her research focuses on team formation and task allocation: she works towards automating the process of forming efficient teams and assigning them to tasks, combining findings from organisational psychology and the social sciences. Due to her prior engagement with these fields, Athina is also interested in algorithmic game theory and machine learning, along with their application in multi-agent systems.

 
07/Jun/2022

Towards Pluralistic Value Alignment: Aggregating Value Systems through ℓp-Regression

Roger Xavier Lera Leri
 - 
IIIA-CSIC
 - 
EN

Dealing with the challenges of an interconnected, globalised world requires handling plurality. This is no exception when considering value-aligned intelligent systems, since the values to align with should capture this plurality. So far, most of the literature on value alignment has considered just a single value system. Thus, in this talk I will discuss a method for the aggregation of value systems. By exploiting recent results in the social choice literature, we formalise our aggregation problem as an optimisation problem, more concretely as an ℓp-regression problem. Moreover, our aggregation method allows us to consider a range of ethical principles, from utilitarian (maximum utility) to egalitarian (maximum fairness).
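As a rough numerical illustration (the value systems below are made up, not data from the talk): if a consensus value system minimizes the ℓp-norm of the stacked residuals against all agents' value systems, the classic choices of p admit well-known coordinate-wise solutions, which makes the utilitarian-to-egalitarian spectrum concrete:

```python
# Illustrative lp-regression aggregation of value systems. Each row
# is one agent's weights over three values; the consensus x minimizes
# the p-norm of the stacked residuals (x - v_i for every agent i).
# Data and framing are toy examples, not taken from the paper.

import numpy as np

value_systems = np.array([
    [0.7, 0.2, 0.1],   # agent 1: weights over (fairness, freedom, welfare)
    [0.2, 0.6, 0.2],   # agent 2
    [0.1, 0.2, 0.7],   # agent 3
])

# Coordinate-wise closed forms for three classic choices of p:
# p = 1   -> median   (utilitarian: minimizes the sum of deviations)
# p = 2   -> mean     (least squares)
# p = inf -> midrange (egalitarian: minimizes the worst deviation)
consensus_p1 = np.median(value_systems, axis=0)
consensus_p2 = value_systems.mean(axis=0)
consensus_pinf = (value_systems.min(axis=0) + value_systems.max(axis=0)) / 2

print(consensus_p1)    # utilitarian consensus
print(consensus_p2)    # least-squares consensus
print(consensus_pinf)  # egalitarian consensus
```

Intermediate values of p interpolate between these extremes; the talk's method solves the general ℓp problem rather than relying on these special-case closed forms.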

Roger Lera completed his BSc in Physics at the University of Barcelona in 2020. He is currently a PhD student at the Artificial Intelligence Research Institute (IIIA-CSIC) in Bellaterra, Spain. His research interests are ethics & AI, explainable AI, and combinatorial optimisation problems for real-world applications.

 
01/Jun/2022

Towards a logico-algebraic setting for counterfactual conditionals

Giuliano Rosella
 - 
University of Turin
 - 
EN

In this seminar we present a class of algebras obtained by adding a normal modality to Boolean algebras of conditionals, so as to provide an algebraic setting for the logic C1 for counterfactual conditionals axiomatized by Lewis. These modal algebras, which we name “Lewis algebras”, are particular Boolean algebras with operators and, as such, admit a dual relational counterpart that we call Lewis frames. The main results show that: (1) Lewis algebras and Lewis frames provide a sound semantics for Lewis's logic C1; (2) Lewis's original sphere semantics for counterfactuals can actually be defined from Lewis frames, and hence from Lewis algebras. Finally, we will present a new logic for counterfactuals that, taking inspiration from the definition of Lewis algebras, is obtained as a modal expansion of the recently introduced logic LBC for reasoning about Boolean conditionals.

 

NOTE: This is a specialized seminar. If you want to attend, please contact Tommaso Flaminio (tommaso@iiia.csic.es).

 

 
31/May/2022

Some shallow remarks on the use of values in artificial autonomous systems

Pablo Noriega
 - 
IIIA-CSIC
 - 
EN

I propose to peek into the possibility of using moral values as a device to harness the autonomy of artificial systems. The talk should outline the challenge of developing a theory of values with a distinctive AI bias: its motivation, the foundational questions, the distinctive features, the potential artefacts, the methodological challenges, and the practical consequences of such a theory. Fortunately for everyone, it will not. The talk will only look into a restricted understanding of the problem of embedding values into the governance of autonomous systems. In fact, I will only pay attention to some of the obvious practical problems one needs to overcome if one intends to claim that an autonomous system is aligned with a particular set of values. Hopefully, this timid approach will reveal enough of the breadth and beauty of an artificial axiology to justify taking a closer look into it.

 

Pablo Noriega is a tenured scientist at the IIIA. His main research interest is the governance of open multiagent systems. This talk reflects recent collaboration with Mark d'Inverno (Goldsmiths, U. of London), Julian Padget (U. of Bath), Enric Plaza (IIIA), Harko Verhagen (Stockholm U.) and Toni Perello-Moragues.

 
26/May/2022

AIcrowd: Crowdsourcing Artificial Intelligence to Solve Real-World Problems

Sharada Mohanty
 - 
AIcrowd
 - 
EN

Initially started as a project at EPFL, Switzerland, AIcrowd is a community of ~60,000 AI researchers from all over the world who come together to solve real-world problems and win cash prizes, travel grants, and co-authorship of research papers. At AIcrowd, we use competitions and benchmarks to build meaningful research communities that collaborate and compete to push the state of the art in artificial intelligence research. The long-term vision is to evolve into a giant distributed research lab that celebrates community-led research, for the community, by the community.

Sharada Mohanty is the CEO and founder of AIcrowd, a platform for crowdsourcing artificial intelligence for real-world problems. His research focuses on using artificial intelligence for diagnosing plant diseases, teaching simulated skeletons how to walk, scheduling trains in simulated railway networks, and on AI agents that can perform complex tasks in Minecraft.

He is extremely passionate about benchmarks and building communities. He has led the design and execution of many large-scale machine learning competitions and benchmarks, such as NeurIPS 2017: Learning to Run Challenge, NeurIPS 2018: AI for Prosthetics Challenge, NeurIPS 2018: Adversarial Vision Challenge, NeurIPS 2019: MineRL Competition, NeurIPS 2019: Disentanglement Challenge, NeurIPS 2020: Flatland Competition, NeurIPS 2020: Procgen Competition, NeurIPS 2021 NetHack Challenge, to name a few.

During his Ph.D. at EPFL, he worked on numerous problems at the intersection of AI and health, with a strong interest in reinforcement learning. In his previous roles, he has worked at the Theoretical Physics department at CERN on crowdsourcing compute for PYTHIA powered Monte-Carlo simulations; he has had a brief stint at UNOSAT building GeoTag-X, a platform for crowdsourcing analysis of media coming out of disasters to assist in disaster relief efforts. In his current role, he focuses on building better engineering tools for AI researchers and making research in AI accessible to a larger community of engineers.

 
17/May/2022

A Framework for XAI in Socio-Technical Systems

Maria Vanina Martinez
 - 
University of Buenos Aires, Argentina
 - 
EN

With the availability of large datasets and ever-increasing computing power, there has been a growing use of data-driven Artificial Intelligence systems, which have shown their potential for successful application in diverse domains related to social platforms. However, many of these systems are not able to provide information about the rationale behind their decisions to their users. Lack of understanding of such decisions can be a major drawback, especially in critical domains such as those related to cybersecurity, of which malicious behavior in social platforms is a clear example. This phenomenon has many faces, which for instance appear in the form of bots, sock puppets, creation and dissemination of fake news, Sybil attacks, and actors hiding behind multiple identities. In this talk, we discuss HEIST (Hybrid Explainable and Interpretable Socio-Technical systems), a framework for the implementation of intelligent socio-technical systems that are explainable by design, and study an instantiation for analysis of fake news dissemination.

 

Dr. Maria Vanina Martinez obtained her PhD at the University of Maryland, College Park, and pursued her postdoctoral studies at Oxford University in the Information Systems Group, working on Artificial Intelligence (AI) and database theory. Currently, she is an adjunct researcher at CONICET as a member of the Institute for Research in Computer Science (ICC, UBA - CONICET) and an assistant professor at the Department of Computer Science of the University of Buenos Aires, Argentina. In 2018 she was selected by IEEE Intelligent Systems as one of the ten prominent researchers in AI to watch. In 2021 she received the National Academy of Exact, Physical and Natural Sciences Stimulus Award in the area of Engineering Sciences in Argentina. Her research is in the area of knowledge representation and reasoning, with a focus on the formalization of knowledge dynamics, the management of inconsistency and uncertainty, and the study of the ethical and social impact of the development and use of systems based on artificial intelligence.

She is a member of the ethics committee of the Ministry of Science and Technology, has participated in various international events organized, among others, by UNESCO, UNIDIR, Pugwash, Sehlac (Human Security in Latin America and the Caribbean), the Campaign to stop killer robots, speaking about the benefits and challenges involved in the advancement of Artificial Intelligence.


 
10/May/2022

Morphological classification of galaxies with deep learning

Helena Domínguez Sánchez
 - 
Institute of Space Sciences (ICE-CSIC)
 - 
EN

Galaxies exhibit a wide variety of morphologies which are strongly related to their star formation histories. Having large samples of morphologically classified galaxies is fundamental to understanding their formation and evolution. In this talk, I will review my research on deep learning algorithms for the morphological classification of galaxies, which has resulted in the release of morphological catalogues for large international surveys such as SDSS, MaNGA and the Dark Energy Survey. I will describe the methodology, based on supervised learning and convolutional neural networks (CNNs). The main disadvantage of such an approach is the need for large labelled training samples, which we overcome by applying transfer learning or by ‘emulating’ the faint galaxy population.
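The frozen-backbone pattern behind transfer learning can be shown in a toy sketch. The actual work trains deep CNNs on galaxy images; below, a fixed linear projection (with one "pretrained" filter aligned with the discriminative direction, standing in for features learned on a source task) replaces the frozen backbone, and only a small logistic-regression head is trained on synthetic data:

```python
# Toy illustration of transfer learning: keep a "pretrained" feature
# extractor frozen and retrain only a small classification head on the
# new (scarce) labelled sample. Synthetic data; a fixed projection
# stands in for the frozen CNN backbone described in the talk.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": label depends on one discriminative direction.
n, d = 200, 64
X = rng.normal(size=(n, d))
direction = np.ones(d) / np.sqrt(d)
y = (X @ direction > 0).astype(float)

# Frozen backbone: mostly random filters, but one filter aligned with
# the signal, as if pretrained on a related task. It is never updated.
W_frozen = rng.normal(size=(d, 16)) * 0.05
W_frozen[:, 0] = direction
features = X @ W_frozen

# Trainable head: logistic regression on the frozen features.
w = np.zeros(16)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
    w -= lr * features.T @ (p - y) / n   # gradient of logistic loss
    b -= lr * np.mean(p - y)

accuracy = np.mean((features @ w + b > 0) == (y == 1))
print(f"train accuracy: {accuracy:.2f}")
```

Because the backbone already encodes a useful feature, the tiny head learns the task from few labels and parameters, which is the same economy that transfer learning buys when labelled galaxy samples are scarce.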

Helena Domínguez Sánchez is a research fellow astrophysicist at the Institute of Space Sciences (ICE-CSIC) trying to understand how and why the properties of galaxies have changed across the history of the Universe. In recent years, she has pioneered the use of deep learning techniques in astronomy. She did her PhD in Bologna (2009-2012) and then held several postdoc positions at UCM (Madrid), the Paris Observatory and the University of Pennsylvania (USA). She is currently visiting the Instituto de Astrofísica de Canarias (IAC, Tenerife) for a semester and has just accepted a tenure-track position at the Centro de Estudios de Física del Cosmos de Aragón (CEFCA, Teruel), starting September 2022.


 
03/May/2022

All chips great and small

Luis Fonseca
 - 
Institute of Microelectronics of Barcelona (IMB-CNM, CSIC)
 - 
EN

75 years ago the transistor was invented. In hindsight, that moment can be considered the big bang of the information society we live in today. The recent semiconductor crisis has shown how important chips are in our world; however, it is relatively unknown what chip-making entails. Moreover, chips come in many forms. Drawing on IMB-CNM activities, I would like to show that miniaturization and scalability make it possible not only to place chips inside computers and smartphones, but also to deploy microdevices in scenarios as demanding and as far apart as the inside of living cells and on board space missions.

Luis Fonseca has developed his scientific career at the Institute of Microelectronics of Barcelona. A physicist by training, he joined IMB-CNM in 1989 as a predoc and is today its director. His scientific interests have revolved around micro- and nanotechnologies for gas sensing and energy harvesting.


 
26/Apr/2022

The Influence of Social Motivation on Decision-Making between Precision Movements: A Computational Approach

Ignasi Cos
 - 
Universitat de Barcelona
 - 
EN

Reward is a foundation of behaviour: we move to attain valuable states. However, moving towards those states implies investing some effort and deploying motor strategies that depend very much on the person's motivation. We performed a decision-making task in which human participants had to accumulate reward by selecting one of two reaching movements of opposite motor cost, to be performed precisely. Our results show that performance and social status were taken into consideration: error diminished as a function of the partner. This also translated into an increased movement time between the baseline condition and any social condition. We interpret this as an adaptive trade-off between precision, reward and time. Other effects on movement amplitude became significant when the skill of the companion player was clearly unattainable, such as a reduction of the amplitude, thus escaping the traditional context of the speed-accuracy trade-off. As a context for the study of motivation and motor adaptation, we developed a model based on the optimization of movement benefits and costs. Remarkably, its predictions show that this optimization depends on the context in which the movements and choices are performed, incorporating motivation as part of its internal dynamics.

Ignasi Cos (Barcelona, 1973; MEng Electronics 1996 – Politecnico di Torino; MEng Telecommunications 1997 – Universitat Politècnica de Catalunya; PhD in Cognitive Science and Artificial Intelligence 2006 – University of Edinburgh). After his PhD, he trained as a postdoctoral fellow at the University of California, Berkeley, and at the University of Montreal, where he specialized in the neuroscience of motor control and decision-making. He also trained in theoretical neuroscience at the Université Pierre et Marie Curie, at the Brain and Spine Institute in Paris, and at the Universitat Pompeu Fabra. He is currently an Assistant Professor at the Faculty of Mathematics & Informatics, Universitat de Barcelona, and a member of the Institute of Mathematics (IMUB). His research focuses on developing mathematical techniques to characterize brain operation as a whole, in the context of how the brain controls movement.

Reward is a foundation of behaviour: we move to attain valuable states. However, moving towards those states implies investing some effort and deploying motor strategies that are very much dependent on the person’s motivation. We performed a decision-making task in with human participants had to accumulate reward by selecting one of two reaching movements of opposite motor cost, to be performed precisely. Our results show that performance and social status were taken into consideration by diminishing error as a function of the partner. This also transpired into an increase movement time between the baseline condition and any social condition. We interpret this as an adaptive process of trade-off between precision, reward and time. Other effects on the movement amplitude became significant when the skill of the companion player was clearly unattainable, such as a reduction of the amplitude, thus escaping the traditional context of the speed-accuracy trade-off. As a context for the study of motivation and motor adaptation we developed a model based on movement benefit and costs optimization. Remarkably, its predictions show that this optimization depends on the context where the movements and the choices are performed, incorporating motivation as part of its internal dynamics.


 
05/Apr/2022

Deep Neural Network Introspection

Xavier Suau - Apple - EN

Deep Neural Networks (DNNs) have achieved great success at solving numerous tasks, sometimes surpassing human performance. However, it is still not well understood how they represent data internally and what the characteristics of these representations are. In this talk we will present research works that study internal representations of DNNs and leverage them for controlled text generation, representation learning and bias analysis.

Xavier Suau holds a PhD in Computer Vision and Machine Learning from BarcelonaTech. Before that, he graduated from BarcelonaTech in Telecommunications Engineering and from Supaéro (Toulouse, France) in Aeronautics and Space Engineering. He is currently a research scientist at Apple's ML Research team, where he conducts research in ML representation learning and robustness. Before joining Apple, Xavier was a co-founder of the start-up Gestoos, an AI centric company tackling human-machine interaction.


 
29/Mar/2022

Being a Data Scientist at DecathlonUK while Discovering and Interpreting Biased Concepts in Online Communities at King's College London

Xavier Ferrer Aran - Decathlon UK – King’s College London - EN

In this talk I will describe what working as a data scientist at Decathlon is like: the daily tasks we face as data scientists, the technologies used and the methodologies applied. I will also present a few interesting projects we are currently working on. If time allows, we will then jump to our latest paper accepted with the team at King's College London, "Discovering and Interpreting Biased Concepts in Online Communities", in which I will present a data-driven method to automatically discover and help interpret biased concepts encoded in word embeddings, in the context of NLP, AI fairness and algorithmic bias.
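Bias in word embeddings is commonly quantified with association scores between a concept vector and two attribute word sets (in the spirit of WEAT-style tests). The sketch below is a generic illustration with hand-made 2-D toy vectors, not the method from the paper; `pleasant`, `unpleasant` and `concept` are placeholder stand-ins for learned embeddings.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word_vec, attr_a, attr_b):
    """Mean similarity to attribute set A minus attribute set B
    (a WEAT-style score; positive means closer to A)."""
    return (np.mean([cosine(word_vec, a) for a in attr_a])
            - np.mean([cosine(word_vec, b) for b in attr_b]))

# Toy 2-D vectors standing in for learned embeddings.
pleasant   = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]
unpleasant = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]
concept    = np.array([0.8, 0.3])  # hypothetical community-specific term

score = association(concept, pleasant, unpleasant)  # > 0: leans "pleasant"
```

On real embeddings trained on a community's text, systematically signed scores for a family of concept words are one signal of an encoded bias.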

Xavier Ferrer Aran is a Data Scientist at Decathlon UK and Visiting Research Associate at King's College London. He obtained his PhD in Informatics in 2017 from the Institut d'Investigació en Intel·ligència Artificial (IIIA-CSIC) and the Universitat Autonoma de Barcelona (UAB). Afterwards, he worked as a Research Associate in Digital Discrimination at the Department of Informatics at King's College London, and a year ago he joined Decathlon UK as a Data Scientist. His research interests are at the intersection of applied natural language processing, machine learning and fairness.


 
22/Mar/2022

Computerization and Ethics: A Debate on Current Challenges

Norbert Bilbeny Garcia - Universitat de Barcelona - CA

Until now, ethics has consisted of a relationship among human beings, guided by themselves, under basic conditions of presence, reciprocity, discursivity and intersubjectivity. It assumes in each individual the capacity to guide their own action, beyond instinct and primary interests, through the social learning of patterns of moral conduct and their application via an individual decision process based on the personal use of the faculties of feeling, reasoning, willing and reflecting, shared with all other individuals.

Computerization transforms all of these faculties and conditions of moral decision-making. We must examine to what extent, and what the main drawbacks and advantages may be in relation to what we still think of as "ethics". Or is that very notion also under revision? Science and philosophy now face the challenge of responding and pointing in a direction favourable to the interests of humanity.

Norbert Bilbeny i García is a university professor, philosopher and writer, and Professor of Ethics at the Universitat de Barcelona. He was Dean of the Faculty of Philosophy, elected in 2011, from which position he championed a model of internationalization of Catalan research and university-society transfer. He is currently director of the Master's in Citizenship and Human Rights, and was formerly director of the Master's in Immigration and Intercultural Education. His teaching career includes visiting professorships at foreign universities such as the University of Chicago, the Instituto Tecnológico y de Estudios Superiores de Monterrey and Loyola University Chicago. He was a visiting scholar at Berkeley (School of Law), Harvard, Toronto, CNRS and Northwestern. Latest book: La enfermedad del olvido. El mal del Alzheimer y la persona. http://www.norbertbilbeny.com


 
15/Mar/2022

Challenges and opportunities of artificial intelligence for empowering people living with chronic conditions

Luís Fernández-Luque - Adhera Health Inc. - EN

The health domain has been an application area of artificial intelligence since the field's early years. Until very recently, the use of artificial intelligence in health has mostly focused on clinical data, including images, genetics, and clinical records. Only recently have data-driven solutions in the health domain started to rely on patient-generated data coming from social networks, mobile and wearable devices. These include applications for classification, health outcome prediction, conversational agents, and recommender systems. This lecture will focus on the human factors and applications of artificial intelligence for empowering people living with chronic conditions. We will discuss the main technical challenges and their human-factors implications for building actionable and trustworthy solutions that support patients, caregivers, and their clinicians.

Luís Fernández-Luque: My research focus has been on the adaptation of mobile and web technologies for patient support and public health. My scientific contributions in mobile health, which includes both mobile and wearable devices, are among the most cited and pioneering in the field, dating back to the year 2006. I have made substantial contributions to the creation and validation of artificial intelligence applications based on mobile and wearable technologies, including technologies such as deep learning and health recommender systems. My career has always focused on the crossroads between computer science and behavioral change. I have ample experience in combining human factors research with artificial intelligence; that know-how is of crucial importance for the successful completion of the two aims of the project. My focus on human factors and data-driven applications dates back to my Ph.D. dissertation, which focused on trustworthiness aspects of information retrieval for patient education.

As Chief Scientific Officer at Adhera Health (Palo Alto, CA, USA), I oversee the implementation of our research roadmap for our digital therapeutics platform. Our evidence-based platform combines mobile technologies with artificial intelligence (recommender systems) to provide personalized patient support designed to improve the physical and mental wellbeing of people living with chronic conditions. In addition, I am a senior member of the IEEE Engineering in Medicine and Biology Society and Vice-President of the International Medical Informatics Association. I have over 100 publications cited in Google Scholar (https://scholar.google.com/citations?hl=en&user=N9Pdr2IAAAAJ).


 
08/Mar/2022

Blockchains, DAOs, and fractal governance: a preliminary overview

Marta Poblet Balcell - RMIT University - Graduate School of Business and Law - EN

In the last few years, blockchain technologies have fuelled the emergence of DAOs (Decentralised Autonomous Organisations) as socio-technical systems pursuing a variety of goals: decentralising finance, raising funds, creating guilds, promoting cultural and artistic initiatives, etc. Developments in this space have also gone hand in hand with innovations in governance technologies (on-chain and off-chain voting systems, legal smart contracts, coordination tools, etc.). This presentation will provide a preliminary overview of how DAOs are deploying governance mechanisms aiming at progressive decentralisation and autonomy, while also considering the limitations and challenges that these systems are grappling with as they claim their space in Web3.

Marta Poblet Balcell is a Professor at RMIT University’s Graduate School of Business and Law. She is one of the co-founders of the Institute of Law and Technology at the Autonomous University of Barcelona and a former researcher at ICREA (Catalonia). Marta holds a JSD in law (Stanford University 2002) and a Master in International Legal Studies (Stanford University 2000). Her research interests cut across many disciplines, including political science, law, technology and sociology. She is also interested in the connections between technology developments (AI, blockchain, human computer interaction) and different theories of democracy and citizenship. Her particular area of interest is in how technologies can provide outcomes for citizens in the areas of justice, security, privacy, disaster relief, or emergency management.


 
01/Mar/2022

Evolution as a principle for engineering photosynthesis

Ivan Reyna-Llorens - Centre for Research in Agricultural Genomics (CRAG) - EN

As the world population continues to expand, it is predicted that crop yields will have to increase by 50% over the next 35 years. Traditional breeding programs cannot keep pace with the current population growth rate. One of the main determinants of crop yield is the capacity of the plant to harvest light and convert it into sugars through photosynthesis. Despite this, photosynthesis improvement is still underexploited as a route to increasing yield. Plants have evolved a wide variety of photosynthesis flavours, some of them more efficient than others. This provides an "evolutionary guide" for engineering some of these traits into target crops like rice. In this talk I will describe some of the strategies we use to improve photosynthesis and some of the problems we face where interaction with researchers in artificial intelligence could prove beneficial.

Ivan Reyna-Llorens received a Ph.D in Plant Sciences from the University of Cambridge in 2016 using evolution as a guide to improve agricultural traits. He then worked as a postdoctoral fellow in the same University looking to develop methods for studying plant genomes. In 2021 he started the synthetic biology and photosynthesis group as a Junior Group Leader at CRAG focusing on understanding how global re-arrangements of gene regulatory networks have shaped the evolution of photosynthesis in plants, more specifically the adaptation of the photosynthetic machinery to different light conditions.  


 
22/Feb/2022

From Transfer Learning to Continual Learning

Joost van de Weijer - Computer Vision Center - EN

One of the major assets of deep neural networks is that when trained on large datasets (source data), their knowledge can be transferred to small datasets (target data). Transfer learning for deep neural networks can be performed simply by finetuning the network on the new data. In this talk, I will introduce the research field of continual learning, where the aim is not only to adapt to the target data but also to keep the performance on the original source data. In addition, during adaptation to the target, the learner no longer has access to the source data. This process can be repeated over a sequence of tasks that are learned one at a time, and the aim is for the learner to perform well on all previous tasks at the end of the training process. The main challenge for continual learning is catastrophic forgetting, where the learner suffers a significant drop in performance on previous tasks. I will discuss a number of strategies to prevent catastrophic forgetting and will explain several methods developed in our group to address this problem.
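The sequential-tasks setup and catastrophic forgetting can be illustrated numerically. The sketch below uses two toy linear-regression "tasks" and a crude quadratic penalty pulling the new weights toward the old ones (loosely in the spirit of regularization-based methods such as EWC). The tasks, the penalty strength `lam`, and the helper names are made up for illustration; this is not a method from the speaker's group.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w, n=200):
    """A toy regression task: 2-D Gaussian inputs, exact linear targets."""
    X = rng.normal(size=(n, 2))
    return X, X @ true_w

Xa, ya = make_task(np.array([1.0, 0.0]))  # task A
Xb, yb = make_task(np.array([0.0, 1.0]))  # task B (conflicting weights)

def fit(X, y, anchor=None, lam=0.0):
    """Least squares; optionally penalize distance to anchor weights
    (a stand-in for regularization-based continual learning)."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    b = X.T @ y + (lam * anchor if anchor is not None else 0.0)
    return np.linalg.solve(A, b)

w_a = fit(Xa, ya)                            # learn task A first
w_naive = fit(Xb, yb)                        # finetune on B, no constraint
w_reg = fit(Xb, yb, anchor=w_a, lam=50.0)    # stay close to task-A weights

err = lambda w, X, y: float(np.mean((X @ w - y) ** 2))
# err(w_naive, Xa, ya) is large (forgetting); err(w_reg, Xa, ya) is smaller.
```

The penalty trades some task-B accuracy for retained task-A performance, which is the basic tension continual learning methods try to manage.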

Joost van de Weijer is a Senior Scientist at the Computer Vision Center and leader of the Learning and Machine Perception (LAMP) group. He received his Ph.D. degree in 2005 from the University of Amsterdam. From 2005 to 2007, he was a Marie Curie Intra-European Fellow in the LEAR Team, INRIA Rhone-Alpes, France. From 2008 to 2012, he was a Ramon y Cajal Fellow at the Universidad Autonoma de Barcelona. He has served as an area chair for the main computer vision and machine learning conferences (CVPR, ICCV, ECCV, NeurIPS). His main research interests include active learning, continual learning, transfer learning, domain adaptation, and generative models.

 


 

 
01/Feb/2022

Science in Narrative

Pedro Meseguer - IIIA - SP

Science is increasingly present in our society, and citizens demand truthful, accessible scientific information, far from the hermetic languages of specialists. In this context, narrative emerges as a magnificent medium for popularizing science. Taking that as our starting point, in this seminar we will describe four common ways of introducing science into narrative. Along the way, we will discuss an equal number of novels, by no means dated, from the second half of the twentieth century (including the splendid Cien años de soledad) and from the twenty-first, that contain significant scientific elements. We will see the role that science, in the hands of notable writers, plays in each of these works, and we will conclude with some speculation on science in the narrative of the coming decades.

Pedro Meseguer is a CSIC researcher at the IIIA. Over his research career he has carried out numerous activities (publications in conferences and specialized journals, supervision of doctoral theses, editorial work, teaching), to which he adds a recent interest in science popularization, where he has promoted or developed various events and materials: a comic about the Watson system for secondary-school students, presentations at the Festa de la Ciència of the Ajuntament de BCN, several outreach videos, and contributions to scientific blogs. For some time he has been drawn to narrative; he has published a novel and has contributed to several written media. He currently carries out evaluation activities for the Agencia Estatal de Investigación and teaches in the AI degree at the UAB (and also in the new course "Bojos per la IA", which the IIIA coordinates within the framework of the Fundació Catalunya La Pedrera), all combined with outreach proposals and work.


 
25/Jan/2022

The Mathematical Characterization of Brain States

Ignasi Cos - Universitat de Barcelona - EN

How useful would it be to attain, in a straightforward manner, a formal, mathematical characterization of the neuro-dynamics of individual patients affected by stroke, Parkinson's Disease or other neuro-degenerative disorders? Beyond the obvious clinical application, the answer to that question depends on attaining accurate models of the brain, on how general their predictions are, and on how adaptable they are to the clinical context, in particular to single-patient medical practice. Current tools for brain state characterization have made remarkable progress in the past ten years, yielding mathematical techniques and gradually amassing a huge amount of knowledge about brain structure and dynamics during resting state and the performance of specific tasks. Pending further research to perfect them, these techniques promise both a deeper, more formal understanding of brain function and a reliable tool for the clinical diagnosis of neuro-degenerative disorders.

Ignasi Cos (Barcelona, 1973; MEng Electronics 1996 – Politecnico di Torino, MEng Telecommunications 1997 – Universitat Politècnica de Catalunya; PhD in Cognitive Science and Artificial Intelligence 2006 – University of Edinburgh). After his PhD, he trained as a postdoctoral fellow at the University of California, Berkeley, and at the University of Montreal, where he specialized in the neuroscience of motor control and decision-making. He also trained in theoretical neuroscience at the Université Pierre et Marie Curie, at the Brain and Spine Institute of Paris, and at the Universitat Pompeu Fabra. He is currently an Assistant Professor at the Faculty of Mathematics & Informatics, Universitat de Barcelona, and a member of the Institute of Mathematics (IMUB). His research focuses on developing mathematical techniques to characterize brain operation as a whole, in the context of how the brain controls movement.


 
18/Jan/2022

PROTON-T seen as an experience of stakeholder-driven cooperative modeling

Mario Paolucci - IRPPS-CNR, Italy - EN

We developed a model of recruitment to terrorism based on Social Structure Social Learning Theory (SSSL), Routine Activities Theory (RAT) and Situational Action Theory (SAT). Our experiments, grounded in real-world data, were enacted in a prototypical European city borough. After introducing the model and its main results, we critically discuss the decisions taken from the point of view of calibration vs. arbitrary decisions, theory vs. mechanisms, and the cost/benefit of modeling choices.

Mario Paolucci is a Senior Researcher with the Italian National Research Council (CNR). Mario received a degree in Physics from the Sapienza University of Rome, under the supervision of Antonio Degasperis, and a PhD from the University of Florence, carrying out research activity with Rosaria Conte and Cristiano Castelfranchi. Mario has been co-PI, together with Giulia Andrighetto, of the Laboratory of Agent-based Social Simulation (LABSS), located at ISTC. He has been scientific coordinator and PI of EC projects (eRep, FuturICT 2.0 (https://futurict2.eu/)). He is also the author of about 100 scientific publications, among them a monograph on Reputation written with Rosaria Conte, and articles in peer-reviewed journals such as Advances in Complex Systems, Scientometrics, and the International Journal of Approximate Reasoning.


 
14/Dec/2021

Clausal forms for Vague information processing (ClaVa, an MSCA-IF project)

Amanda Vidal - IIIA - EN

In this seminar, we will describe the topic, objectives and previous work concerning an MSCA project starting in October 2021 at the IIIA. The project focuses on the study of clausal-form systems for real- and rational-valued events, addressing the questions of their general definition, usage and solvable problems (SAT and optimality) from the point of view of their complexity, algorithmic design and applicability. The approach is based on the formal study of restricted classes of formulas in substructural and many-valued logics that ideally have good expressive power but are simpler than the full logical systems, and on the analysis of their computational behavior. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101027914.
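As a toy illustration of the clausal setting the project generalises, the sketch below gives a brute-force satisfiability check for the classical Boolean (two-valued) special case; the DIMACS-style clause encoding and the function name are our own illustrative choices, not part of the project:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Return a satisfying assignment for a CNF formula, or None.

    Each clause is a list of nonzero ints (DIMACS style):
    k stands for "variable k is true", -k for "variable k is false".
    """
    for bits in product([False, True], repeat=n_vars):
        assignment = {v + 1: b for v, b in enumerate(bits)}
        # A CNF formula holds iff every clause has at least one true literal.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3) is satisfiable...
print(brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3))
# ...while (x1) and (not x1) is not.
print(brute_force_sat([[1], [-1]], 1))  # None
```

The exhaustive search is exponential in the number of variables; part of the project's motivation, as described above, is precisely to identify restricted clausal classes where such problems become tractable.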

Amanda Vidal graduated in both Mathematics and Computer Science in 2010 at the Autonomous University of Madrid, and obtained her Master's degree (2012) and her PhD in Pure and Applied Logics at the UB and the IIIA-CSIC (under the supervision of F. Bou, F. Esteva and L. Godo) in 2015. Afterwards, she spent 4 years on different postdoctoral fellowships at the Institute of Computer Science of the Czech Academy of Sciences. Since October 2021, she is an MSCA fellow under the supervision of F. Manyà, back at the IIIA-CSIC.


 
30/Nov/2021

Time, Technology, and Globalization. A study of the role of technology in processes of modernization and globalization using the Press, Big Data, and Computational Research Methodologies (GLOTECH)

Elena Fernandez - Department of Computational Linguistics, University of Zurich - EN

The EU-funded GLOTECH project comprises a study of the role of technology in processes of modernisation and globalisation using the press, big data and computational research methods. It will explore the role of technology as a factor in time standardisation in Western industrialised societies, as a booster of cultural homogenisation and, as a consequence, as an agent of modernisation and globalisation. The analysis will focus on the press in European countries, the United Kingdom, and the United States. The methodology will include different computational research methods, contributing to significant advances in digital humanities and computational social sciences. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie (MSC) grant agreement No 101024996.

Elena Fernandez is a Marie Curie Post-Doctoral researcher based at the Department of Computational Linguistics, University of Zurich, and the Principal Investigator of GLOTECH. From 2019 to 2021, she was a Eurotech Post-Doctoral Fellow and the Principal Investigator of PRESSTECH. She completed a PhD in Hispanic Languages and Literatures at the University of California, Berkeley (2019), an M.A. in Spanish Studies at the University of Virginia (2013), and a B.A. in English Philology at the University of Salamanca (2011). Her research lies at the intersection of Computational Social Science, Digital Humanities, and Media and Communication Studies.


 
23/Nov/2021

Estimating Context-Specific Values from Natural Language

Enrico Liscio - TU Delft - EN

Values are the abstract motivations that justify opinions and actions. The pursuit of values drives human behavior and promotes cooperation. Existing research is focused on general (e.g., Schwartz) values that transcend contexts. However, the context-specific nature of values must be considered to (1) understand human decisions, and (2) engineer intelligent agents that can elicit human values and take value-aligned actions. Further, in practical applications (e.g., to conduct meaningful conversations or to identify online trends), artificial agents should be able to understand values on the fly from natural language.

We outline an approach for estimating context-specific values from text. At first, the values relevant to a context must be identified. To this end, we propose Axies, a hybrid (human and AI) methodology to identify context-specific values. Then, we examine the effectiveness of NLP models in classifying values in text. As context influences how we express values in natural language, we investigate the extent to which the learned value rhetoric can be transferred across contexts. Subsequently, we propose explainability techniques to inspect whether value classifiers have learned the context-specific connotations of values. Finally, we combine the steps above into a single method for swiftly estimating context-specific values from users.

Enrico Liscio (https://enricoliscio.github.io) is a PhD candidate in the Interactive Intelligence Group at TU Delft and part of the Hybrid Intelligence Centre. He obtained an MSc. in Systems and Control (cum laude) from TU Delft (the Netherlands, 2017) and a BSc. in Automation Engineering (cum laude) from the University of Bologna (Italy, 2015). Between his MSc. studies and his current position, he worked for 2.5 years as a deep learning developer and technical project lead at Fizyr (the Netherlands).


 
16/Nov/2021

Agent and Multi-agent Techniques for Resource Allocation and Scheduling in Earth Observation Satellite Constellations

Gauthier Picard - ONERA/DTIS, Université de Toulouse - EN

In this presentation, we discuss the use of Agent and Multi-agent techniques in Space systems. We first identify some AI research challenges related to satellite constellations, especially concerning Earth Observation applications. These challenges range from constellation design to on-board in-space decision-making, and raise opportunities for investigation ranging from Multi-agent based Simulation to Distributed Problem Solving, by way of Machine Learning and Game Theory. We then focus on case studies related to constellation resource allocation and scheduling. The first case study concerns the allocation of exclusive orbit slots to privileged constellation users. In this problem, the constellation operator aims at allocating the resources (orbit slots) as optimally and fairly as possible, prior to any scheduling, using only some simple requirements from clients. This problem is long-term, over horizons of several months. We explore here the use of utilitarian and leximin-optimal techniques. The second case study investigates how distributed and coordinated decision techniques can be used to schedule observation tasks over such exclusive orbit portions, so that exclusive users do not disclose their own agendas. This problem is short-term, over horizons of a few hours. Here, we make use of distributed constraint optimization and sequential auctions to distribute decisions over the set of exclusive users.
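The leximin criterion mentioned above can be sketched in a few lines: an allocation is leximin-preferred when, comparing the agents' utility vectors sorted in increasing order, the worst-off agent is better off, with ties broken by the second worst-off, and so on. The utilities below are invented for illustration and are not taken from the talk's actual allocation model:

```python
def leximin_key(utilities):
    # Sort utilities in increasing order: leximin compares the
    # worst-off agent first, then the second worst-off, and so on.
    return tuple(sorted(utilities))

def leximin_best(candidate_utility_vectors):
    # The leximin-optimal candidate is the one whose sorted utility
    # vector is lexicographically largest.
    return max(candidate_utility_vectors, key=leximin_key)

# Three hypothetical orbit-slot allocations, with the utility each of
# three users derives from them:
candidates = [(5, 0, 1), (3, 1, 2), (2, 2, 2)]
print(leximin_best(candidates))  # (2, 2, 2): best for the worst-off user
```

Note how leximin diverges from the utilitarian criterion: (5, 0, 1) has the highest total utility, yet (2, 2, 2) wins because no user is left with nothing.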

Gauthier Picard received a Ph.D. in Computer Science from the University of Toulouse in 2004, and the Habilitation degree in Computer Science from the University of Saint-Etienne in 2014. He was an Associate Professor and then a Full Professor in Computer Science at MINES Saint-Etienne, before taking a Senior Researcher position at ONERA, the French Aerospace Lab. His research focuses on cooperation and adaptation in multi-agent systems and distributed optimization, with applications to aircraft design, ambient intelligence, intelligent transport and space operations.


 
09/Nov/2021

Social media analysis and crowd-sourcing for disaster management

José Luis Fernández Marquez - University of Geneva - EN

Increased access to mobile phones and social media networks has changed the way people report and respond to disasters. Community-driven initiatives such as the Stand By Task Force (SBTF) or GISCorps have shown great potential by crowdsourcing the acquisition, analysis, and geolocation of social media data for disaster responders. To make social media information suitable for emergency responders, these initiatives face two main challenges: (1) most social media content, such as photos and videos, is not geolocated, which prevents the information from being used by emergency responders; and (2) they lack tools to manage and aggregate volunteers' contributions in order to ensure high-quality and reliable results.
This seminar illustrates Crowd4EMS, a crowdsourcing platform developed under the EU project E2mC: Evolution of Emergency Copernicus services. Crowd4EMS combines automatic methods for gathering information from social media with crowdsourcing techniques in order to manage and aggregate volunteers' contributions and to ensure reliable results for emergency responders in disaster management.

Dr. Jose Luis Fernandez-Marquez is a Senior Lecturer at the University of Geneva (UNIGE) and head of the Geneva-Tsinghua Initiative Accelerator. He has a computer science background, a PhD in collective artificial intelligence, and wide experience in Citizen Science. In 2011 he joined UNIGE after his PhD defence at the Artificial Intelligence Research Institute (IIIA-CSIC). In 2014, he formally joined the Citizen Cyberlab, a partnership between UNIGE, CERN, and the United Nations Institute for Training and Research (UNITAR) aimed at encouraging citizens and scientists to collaborate in new ways to solve big challenges. Since 2019, he has been the technical coordinator of the Crowd4SDG EU project, which focuses on demonstrating the potential of Citizen Science for monitoring and achieving the SDGs.

His current research focuses on citizen science data quality analysis and on methodologies to make citizen science data suitable for decision and policy makers.
