Seminar

Computing Shapley Values under Distributional Uncertainty

04/Apr/2024

Speaker:

Santiago Cifuentes

Institution:

University of Buenos Aires

Language:

EN

Type:

Hybrid

Description:

One of the main problems in Explainable AI (XAI) is to explain, given a model M and an entity e, the prediction M(e) in a way that lets the user understand and interpret the algorithm's result. One of the most popular proposals for creating such explanations is to rank the features of e by their relevance to the prediction of M: the better-ranked features are those with more influence on the model's final result. Many of these feature attribution techniques are conceptually grounded in the theory of cooperative games, and this holds especially for the SHAP-score, inspired by the Shapley values. Computing the latter exactly is extremely challenging, but it has been proven that for certain families of simpler models (such as decision trees) they can be found in polynomial time. Nonetheless, there is still a caveat for practical applications: their computation relies on knowledge of the underlying distribution of the feature space, which is usually unknown. Even after (boldly) assuming feature independence and estimating the distribution from the training data set, some uncertainty remains due to statistical deviations and noise. In this talk, I'll present different problems related to reasoning about this uncertainty, alongside (hardness) complexity results.
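To make the setting concrete, here is a minimal sketch (not from the talk) of the exact computation the abstract refers to: the Shapley value of a feature for one entity, where a coalition's "value" is the model's expected output when the remaining features are drawn from an assumed independent distribution. The model, the feature probabilities `P`, and the entity `e` below are all hypothetical toy choices; the enumeration is exponential in the number of features, which is why tractability results for restricted model families matter.

```python
from itertools import combinations, product
from math import factorial

# Toy setup (hypothetical): 3 binary features with an assumed
# independent distribution, and a simple threshold model.
FEATURES = [0, 1, 2]
P = [0.5, 0.3, 0.8]  # P[i] = Pr(feature i = 1), features assumed independent

def model(x):
    # Toy model: predict 1 iff at least two features are set.
    return 1 if sum(x) >= 2 else 0

def expected_value(e, S):
    """E[model(X)] when the features in coalition S are fixed to e's
    values and the rest are drawn independently from P."""
    free = [i for i in FEATURES if i not in S]
    total = 0.0
    for bits in product([0, 1], repeat=len(free)):
        x = list(e)
        prob = 1.0
        for i, b in zip(free, bits):
            x[i] = b
            prob *= P[i] if b == 1 else 1 - P[i]
        total += prob * model(x)
    return total

def shap_score(e, i):
    """Exact Shapley value of feature i for entity e, enumerating every
    coalition S that does not contain i (exponential in |FEATURES|)."""
    n = len(FEATURES)
    others = [j for j in FEATURES if j != i]
    value = 0.0
    for k in range(n):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            value += weight * (expected_value(e, set(S) | {i})
                               - expected_value(e, set(S)))
    return value

e = (1, 1, 0)
scores = [shap_score(e, i) for i in FEATURES]
# Efficiency axiom: the scores sum to model(e) - E[model(X)].
print(scores, sum(scores), model(e) - expected_value(e, set()))
```

Note that `expected_value` is exactly where the distributional assumption enters: replacing `P` with probabilities estimated from noisy training data is the source of the uncertainty the talk is about.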

Santiago Cifuentes holds a Licentiate degree in Computer Science from the University of Buenos Aires and is currently completing his PhD in Computer Science at the same institution under the supervision of Dr. Santiago Figueira and Dr. Ariel Bendersky. Over the past three years he has been actively engaged in research projects on Knowledge Representation and Reasoning, mainly in the presence of uncertainty and in the context of graph database models. He has already contributed to this field with publications in JAIR, IJAR and AMW. During the last year he has begun to inquire into the field of Explainable AI and is especially interested in studying the tractability frontier of different explainability proposals.

His PhD concerns the foundations of Quantum Computing. His main research interest in this area is quantum complexity theory, and he is currently working as an intern in the Quantum Algorithms Research Team at the Technology Innovation Institute of Abu Dhabi. He is also interested in understanding the relation between quantum physics and randomness from a computer scientist's point of view (i.e., through martingale and Kolmogorov complexity notions).
