Seminar

Little of That Human Touch: Achieving Human-Centric Explainable AI via Argumentation

08/May/2025

Speaker:

Antonio Rago

Institution:

Imperial College London

Language:

EN

Type:

Hybrid

Description:

As data-driven AI models achieve unprecedented feats across previously unthinkable tasks, the diminishing interpretability of their increasingly complex architectures is often sidelined in favour of performance. If we are to comprehend and trust these AI models as they advance, it is clear that symbolic methods, given their unparalleled strengths in knowledge representation and reasoning, can play an important role in explaining AI models. In this talk, I discuss some of the ways in which one branch of such methods, computational argumentation, given its human-like nature, can be used to tackle this problem. I first outline a general paradigm for this area of explainable AI, before detailing a prominent methodology therein, which we have pioneered. I then illustrate how this approach has been put into practice with diverse AI models and types of explanations, before looking ahead to challenges, future work and the outlook in this field.

Antonio Rago is a postdoctoral researcher at Imperial College London, currently focused on XAI but with a number of works on computational logic and argumentation. Within XAI, his work has mostly concerned the use of techniques from symbolic AI to define, develop and evaluate explanations of black-box models, including various forms of neural networks. These works have ranged from the interactivity of explanations to providing formal robustness guarantees, and applications of his work include recommender systems, opinion polling in e-democracy, judgmental forecasting and Formula 1 race strategy. More details about Antonio and his research can be found at https://www.doc.ic.ac.uk/~afr114/