NL4XAI: Interactive Natural Language Technology for Explainable Artificial Intelligence

A Project coordinated by IIIA.

Web page:

Principal investigator: 

Collaborating organisations:


UNIVERSIDAD DE SANTIAGO DE COMPOSTELA (USC)

THE UNIVERSITY COURT OF THE UNIVERSITY OF ABERDEEN (UNIABDN)

TECHNISCHE UNIVERSITEIT DELFT (TU Delft)

CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS (CNRS)

UNIVERSITA TA MALTA (UOM)

UNIVERSITEIT UTRECHT (UU)

INSTYTUT FILOZOFII I SOCJOLOGII POLSKIEJ AKADEMII NAUK (IFIS PAN)

INDRA SOLUCIONES TECNOLOGIAS DE LA INFORMACION SL (INDRA)

UNIVERSITEIT TWENTE (UTWENTE)

Funding entity:

MSCA-ITN-ETN – European Training Networks

Funding call:

H2020-MSCA-ITN-2019

Project #:

860621

Total funding amount:

2.843.888,00€

IIIA funding amount:

250.904,88€

Duration:

01/Oct/2019 – 30/Sep/2024

Extension date:

According to Polanyi’s paradox, humans know more than they can explain, mainly due to the huge amount of implicit knowledge they unconsciously acquire through culture, heritage, etc. The same applies to Artificial Intelligence (AI) systems that are mainly learned automatically from data. However, in accordance with EU law, humans have a right to an explanation of decisions affecting them, no matter who (or what AI system) makes such a decision.

In the NL4XAI project we face the challenge of making AI self-explanatory, thus contributing to translating knowledge into products and services for economic and social benefit, with the support of Explainable AI (XAI) systems. The focus of NL4XAI is on the automatic generation of interactive explanations in natural language (NL), as humans naturally do, as a complement to visualization tools. The 11 Early Stage Researchers (ESRs) trained in the NL4XAI project are expected to facilitate the use of AI models and techniques even by non-expert users. All their developments will be validated by humans in specific use cases, and the main outcomes will be publicly reported and integrated into a common open-source software framework for XAI, accessible to all European citizens. In addition, results to be exploited commercially will be protected through licenses or patents.


2023
Eduardo Calò & Jordi Levy (2023). General Boolean Formula Minimization with QBF Solvers. In Ismael Sanz, Raquel Ros, & Jordi Nin (Eds.), Artificial Intelligence Research and Development - Proceedings of the 25th International Conference of the Catalan Association for Artificial Intelligence, CCIA 2023, Món Sant Benet, Spain, 25-27 October 2023 (pp. 347--358). IOS Press. https://doi.org/10.3233/FAIA230705.
Eduardo Calò, Jordi Levy, Albert Gatt, & Kees van Deemter (2023). Is Shortest Always Best? The Role of Brevity in Logic-to-Text Generation. In Alexis Palmer & José Camacho-Collados (Eds.), Proceedings of the 12th Joint Conference on Lexical and Computational Semantics, *SEM@ACL 2023, Toronto, Canada, July 13-14, 2023 (pp. 180--192). Association for Computational Linguistics. https://doi.org/10.18653/V1/2023.STARSEM-1.17.
Eduardo Calò
PhD Student
Jordi Levy
Tenured Scientist
Phone Ext. 431860

Ramon Lopez de Mantaras
Adjunct Professor Ad Honorem
Carles Sierra
Research Professor
Phone Ext. 431801