A Project coordinated by IIIA.
Web page:
Principal investigator:
Collaborating organisations:
UNIVERSIDAD DE SANTIAGO DE COMPOSTELA (USC)
THE UNIVERSITY COURT OF THE UNIVERSITY OF ABERDEEN (UNIABDN)
TECHNISCHE UNIVERSITEIT DELFT (TU Delft)
CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS (CNRS)
UNIVE...
Funding entity:
Funding call:
Project #:
Total funding amount:
IIIA funding amount:
Duration:
Extension date:
According to Polanyi’s paradox, humans know more than they can explain, mainly due to the vast amount of implicit knowledge they unconsciously acquire through culture, heritage, etc. The same applies to Artificial Intelligence (AI) systems that are mainly learnt automatically from data. However, in accordance with EU law, humans have a right to an explanation of decisions affecting them, no matter who (or what AI system) makes such a decision.
In the NL4XAI project we face the challenge of making AI self-explanatory, thus contributing to translating knowledge into products and services for economic and social benefit, with the support of Explainable AI (XAI) systems. The focus of NL4XAI is on the automatic generation of interactive explanations in natural language (NL), as humans naturally do, as a complement to visualization tools. As a result, the 11 Early Stage Researchers (ESRs) trained in the NL4XAI project are expected to make AI models and techniques usable even by non-expert users. All their developments will be validated by humans in specific use cases, and the main outcomes will be publicly reported and integrated into a common open-source software framework for XAI that will be accessible to all European citizens. In addition, results to be exploited commercially will be protected through licenses or patents.