NL4XAI


Interactive Natural Language Technology for Explainable Artificial Intelligence

A project coordinated by IIIA.

Principal investigator:

Carles Sierra

Team members:

Collaborating organisations:

UNIVERSIDAD DE SANTIAGO DE COMPOSTELA (USC)

THE UNIVERSITY COURT OF THE UNIVERSITY OF ABERDEEN (UNIABDN)

TECHNISCHE UNIVERSITEIT DELFT (TU Delft)

CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS (CNRS)

UNIVERSITA TA MALTA (UOM)

UNIVERSITEIT UTRECHT (UU)

INSTYTUT FILOZOFII I SOCJOLOGII POLSKIEJ AKADEMII NAUK (IFIS PAN)

INDRA SOLUCIONES TECNOLOGIAS DE LA INFORMACION SL (INDRA)

UNIVERSITEIT TWENTE (UTWENTE)

Funding entity:

MSCA-ITN-ETN – European Training Networks

Funding call:

H2020-MSCA-ITN-2019

Project #:

860621

Funding amount:

2.843.888,00 €

Duration:

2019-10-01

 -

2023-09-30

According to Polanyi’s paradox, humans know more than they can explain, mainly due to the huge amount of implicit knowledge they unconsciously acquire through culture, heritage, etc. The same applies to Artificial Intelligence (AI) systems that are mainly learned automatically from data. However, in accordance with EU law, humans have a right to an explanation of decisions that affect them, regardless of who (or what AI system) makes such a decision.

In the NL4XAI project we face the challenge of making AI self-explanatory, thus contributing to translating knowledge into products and services for economic and social benefit, with the support of Explainable AI (XAI) systems. The focus of NL4XAI is on the automatic generation of interactive explanations in natural language (NL), as humans naturally produce them, and as a complement to visualization tools. The 11 Early Stage Researchers (ESRs) trained in the NL4XAI project are expected to make AI models and techniques usable even by non-expert users. In particular, all their developments will be validated by humans in specific use cases, and the main outcomes will be publicly reported and integrated into a common open-source software framework for XAI accessible to all European citizens. In addition, results to be exploited commercially will be protected through licenses or patents.

No publications yet