A new NL4XAI event involving CiTIUS gathers numerous experts from academia and big tech companies this week in France

On 19 April, CiTIUS researchers José María Alonso and Ettore Mariotti will participate as speakers at a spring school organized within the framework of the European research project NL4XAI (Natural Language Technologies for Explainable Artificial Intelligence). During this event, hosted in Nancy (France), scientists from six different countries and from big tech companies such as Google and Orange will pool their expertise in using natural language to generate explanations of the decisions made by AI systems, in order to make them understandable to non-expert users.

According to EU legislation, humans have a right to an explanation of decisions that affect them when those decisions are made by artificial intelligence (AI) based systems. However, AI-based systems, which mainly learn automatically from data, often lack the required transparency. Scientists and industry are therefore organizing a Spring School as part of the NL4XAI research project, an initiative funded by the Horizon 2020 research and innovation programme within the framework of the European Union's investment in Explainable Artificial Intelligence.

Emerging as the first European Training Network (ETN) on Natural Language (NL) and Explainable AI (XAI), the project is a joint academic-industry research collaboration that brings together 19 beneficiaries and partners from six different European countries: France, Malta, Poland, Spain, the Netherlands, and the United Kingdom. The main goal of this 4-year initiative is to train early-stage researchers (ESRs; in general, Ph.D. students) who face the challenge of designing and implementing a new generation of self-explaining AI systems. The NL4XAI network also facilitates the sharing of state-of-the-art knowledge on Explainable AI.

Tackling inconsistencies

Natural Language Generation (NLG) is an artificial intelligence technique capable of generating text from various types of input, such as numerical data, traces, text, or knowledge bases, and thus provides a potentially powerful tool for explaining the reasoning of AI models. However, the models used in these techniques are not error-free: before they can be used to explain the reasoning of AI models, they need to be further improved. In particular, it must be ensured that the text they generate is faithful to the input, that it covers all the information present in the input, and, conversely, that it does not contain content that is irrelevant or even contradictory to the input.
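The two error types mentioned above, missing input content and unsupported ("hallucinated") content, can be illustrated with a minimal, purely hypothetical sketch that is not part of NL4XAI's methods: given structured input data and a generated sentence, naive string matching flags input values absent from the text and numbers in the text absent from the input.

```python
# Illustrative sketch only (not an NL4XAI tool): naive checks for two common
# NLG error types -- coverage (input content missing from the text) and
# faithfulness (content in the text that is not supported by the input).
import re


def coverage_errors(data: dict, text: str) -> list:
    """Return input values that never appear in the generated text."""
    return [str(v) for v in data.values() if str(v) not in text]


def hallucinated_numbers(data: dict, text: str) -> list:
    """Return numbers in the text that do not occur among the input values."""
    allowed = {str(v) for v in data.values()}
    return [n for n in re.findall(r"\d+", text) if n not in allowed]


# Hypothetical example: the generated sentence invents "7 shots".
data = {"team": "Celta", "goals": 3}
text = "Celta scored 3 goals and had 7 shots."

print(coverage_errors(data, text))       # [] -- all input values are covered
print(hallucinated_numbers(data, text))  # ['7'] -- unsupported by the input
```

Real error analysis of neural NLG output, as studied at the school, requires far more sophisticated semantic comparison; this toy version only makes the two error categories concrete.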

During this Spring School, the NL4XAI members will focus on explainable methods for Natural Language Generation (NLG), which aim to detect errors in the output of NLG models and to explain the sources of these errors. Specifically, attendees will follow a series of lectures and a hands-on workshop offering an in-depth analysis of the types of errors these models can produce. The lectures will provide the background necessary to understand neural NLG models and will be given both by industry practitioners (Orange, Lannion, France; and Google, London, UK) and by senior NL4XAI researchers. In addition, the workshop will allow the ESRs to study the errors made by existing NLG models and to reflect on both the implications of such errors and ways of remedying them.

Fifth gathering of the network

NL4XAI has already hosted four training events, including events focused on Ethics and Law in Natural Language Processing, natural language technology more broadly, and Interaction and Interfaces. In addition to these official training events, the ESRs have been learning from fellow scientists at different universities and companies. In 2021, there were fruitful exchanges with the University of Malta, Maastricht University, Warsaw University of Technology (WUT), IIIA-CSIC, and CiTIUS-USC. With fewer COVID-19 related travel restrictions in 2022, the ESRs will also be able to visit Utrecht University, the University of Twente, TU Delft, and ARG-tech, as well as the companies Orange, Wizenoze, and Info Support.

H2020 Training Network

NL4XAI is funded by the Horizon 2020 research and innovation programme, through a Marie Skłodowska-Curie grant, within the framework of the European Union's investment in Explainable Artificial Intelligence. The network is coordinated by the research team at the Research Centre in Intelligent Technology of the University of Santiago de Compostela (CiTIUS-USC), headed by Senén Barro. NL4XAI is a joint academic-industry research network that brings together 19 beneficiaries and partners from six different European countries (France, Malta, Poland, Spain, the Netherlands, and the United Kingdom). The partners comprise two national research institutions (IIIA-CSIC, CNRS), eleven universities (University of Aberdeen, University of Dundee, L-Universitá ta' Malta, Delft University of Technology, Utrecht University, University of Twente, Warsaw University of Technology, Université de Lorraine, Universidade de Santiago de Compostela, Universitat Autònoma de Barcelona, and Maastricht University), and six private companies (Indra, Accenture, Orange, Wizenoze, Arria, Info Support).

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860621.

This communication reflects only the author's view; the Agency and the Commission are not responsible for any use that may be made of the information it contains.