On April 7 I will give an invited talk at the REFSQ 2025 Workshop NLP4RE.
Title: Evaluating Linguistic Abilities of Neural Language Models
Abstract: The field of Natural Language Processing (NLP) has witnessed remarkable advancements in recent years, driven largely by the shift from traditional approaches to state-of-the-art neural network-based algorithms. Among these, Large Language Models (LLMs) have achieved impressive performance across a wide range of tasks and in generating coherent, contextually relevant text. This improvement, however, comes at the cost of interpretability, since deep neural models offer little transparency about their inner workings and abilities. In response, a growing body of research is dedicated to evaluating and interpreting LLMs, aiming to shed light on the inner workings and linguistic abilities encoded by these systems. This talk explores recent studies of these abilities, highlighting how such insights enhance our understanding of model behaviour across various applications.
Location
Polytechnic University of Catalonia, North Campus, Barcelona