
Is Neural Language Model Perplexity Related to Readability?

This paper explores the relationship between Neural Language Model (NLM) perplexity and sentence readability. Starting from the evidence that NLMs implicitly acquire sophisticated linguistic knowledge from a huge amount of training data, our goal is …
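As a rough illustration of the perplexity side of this question, the sketch below computes sentence-level perplexity with an off-the-shelf causal language model. The model choice (GPT-2 via Hugging Face Transformers) and the example sentences are assumptions for illustration, not the paper's setup or its readability measures.

```python
# A minimal sketch of sentence-level perplexity with a causal language model.
# The model ("gpt2") and the examples are illustrative assumptions.
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    """Perplexity = exp(mean negative log-likelihood of the tokens)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels equal to the inputs, the model returns the mean
        # cross-entropy loss over the sequence.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())

# Intuition check: a plainer sentence is often assigned lower perplexity.
print(sentence_perplexity("The cat sat on the mat."))
print(sentence_perplexity("Notwithstanding the aforementioned stipulations, compliance is mandated."))
```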

Italian Transformers Under the Linguistic Lens

In this paper we present an in-depth investigation of the linguistic knowledge encoded by the transformer models currently available for the Italian language. In particular, we investigate whether and how using different architectures of probing …
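For readers unfamiliar with probing, the sketch below shows one common way to obtain layer-wise sentence representations from a transformer, which a probing classifier can then consume. The specific Italian checkpoint and the mean-pooling choice are assumptions for illustration, not the paper's exact procedure.

```python
# A minimal sketch of extracting layer-wise representations for probing.
# The checkpoint name is an illustrative assumption; the paper surveys
# several Italian transformers, not necessarily this one.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "dbmdz/bert-base-italian-cased"  # assumed choice
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def layerwise_sentence_vectors(sentence: str) -> torch.Tensor:
    """Return one mean-pooled vector per layer: shape (n_layers + 1, hidden_size)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden_states = model(**inputs).hidden_states  # embeddings + each layer
    # Mean-pool over tokens so each layer yields a single sentence vector
    # that a probing classifier can consume.
    return torch.stack([h.mean(dim=1).squeeze(0) for h in hidden_states])

vectors = layerwise_sentence_vectors("Il gatto dorme sul divano.")
print(vectors.shape)  # e.g. (13, 768) for a 12-layer base model
```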

PRELEARN @ EVALITA 2020: Overview of the Prerequisite Relation Learning Task for Italian

The Prerequisite Relation Learning (PRELEARN) task is the EVALITA 2020 shared task on concept prerequisite learning, which consists of classifying prerequisite relations between pairs of concepts, distinguishing between prerequisite pairs and …
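To make the task formulation concrete, the toy sketch below frames prerequisite learning as binary classification over concept pairs. The pair features, embeddings, and examples are invented for illustration and are not the shared task's data or baselines.

```python
# A minimal sketch of prerequisite-pair classification: given (A, B),
# predict whether A is a prerequisite of B. Toy data, not task baselines.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(vec_a: np.ndarray, vec_b: np.ndarray) -> np.ndarray:
    """Combine two concept representations into one pair feature vector."""
    return np.concatenate([vec_a, vec_b, np.abs(vec_a - vec_b)])

rng = np.random.default_rng(0)
# Toy stand-ins for concept embeddings (e.g. derived from Wikipedia page text).
concepts = {name: rng.normal(size=50) for name in
            ["algebra", "equations", "calculus", "derivatives"]}

pairs = [("algebra", "equations", 1),      # algebra -> equations (prerequisite)
         ("equations", "algebra", 0),
         ("calculus", "derivatives", 1),
         ("derivatives", "calculus", 0)]

X = np.stack([pair_features(concepts[a], concepts[b]) for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X))  # sanity check on the toy training pairs
```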

ATE_ABSITA @ EVALITA2020: Overview of the Aspect Term Extraction and Aspect-based Sentiment Analysis Task

In recent years, novel sentiment analysis techniques for assessing aspect-based opinions in product reviews have become a key component in providing valuable insights to both consumers and businesses. To this end, we propose ATE …

Linguistic Profiling of a Neural Language Model

In this paper we investigate the linguistic knowledge learned by a Neural Language Model (NLM) before and after a fine-tuning process and how this knowledge affects its predictions in several classification tasks. We use a wide set of probing …
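The sketch below illustrates the general probing recipe hinted at here: freeze a model, extract sentence representations, and train a simple probe to predict a linguistic property, so that a pretrained and a fine-tuned checkpoint can be scored with the same probe. The checkpoint names and the toy property (sentence length) are assumptions, not the paper's feature set.

```python
# A minimal probing sketch: a linear probe on frozen sentence representations.
# Checkpoint names and the toy property are illustrative assumptions.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from transformers import AutoTokenizer, AutoModel

def probe_score(checkpoint: str, sentences: list[str], labels: np.ndarray) -> float:
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModel.from_pretrained(checkpoint)
    model.eval()
    feats = []
    for s in sentences:
        inputs = tokenizer(s, return_tensors="pt")
        with torch.no_grad():
            # Mean-pooled last hidden state as a frozen sentence representation.
            feats.append(model(**inputs).last_hidden_state.mean(dim=1).squeeze(0).numpy())
    X = np.stack(feats)
    # Cross-validated R^2 of a linear probe predicting the property.
    return cross_val_score(Ridge(), X, labels, cv=3).mean()

sentences = ["Short.", "A slightly longer sentence here.",
             "This one is a fairly long example sentence for the probe.",
             "Tiny.", "Another medium-length example sentence.", "Ok."]
labels = np.array([len(s.split()) for s in sentences], dtype=float)

# Hypothetical comparison between a pretrained and a fine-tuned checkpoint.
print(probe_score("bert-base-uncased", sentences, labels))
# print(probe_score("path/to/fine-tuned-checkpoint", sentences, labels))
```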

Contextual and Non-Contextual Word Embeddings: an in-depth Linguistic Investigation

In this paper we present a comparison between the linguistic knowledge encoded in the internal representations of a contextual Language Model (BERT) and a context-independent one (Word2vec). We use a wide set of probing tasks, each of which …
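As a minimal illustration of the contrast being probed, the sketch below derives a single static Word2vec vector for a word type and two different BERT vectors for the same word in two contexts. The tiny corpus and the model choices are assumptions for illustration only.

```python
# Static vs contextual embeddings: Word2vec gives one vector per word type,
# BERT gives one vector per word occurrence. Toy corpus, assumed models.
import torch
from gensim.models import Word2Vec
from transformers import AutoTokenizer, AutoModel

# Static embeddings: "bank" gets a single vector regardless of context.
corpus = [["the", "bank", "approved", "the", "loan"],
          ["they", "sat", "on", "the", "river", "bank"]]
w2v = Word2Vec(corpus, vector_size=50, min_count=1, seed=0)
static_bank = w2v.wv["bank"]

# Contextual embeddings: "bank" gets a different vector in each sentence.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        states = bert(**inputs).last_hidden_state.squeeze(0)
    idx = tok.convert_ids_to_tokens(inputs["input_ids"][0].tolist()).index(word)
    return states[idx]

v1 = word_vector("the bank approved the loan", "bank")
v2 = word_vector("they sat on the river bank", "bank")
cos = torch.nn.functional.cosine_similarity(v1, v2, dim=0)
print(static_bank.shape, float(cos))  # same word type, two contextual vectors
```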

Tracking the Evolution of Written Language Competence in L2 Spanish Learners

In this paper we present an NLP-based approach for tracking the evolution of written language competence in L2 Spanish learners using a wide range of linguistic features automatically extracted from students’ written productions. Beyond reporting …
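The sketch below shows, in a very reduced form, what automatic extraction of linguistic features from a learner's text can look like. The Spanish spaCy model and the handful of features are illustrative assumptions; the paper relies on a much wider feature set.

```python
# A minimal sketch of linguistic-profiling features for a learner's text.
# Model name and feature set are illustrative assumptions only.
import spacy

nlp = spacy.load("es_core_news_sm")  # assumes this spaCy model is installed

def profile(text: str) -> dict:
    doc = nlp(text)
    sents = list(doc.sents)
    tokens = [t for t in doc if not t.is_punct and not t.is_space]
    return {
        "n_sentences": len(sents),
        "avg_sentence_length": len(tokens) / max(len(sents), 1),
        "lexical_density": sum(t.pos_ in {"NOUN", "VERB", "ADJ", "ADV"}
                               for t in tokens) / max(len(tokens), 1),
        # Rough proxy: subordinating markers ("cuando", "porque", ...).
        "subordinate_markers": sum(t.dep_ == "mark" for t in doc),
        "type_token_ratio": len({t.lower_ for t in tokens}) / max(len(tokens), 1),
    }

print(profile("Cuando llegué a casa, preparé la cena porque tenía mucha hambre."))
```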

Prerequisite or Not Prerequisite? That's the problem! An NLP-based Approach for Concept Prerequisite Learning

This paper presents a method for classifying prerequisite relations between educational concepts. The proposed system was developed by adapting a classification algorithm designed for sequencing Learning Objects to the task of ordering concepts …

Linguistically-Driven Strategy for Concept Prerequisites Learning on Italian

We present a new concept prerequisite learning method for Learning Object (LO) ordering that exploits only linguistic features extracted from textual educational resources. The method was tested in cross- and in-domain scenarios both for Italian …

Trattamento Automatico della Lingua per la creazione di percorsi didattici personalizzati

This contribution illustrates the activities carried out by the ItaliaNLP Lab in the context of education, showing how Natural Language Processing (Trattamento Automatico della Lingua, TAL) tools for the linguistic profiling of texts and content access …