Abstract

In this paper, we present an in-depth investigation of the linguistic knowledge encoded by the Transformer models currently available for the Italian language. In particular, we investigate how the complexity of two different probing-model architectures affects the performance of the Transformers in encoding a wide spectrum of linguistic features. Moreover, we explore how this implicit knowledge varies according to different textual genres and language varieties.


Citation
@article{probingLinguisticKnowledge_Miaschi2022,
  author               = "Alessio Miaschi and Gabriele Sarti and Dominique Brunato and Felice Dell'Orletta and Giulia Venturi",
  title                = "Probing Linguistic Knowledge in Italian Neural Language Models across Language Varieties",
  publisher            = "Academia University Press",
  year                 = 2022,
  journal              = "Italian Journal of Computational Linguistics (IJCoL)",
  volume               = "8",
  number               = "1",
  pages                = "25--44",
}