Abstract
This paper explores the relationship between Neural Language Model (NLM) perplexity and sentence readability. Starting from the evidence that NLMs implicitly acquire sophisticated linguistic knowledge from large amounts of training data, our goal is to investigate whether perplexity is affected by the linguistic features used to automatically assess sentence readability, and whether the two metrics are correlated. Our findings suggest that the correlation is quite weak and that the two metrics are affected by different linguistic phenomena.
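As a rough illustration of the comparison described in the abstract (not the authors' actual experimental setup), the sketch below scores a few sentences with a pretrained autoregressive language model and computes a rank correlation against readability scores. The gpt2 checkpoint, the example sentences, and the readability values are all placeholder assumptions for demonstration purposes.

# Illustrative sketch only: sentence-level perplexity from a pretrained LM,
# correlated with externally supplied readability scores. The model name,
# sentences, and readability values below are placeholder assumptions.
import torch
from scipy.stats import spearmanr
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_name = "gpt2"  # assumption: any autoregressive LM checkpoint would do
tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
model.eval()

def sentence_perplexity(sentence: str) -> float:
    """Perplexity = exp(mean negative log-likelihood per token)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

sentences = [
    "The cat sat on the mat.",
    "Readability formulas often count syllables and sentence length.",
    "Notwithstanding its ostensibly labyrinthine syntax, the clause parses.",
]
readability = [95.0, 60.0, 30.0]  # placeholder scores from any readability tool

ppl = [sentence_perplexity(s) for s in sentences]
rho, p = spearmanr(ppl, readability)
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")

A rank correlation such as Spearman's rho is a natural choice here, since perplexity and readability scores live on different scales and only their relative orderings are being compared.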
Citation
@inproceedings{miaschi2020neural,
  title={Is Neural Language Model Perplexity Related to Readability?},
  author={Miaschi, Alessio and Alzetta, Chiara and Brunato, Dominique and Dell'Orletta, Felice and Venturi, Giulia},
  booktitle={Proceedings of the Seventh Italian Conference on Computational Linguistics (CLiC-it)},
  year={2020}
}