Our paper ‘Contextual and Non-Contextual Word Embeddings: an in-depth Linguistic Investigation’ (with Felice Dell’Orletta) has been accepted at RepL4NLP-2020! In this paper we performed an in-depth linguistic analysis aimed at understanding the implicit knowledge encoded in a contextual and a non-contextual Neural Language Model (NLM). In particular: we evaluated which method is best for obtaining sentence-level representations from single-word embeddings; we compared the results obtained by the two NLMs across the different combining methods; and we studied whether the contextualized model encodes sentence-level properties within its single-word representations.
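To give a flavour of what "combining methods" means here, the sketch below shows two standard ways of pooling word embeddings into a single sentence vector (mean and element-wise max). This is a minimal illustration with random vectors standing in for real NLM embeddings, not the paper's actual pipeline.

```python
import numpy as np

# Hypothetical embeddings for a 4-token sentence, dimension 5.
# In practice these would come from an NLM (contextual or not).
rng = np.random.default_rng(0)
word_embeddings = rng.normal(size=(4, 5))

# Two common combining methods for sentence-level representations:
sentence_mean = word_embeddings.mean(axis=0)  # average pooling
sentence_max = word_embeddings.max(axis=0)    # element-wise max pooling

# Both yield a single fixed-size sentence vector.
print(sentence_mean.shape, sentence_max.shape)
```

Each pooled vector can then be probed (e.g. with a simple regressor) for the sentence-level linguistic properties it encodes.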