Abstract
EVALITA is a shared evaluation campaign designed to assess and compare Natural Language Processing (NLP) and Speech Technologies through tasks proposed by the Italian research community. It provides a common framework for addressing open linguistic challenges and real-world applications, with increasing attention to multilingual and multimodal settings. The 2026 edition included 10 tasks and attracted 54 participating groups from 13 countries, confirming the growing international interest in the initiative. The workshop presents the results of the evaluation campaign, highlighting the widespread adoption of Large Language Models and the organizers’ effort in designing challenging tasks aimed at meaningfully evaluating and stress-testing such models.
Citation
@inproceedings{evalita2026overview,
  title     = {EVALITA 2026: Overview of the 9th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian},
  author    = {Cutugno, Francesco and Miaschi, Alessio and Aprosio, Alessio Palmero and Rambelli, Giulia and Siciliani, Lucia and Stranisci, Marco Antonio},
  booktitle = {Proceedings of the Ninth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2026)},
  publisher = {CEUR.org},
  year      = {2026},
  month     = {February},
  address   = {Bari, Italy}
}