Building on the philosophy of the Semantic Web, Ontotext exploits text processing and automatic reasoning technologies to extract knowledge from texts and organise it conceptually in an ontology. Unlike common search engines, the Ontotext Portal accesses the ontology's concepts and entities directly and presents the user with structured information instead of mere text snippets. For each entity, the Ontotext Portal offers four views: Articles (lists all the documents in which the entity is mentioned), Citografo (shows how often it is mentioned), Opinions (shows how often opinions are expressed about it), and Record (provides additional information about it).
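A minimal sketch of how these four per-entity views might be modelled as a data structure (all names are hypothetical; the portal's actual data model is not documented here):

```python
from dataclasses import dataclass, field

# Hypothetical container mirroring the four per-entity views described above.
@dataclass
class EntityViews:
    articles: list[str] = field(default_factory=list)  # documents mentioning the entity
    citografo: int = 0                                  # how often the entity is mentioned
    opinions: int = 0                                   # how often opinions are expressed about it
    record: dict = field(default_factory=dict)          # extra structured information
```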
Emanuele Pianta Award for the Best Master's Thesis in Computational Linguistics, submitted to an Italian university and defended between August 1st, 2024 and July 31st, 2025
- Deadline: August 1st, 2025 (11:59 pm CEST)
- All details online: https://clic2025.unica.it/emanuele-pianta-award-for-the-best-masters-thesis/
Our pick of the week by @DennisFucci: "Speech Representation Analysis Based on Inter- and Intra-Model Similarities" by Yassine El Kheir, Ahmed Ali, and Shammur Absar Chowdhury (ICASSP Workshops 2024)
#speech #speechtech
Findings from https://ieeexplore.ieee.org/document/10669908 show that speech self-supervised (SSL) models converge on similar embedding spaces, but via different routes: while the models' overall representations align, individual neurons learn distinct, localized concepts.
Interesting read! @fbk_mt
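A minimal sketch of one standard way to quantify such inter-model similarity, linear centered kernel alignment (CKA) between two models' layer embeddings (the toy data and "model A"/"model B" names are placeholders; the paper's exact analysis may differ):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation
    matrices of shape (n_samples, dim); dims may differ across models."""
    # Center each feature dimension
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-based similarity: ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    xty = np.linalg.norm(X.T @ Y, "fro") ** 2
    xtx = np.linalg.norm(X.T @ X, "fro")
    yty = np.linalg.norm(Y.T @ Y, "fro")
    return xty / (xtx * yty)

# Toy example: embeddings from two hypothetical SSL models for the
# same 200 utterances, sharing latent structure but differing in basis.
rng = np.random.default_rng(0)
shared = rng.normal(size=(200, 64))          # shared latent structure
emb_a = shared @ rng.normal(size=(64, 768))  # "model A" layer output
emb_b = shared @ rng.normal(size=(64, 512))  # "model B" layer output
print(f"CKA(A, B) = {linear_cka(emb_a, emb_b):.3f}")  # high: spaces align
```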
What do Italians really ask of artificial intelligence?
FBK, in collaboration with RiTA, is launching a survey open to everyone, to understand real usage, habits, and needs.
Taking part takes just 10 minutes; find out more: https://magazine.fbk.eu/it/news/italiani-e-ia-cosa-chiediamo-veramente-allintelligenza-artificiale/
🚀 Last call for the Model Compression for Machine Translation task at #WMT2025 (co-located with #EMNLP2025)!
Test data out on June 19 ➡️ 2 weeks for evaluation!
Can you shrink an LLM and keep translation quality high?
👉 https://www2.statmt.org/wmt25/model-compression.html #NLP #ML #LLM #ModelCompression
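A minimal sketch of one common compression baseline, post-training dynamic quantization in PyTorch (the tiny stand-in model is illustrative only; the shared task's actual models and constraints are defined on the page above):

```python
import io

import torch
import torch.nn as nn

# Toy stand-in for a translation model's dense layers; a real entry
# would compress an actual LLM checkpoint.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
)

# Post-training dynamic quantization: weights are stored as int8 and
# activations are quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m):
    """Approximate serialized size of a model in megabytes."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"fp32: {size_mb(model):.1f} MB -> int8: {size_mb(quantized):.1f} MB")
```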