The Machine Translation group at Fondazione Bruno Kessler specializes in speech and text technologies for multilingual communication. We work on innovative neural architectures and multimodal resources for a range of topics and applications (e.g. subtitling, interpreting, gender inclusivity). We are part of the Digital Industry center.
If you are fascinated by languages, cutting-edge technology, and research challenges, check out our Join Us page!
1 Paper Accepted at ACL 2025 Main and 2 Papers at IWSLT 2025
Jun 5, 2025
3 Papers Accepted at INTERSPEECH 2025
May 27, 2025
1 Paper Accepted at COLING 2025
Jan 14, 2025
Our pick of the week by @DennisFucci: "Speech Representation Analysis Based on Inter- and Intra-Model Similarities" by Yassine El Kheir, Ahmed Ali, and Shammur Absar Chowdhury (ICASSP Workshops 2024)
#speech #speechtech
Findings from https://ieeexplore.ieee.org/document/10669908 show that speech SSL models converge on similar embedding spaces, but via different routes. While overall representations align, individual neurons learn distinct localized concepts.
Interesting read! @fbk_mt
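The comparison above rests on representation-similarity analysis. As a rough, hedged illustration only (not the authors' exact setup), linear CKA is one common way to quantify how similar two models' embedding spaces are; the model names, shapes, and random data below are purely illustrative assumptions.

```python
# Minimal sketch: linear CKA between two models' representations of the same frames.
# Everything here (shapes, "model A/B", toy data) is an illustrative assumption.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices shaped (n_frames, dim)."""
    X = X - X.mean(axis=0, keepdims=True)  # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return float(cross / (norm_x * norm_y))

# Toy data: the same 200 speech frames encoded by two hypothetical SSL models.
rng = np.random.default_rng(0)
reps_a = rng.standard_normal((200, 768))            # e.g. one layer of model A
reps_b = reps_a @ rng.standard_normal((768, 1024))  # a projected "view" from model B
print(f"CKA(A, B) = {linear_cka(reps_a, reps_b):.3f}")  # high value: similar geometry
```

A CKA close to 1 indicates that the two models organize the same utterances in similar ways at the level of the whole embedding space, even when their individual dimensions (neurons) encode different localized concepts.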
What do Italians really ask of artificial intelligence?
FBK, in collaboration with RiTA, launches a survey open to everyone to understand real-world uses, habits, and needs.
It only takes 10 minutes to take part; find out more: https://magazine.fbk.eu/it/news/italiani-e-ia-cosa-chiediamo-veramente-allintelligenza-artificiale/
🚀 Last call for the Model Compression for Machine Translation task at #WMT2025 (co-located with #EMNLP2025)!
Test data out on June 19 ➡️ 2 weeks for evaluation!
Can you shrink an LLM and keep translation quality high?
👉 https://www2.statmt.org/wmt25/model-compression.html #NLP #ML #LLM #ModelCompression
Our pick of the week by @beomseok_lee_: "ALAS: Measuring Latent Speech-Text Alignment For Spoken Language Understanding In Multimodal LLMs" by Pooneh Mousavi, @yingzhi_wang, @mirco_ravanelli, and @CemSubakan (2025)
#SLU #speech #multimodal #LLM
Speech-language models show promise in multimodal tasks—but how well are speech & text actually aligned? 🤔
This paper https://arxiv.org/abs/2505.19937 proposes a new metric to measure layer-wise correlation between the two, with a focus on SLU tasks. 🔍🗣️📄
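As a loose sketch only (this is not the ALAS metric proposed in the paper), one simple way to probe layer-wise speech-text alignment is an RSA-style correlation of pairwise distances between mean-pooled speech and text representations at each layer; all names, shapes, and data below are assumptions.

```python
# Illustrative sketch of layer-wise speech-text alignment (NOT the paper's ALAS metric):
# correlate each layer's pairwise cosine distances across the two modalities.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def layerwise_alignment(speech_layers, text_layers):
    """speech_layers / text_layers: one (n_utterances, dim) array per layer,
    already mean-pooled over frames / tokens for the same paired utterances."""
    scores = []
    for s, t in zip(speech_layers, text_layers):
        # RSA-style proxy: correlate the two modalities' pairwise cosine distances.
        rho, _ = spearmanr(pdist(s, metric="cosine"), pdist(t, metric="cosine"))
        scores.append(float(rho))
    return scores

# Toy data: 3 layers, 50 paired utterances, hypothetical hidden sizes.
rng = np.random.default_rng(1)
speech = [rng.standard_normal((50, 1024)) for _ in range(3)]
text = [s @ rng.standard_normal((1024, 896)) for s in speech]  # loosely aligned text views
print(layerwise_alignment(speech, text))  # one alignment score per layer
```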