Our very own @sarapapi presenting FAMA at #clicit2025:
📄 Paper: https://clic2025.unica.it/wp-content/uploads/2025/09/80_main_long.pdf
🤗 Models: https://hf.co/collections/FBK-MT/fama-683425df3fb2b3171e0cdc9e
📊 Data: https://hf.co/datasets/FBK-MT/fama-data
💻 Code: https://github.com/hlt-mt/FBK-fairseq
Joint work with the @fbk_stek group @FBK_research
🎉 Excited to present FAMA, the first large-scale #OpenScience #Speech foundation model for 🇮🇹 Italian & 🇬🇧 English, at #clicit2025 (17:30–18:45 oral session)!
🤗 Models: https://hf.co/collections/FBK-MT/fama-683425df3fb2b3171e0cdc9e
📊 Data: https://hf.co/datasets/FBK-MT/fama-data
💻 Code: https://github.com/hlt-mt/FBK-fairseq
#SpeechTech
Our pick of the week by @sarapapi: "Retrieval-Augmented Generation for AI-Generated Content: A Survey" by Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, Jie Jiang, Bin Cui.
#RAG #survey
An interesting survey about #RAG and its interplay with #multimodality: Retrieval-Augmented Generation for AI-Generated Content: A Survey
https://arxiv.org/pdf/2402.19473
@fbk_mt
Our pick of the week by @mgaido91: "Context-Driven Dynamic #Pruning for Large #Speech #Foundation Models" by Masao Someki, Shikhar Bharadwaj, Atharva Anand Joshi, Chyi-Jiunn Lin, Jinchuan Tian, Jee-weon Jung, @shinjiw_at_cmu, et al. (#INTERSPEECH2025).
As we are organizing the second edition of the IWSLT model compression task, we are happy to see new work on pruning large speech models based on external context (speaker, acoustic events, language)!
https://arxiv.org/pdf/2505.18860
@fbk_mt