The scientific and technological objectives of QALL-ME pursued three crucial directions: multilingual open-domain QA, user-driven and context-aware QA, and learning technologies for QA. The specific research objectives of the project included state-of-the-art advances in the complexity of the questions handled by the system (e.g., how-questions); the development of a web-based architecture for cross-language QA (i.e., question in one language, answer in a different language); the realization of real-time QA systems for concrete applications; the integration of temporal and spatial context for both question interpretation and answer extraction; the development of a robust framework for applying minimally supervised machine learning algorithms to QA tasks; and the integration of mature automatic speech recognition technologies within the open-domain question answering framework.
Our pick of the week by @FBKZhihangXie: "PHRASED: Phrase Dictionary Biasing for Speech Translation" by Peidong Wang, Jian Xue, Rui Zhao, @ChenJunkun, Aswin Shanmugam Subramanian, and Jinyu Li (2025).
#Speech #SpeechAI #Translation #ST #SpeechTranslation
🚀 Boost rare-phrase translation in speech! Uses **bilingual dictionaries** to dynamically bias outputs.
✅ **+21%** recall in streaming ST
✅ **+85%** recall with multimodal LLMs
🔗: http://arxiv.org/abs/2506.09175
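For the curious, here is a minimal, hypothetical sketch of what dictionary biasing can look like at decoding time: when a source-side phrase from a bilingual dictionary is detected, the target-side tokens it maps to receive a bonus on the decoder logits. The toy dictionary, vocabulary, and `bias_logits` helper below are illustrative assumptions, not the PHRASED implementation.

```python
# Illustrative sketch (not the PHRASED implementation) of phrase dictionary
# biasing: boost decoder logits for target tokens whose source phrase appears
# in the input. Toy dictionary and vocabulary are hypothetical.
import numpy as np

# Toy bilingual dictionary: source phrase -> target phrase
BILINGUAL_DICT = {"Trento": "Trient", "FBK": "FBK"}

# Toy target vocabulary; a real system would use a subword tokenizer.
VOCAB = {0: "<eos>", 1: "Trient", 2: "FBK", 3: "Stadt", 4: "die"}
TOKEN_TO_ID = {tok: i for i, tok in VOCAB.items()}


def bias_logits(logits: np.ndarray, source_text: str, bias: float = 4.0) -> np.ndarray:
    """Add a fixed bonus to target tokens that begin a dictionary phrase
    whose source side occurs in the (partially recognized) source text."""
    biased = logits.copy()
    for src_phrase, tgt_phrase in BILINGUAL_DICT.items():
        if src_phrase in source_text:
            tok_id = TOKEN_TO_ID.get(tgt_phrase.split()[0])
            if tok_id is not None:
                biased[tok_id] += bias
    return biased


# Usage: the rare entity "Trient" only wins once "Trento" is heard in the source.
logits = np.array([0.1, 0.2, 0.0, 1.5, 1.0])
print(VOCAB[int(np.argmax(logits))])                                   # "Stadt"
print(VOCAB[int(np.argmax(bias_logits(logits, "welcome to Trento")))])  # "Trient"
```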
FAMA is the first open-science speech foundation model for Italian and English, developed by FBK. It recognizes and translates speech using only public data and tools: over 150,000 hours of open audio, with fully accessible code and processing pipelines.
@fbk_stek @fbk_mt
https://magazine.fbk.eu/it/news/la-prima-famiglia-di-modelli-open-science-per-il-riconoscimento-vocale-e-la-traduzione-del-parlato/
Emanuele Pianta Award for the Best Master’s Thesis in Computational Linguistics submitted at an Italian university and defended between August 1st, 2024 and July 31st, 2025
- Deadline: August 1st, 2025 (11:59 pm CEST)
- All details online: https://clic2025.unica.it/emanuele-pianta-award-for-the-best-masters-thesis/
Our pick of the week by @DennisFucci: "Speech Representation Analysis Based on Inter- and Intra-Model Similarities" by Yassine El Kheir, Ahmed Ali, and Shammur Absar Chowdhury (ICASSP Workshops 2024)
#speech #speechtech
Findings from https://ieeexplore.ieee.org/document/10669908 show that speech SSL models converge on similar embedding spaces, but via different routes. While overall representations align, individual neurons learn distinct localized concepts.
Interesting read! @fbk_mt
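As a rough illustration of this kind of representation analysis (an assumed setup, not the paper's exact pipeline), the sketch below compares two activation matrices with linear CKA, a standard similarity measure: a rotated copy of the same activations scores 1 even though no individual dimension ("neuron") matches, while unrelated activations score near 0.

```python
# Illustrative sketch: inter-model similarity of layer activations via linear
# CKA. Random data stands in for (frames x hidden_dim) activations extracted
# from two speech SSL models on the same audio; sizes are toy values.
import numpy as np


def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between activation matrices that
    share the same rows (time frames), possibly with different widths."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro"))


rng = np.random.default_rng(0)
frames, dim = 2000, 64                        # toy sizes, not real model widths
acts_a = rng.normal(size=(frames, dim))       # "model A" layer activations
rotation, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
acts_b = acts_a @ rotation                    # same geometry, different "neurons"
acts_c = rng.normal(size=(frames, dim))       # unrelated activations

print(round(linear_cka(acts_a, acts_b), 3))   # 1.0: spaces align despite differing neurons
print(round(linear_cka(acts_a, acts_c), 3))   # close to 0: no shared structure
```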