MateCat worked at a new frontier of Computer Assisted Translation (CAT) technology: how to effectively and ergonomically integrate Machine Translation (MT) into the human translation workflow. At a time when MT was mainly trained with the objective of producing the most comprehensible output, MateCat targeted the development of MT technology aimed at minimizing the translator’s post-editing effort. To this end, MateCat developed an enhanced web-based CAT tool offering new MT capabilities, such as automatic adaptation to the translated content, online learning from user corrections, and automatic quality estimation.
Our pick of the week by @DennisFucci: "Speech Representation Analysis Based on Inter- and Intra-Model Similarities" by Yassine El Kheir, Ahmed Ali, and Shammur Absar Chowdhury (ICASSP Workshops 2024)
#speech #speechtech
Findings from https://ieeexplore.ieee.org/document/10669908 show that speech SSL models converge on similar embedding spaces, but via different routes. While overall representations align, individual neurons learn distinct localized concepts.
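For a sense of how inter-model representation similarity can be probed, here is a minimal, hypothetical sketch using linear CKA on toy embeddings. This is not the paper's analysis pipeline; the data and names below are placeholders.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices of shape (n_samples, dim). Higher = more similar geometry."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# Toy stand-ins for pooled embeddings extracted from two SSL speech models:
# a shared component plus model-specific noise, so CKA comes out high but not 1.
rng = np.random.default_rng(0)
shared = rng.normal(size=(500, 32))
reps_model_a = np.hstack([shared, rng.normal(size=(500, 16))])
reps_model_b = np.hstack([shared @ rng.normal(size=(32, 32)), rng.normal(size=(500, 16))])

print(f"CKA(A, B) = {linear_cka(reps_model_a, reps_model_b):.3f}")
```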
Interesting read! @fbk_mt
What do Italians really ask of artificial intelligence?
FBK, in collaboration with RiTA, launches a survey open to everyone to understand real uses, habits, and needs.
Taking part takes just 10 minutes; find out more: https://magazine.fbk.eu/it/news/italiani-e-ia-cosa-chiediamo-veramente-allintelligenza-artificiale/
🚀 Last call for the Model Compression for Machine Translation task at #WMT2025 (co-located with #EMNLP2025)!
Test data out on June 19 ➡️ 2 weeks for evaluation!
Can you shrink an LLM and keep translation quality high?
👉 https://www2.statmt.org/wmt25/model-compression.html #NLP #ML #LLM #ModelCompression
Our pick of the week by @beomseok_lee_: "ALAS: Measuring Latent Speech-Text Alignment For Spoken Language Understanding In Multimodal LLMs" by Pooneh Mousavi, @yingzhi_wang, @mirco_ravanelli, and @CemSubakan (2025)
#SLU #speech #multimodal #LLM
Speech-language models show promise in multimodal tasks—but how well are speech & text actually aligned? 🤔
This paper https://arxiv.org/abs/2505.19937 proposes a new metric to measure layer-wise correlation between the two, with a focus on SLU tasks. 🔍🗣️📄
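As an illustration only (not the ALAS metric itself), a layer-wise alignment check might average the cosine similarity between paired speech and text embeddings at each layer. Everything below is a hypothetical sketch with placeholder data.

```python
import numpy as np

def layerwise_alignment(speech_layers, text_layers):
    """For each layer, average cosine similarity between paired
    (speech, text) utterance embeddings of shape (n_pairs, dim)."""
    scores = []
    for S, T in zip(speech_layers, text_layers):
        S = S / np.linalg.norm(S, axis=1, keepdims=True)
        T = T / np.linalg.norm(T, axis=1, keepdims=True)
        scores.append(float((S * T).sum(axis=1).mean()))
    return scores

# Placeholder: 4 layers, 100 paired utterances, 64-dim pooled embeddings,
# with noise shrinking at deeper layers so alignment increases with depth.
rng = np.random.default_rng(0)
speech = [rng.normal(size=(100, 64)) for _ in range(4)]
text = [s + rng.normal(scale=1.5 - 0.3 * i, size=s.shape) for i, s in enumerate(speech)]

for layer, score in enumerate(layerwise_alignment(speech, text)):
    print(f"layer {layer}: alignment = {score:.3f}")
```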