MateCat worked at a new frontier of Computer Assisted Translation (CAT) technology: how to effectively and ergonomically integrate Machine Translation (MT) into the human translation workflow. At a time when MT was mainly trained with the objective of producing the most comprehensible output, MateCat targeted the development of MT technology aimed at minimizing the translator’s post-editing effort. To this end, MateCat developed an enhanced web-based CAT tool offering new MT capabilities, such as automatic adaptation to the translated content, online learning from user corrections, and automatic quality estimation.
Our pick of the week by @FBKZhihangXie: "Adversarial Speech-Text Pre-Training for Speech Translation" by Chenxuan Liu, Liping Chen, Weitai Zhang, Xiaoxi Li, Peiwang Tang, Mingjia Yu, Sreyan Ghosh, and Zhongyi Ye (ICASSP 2025)
#speech #speechprocessing #speechtech #translation
🚀 AdvST: Adversarial training aligns speech and text distributions without parallel data! Combines adversarial learning + hidden-state swapping to fix length mismatch & boost low-resource speech translation. https://ieeexplore.ieee.org/document/10888294
A special evening in Rome to talk about Physical AI and Europe’s role in shaping this new frontier.
Partners from across Europe came together to present the DVPS project and to connect with key people from public institutions, embassies, industry, and national & international media.
Thrilled to be part of this amazing project and team!
🚀 DVPS has launched at Translated's HQ!
70 researchers from 20 institutions across 9 countries unite to build next-gen multimodal foundation models that learn from real-world interaction.
A new European AI journey begins.
#DVPS #PhysicalAI #HorizonEurope #MultimodalAI
Our pick of the week by @FBKZhihangXie: "PHRASED: Phrase Dictionary Biasing for Speech Translation" by Peidong Wang, Jian Xue, Rui Zhao, @ChenJunkun, Aswin Shanmugam Subramanian, and Jinyu Li (2025).
#Speech #SpeechAI #Translation #ST #SpeechTranslation
🚀 Boost rare-phrase translation in speech! Uses **bilingual dictionaries** to dynamically bias outputs.
✅ **+21%** recall in streaming ST
✅ **+85%** recall in multimodal LLMs
🔗: http://arxiv.org/abs/2506.09175