The PF-STAR project intended to contribute to establishing future activities in the field of multisensorial and multilingual communication (interface technologies) on a firmer basis by providing technological baselines, comparative evaluations, and assessments of the prospects of core technologies on which future research and development efforts can build. To this end, the project addressed three crucial areas: technologies for speech-to-speech translation, the detection and expression of emotional states, and core speech technologies for children. For each of them, promising technologies and approaches were selected, further developed, and aligned towards common baselines. The results were assessed and evaluated with respect to both their performance and their future prospects. To maximise the impact, the duration of the project was limited to 24 months, and the workplan was designed to deliver results in two stages: at mid-project term (month 14) and at the end of the project. This made relevant results available as soon as possible, and in particular in time for them to be used during the preparatory phase of the first call of FP6. The Lehrstuhl für Informatik 6 was involved in the comparative evaluation and further development of speech translation technologies. The statistical approach was compared to an interlingua-based approach. After the evaluation phase, the two approaches were further developed and aligned towards common baselines. PF-STAR was supported by the European Union.
Our pick of the week by @FBKZhihangXie: "Adversarial Speech-Text Pre-Training for Speech Translation" by Chenxuan Liu, Liping Chen, Weitai Zhang, Xiaoxi Li, Peiwang Tang, Mingjia Yu, Sreyan Ghosh, and Zhongyi Ye (ICASSP 2025)
#speech #speechprocessing #speechtech #translation
🚀 AdvST: Adversarial training aligns speech and text distributions without parallel data! Combines adversarial learning + hidden-state swapping to fix length mismatch & boost low-resource speech translation. https://ieeexplore.ieee.org/document/10888294
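For readers who want to see the mechanics, here is a minimal sketch (in PyTorch) of the general adversarial modality-alignment idea the post refers to: a discriminator tries to tell speech hidden states from text hidden states, and a gradient-reversal layer pushes both encoders toward a shared distribution without any parallel speech-text pairs. This is only an illustration of the technique under stated assumptions, not the AdvST paper's actual model; the module names, dimensions, and toy data are invented, and the paper's hidden-state swapping component is not shown.

```python
# Sketch of adversarial speech-text representation alignment (the general
# idea behind approaches like AdvST; NOT the paper's implementation).
# A gradient-reversal layer lets a modality discriminator pull the two
# encoders toward a shared hidden-state distribution without parallel data.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None

class ModalityDiscriminator(nn.Module):
    """Predicts whether a pooled hidden state came from speech (0) or text (1)."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, h, lamb=1.0):
        return self.net(GradReverse.apply(h, lamb)).squeeze(-1)

# Toy encoders standing in for real speech / text encoders (assumptions).
speech_enc = nn.GRU(input_size=80, hidden_size=256, batch_first=True)
text_enc = nn.Embedding(1000, 256)
disc = ModalityDiscriminator(256)
opt = torch.optim.Adam(list(speech_enc.parameters()) +
                       list(text_enc.parameters()) +
                       list(disc.parameters()), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

speech_feats = torch.randn(4, 120, 80)       # unpaired speech features (e.g. fbank frames)
text_ids = torch.randint(0, 1000, (4, 20))   # unpaired text token ids

h_speech, _ = speech_enc(speech_feats)       # (4, 120, 256)
h_text = text_enc(text_ids)                  # (4, 20, 256)

# The discriminator sees mean-pooled states; gradient reversal makes the
# encoders try to fool it, aligning the speech and text distributions.
logits = torch.cat([disc(h_speech.mean(1)), disc(h_text.mean(1))])
labels = torch.cat([torch.zeros(4), torch.ones(4)])
opt.zero_grad()
loss = bce(logits, labels)
loss.backward()
opt.step()
```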
A special evening in Rome to talk about Physical AI and Europe’s role in shaping this new frontier.
Partners from across Europe came together to present the DVPS project and connect with key people from public institutions, embassies, industry, and national & international media.
Thrilled to be part of this amazing project and team!
🚀 DVPS has launched at Translated's HQ!
70 researchers from 20 institutions across 9 countries unite to build next-gen multimodal foundation models that learn from real-world interaction.
A new European AI journey begins.
#DVPS #PhysicalAI #HorizonEurope #MultimodalAI
Our pick of the week by @FBKZhihangXie: "PHRASED: Phrase Dictionary Biasing for Speech Translation" by Peidong Wang, Jian Xue, Rui Zhao, @ChenJunkun, Aswin Shanmugam Subramanian, and Jinyu Li (2025).
#Speech #SpeechAI #Translation #ST #SpeechTranslation
🚀 Boost rare-phrase translation in speech! Uses **bilingual dictionaries** to dynamically bias outputs.
✅ **+21%** recall in streaming ST
✅ **+85%** recall in multimodal LLMs
🔗: http://arxiv.org/abs/2506.09175
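For the curious, here is a minimal sketch of what phrase-dictionary biasing can look like mechanically: target-side phrases from a bilingual dictionary are stored in a prefix trie, and at each decoding step the tokens that would start or continue a dictionary phrase receive a log-probability bonus. This is a hypothetical illustration of generic contextual biasing, not the PHRASED implementation; the token ids, bonus value, and helper names are invented for the example.

```python
# Sketch of phrase-dictionary biasing at decode time (generic contextual
# biasing; NOT the PHRASED paper's method). Dictionary target phrases live
# in a prefix trie; tokens extending a matched prefix get a score bonus.
import math

def build_trie(phrases):
    """phrases: list of token-id sequences for dictionary target phrases."""
    trie = {}
    for phrase in phrases:
        node = trie
        for tok in phrase:
            node = node.setdefault(tok, {})
    return trie

def bias_scores(log_probs, hypothesis, trie, bonus=2.0):
    """Add `bonus` to the log-probs of tokens that start or continue a phrase.

    log_probs: dict token_id -> model log-probability for the next step
    hypothesis: token ids generated so far
    """
    # Find the deepest trie node reachable from a suffix of the hypothesis.
    active = trie
    for start in range(len(hypothesis)):
        node = trie
        matched = True
        for tok in hypothesis[start:]:
            if tok in node:
                node = node[tok]
            else:
                matched = False
                break
        if matched and node is not trie:
            active = node
            break
    boosted = dict(log_probs)
    for tok in active:  # children of the active node continue (or start) a phrase
        boosted[tok] = boosted.get(tok, -math.inf) + bonus
    return boosted

# Hypothetical example: bias decoding toward the target phrase [7, 8, 9].
trie = build_trie([[7, 8, 9]])
step_scores = {3: -0.5, 7: -2.0, 8: -3.0}
print(bias_scores(step_scores, hypothesis=[], trie=trie))   # token 7 boosted
print(bias_scores(step_scores, hypothesis=[7], trie=trie))  # token 8 boosted
```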