The PF-STAR project aimed to contribute to establishing future activities in the field of multisensorial and multilingual communication (interface technologies) on a firmer basis by providing technological baselines, comparative evaluations, and assessments of the prospects of core technologies from which future research and development efforts can build. To this end, the project addressed three crucial areas: technologies for speech-to-speech translation, the detection and expression of emotional states, and core speech technologies for children. For each of these, promising technologies and approaches were selected, further developed, and aligned towards common baselines. The results were assessed and evaluated with respect to both their performance and their future prospects. To maximise the impact, the duration of the project was limited to 24 months, and the workplan was designed to deliver results in two stages: at mid-project (month 14) and at the end of the project. This made relevant results available as early as possible, and in particular in time for them to be used during the preparatory phase of the first call of FP6. The Lehrstuhl für Informatik 6 was involved in the comparative evaluation and further development of speech translation technologies: the statistical approach was compared to an interlingua-based approach, and after the evaluation phase the two approaches were further developed and aligned towards common baselines. PF-STAR was supported by the European Union.
Excited to share our work on Speech Foundation Model for data crowdsourcing at COLING 2025!
Our co-author Laurent Besacier (@laurent_besacie) at NAVER LABS Europe will be presenting -- don't miss it.
Details: https://mt.fbk.eu/1-paper-accepted-at-coling-2025
Exciting news: @iwslt is co-located with #ACL2025NLP again this year!
Interested in speech processing? Check out the new task on instruction following: any model can participate!
Data release: April 1
Submission deadline: April 15
Don't miss it! #NLP #SpeechTech
Weekly pick from the #MeetweenScientificWatch: "Video-SALMONN: Speech-enhanced audio-visual large language models", redefining video comprehension with speech-aware AV-LLMs and groundbreaking QA accuracy.
I'm glad to announce that our work "How 'Real' is Your Real-Time Simultaneous Speech-to-Text Translation System?" has been accepted at the Transactions of the ACL (TACL, @aclanthology)!
The preprint is available here: