The TC-STAR project is envisaged as a long-term effort to advance research in all core technologies for Speech-to-Speech Translation (SST). SST technology combines Automatic Speech Recognition (ASR), Spoken Language Translation (SLT), and Text-to-Speech (TTS) synthesis. The objectives of the project are ambitious: making a breakthrough in SST that significantly reduces the gap between human and machine translation performance. The project targets a selection of unconstrained conversational speech domains, speeches and broadcast news, and three languages: European English, European Spanish, and Mandarin Chinese. Accurate translation of unrestricted speech is well beyond the capability of today's state-of-the-art research systems. Therefore, advances are needed to improve the state-of-the-art technologies for speech recognition and speech translation.
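The three components named above are typically cascaded: recognized source-language text is translated, and the translation is synthesized. A minimal sketch of that cascade, with hypothetical stub functions standing in for real trained models (the function names, the toy lexicon, and the byte-string "audio" are all illustrative assumptions, not TC-STAR interfaces):

```python
# Sketch of a cascaded speech-to-speech translation (SST) pipeline:
# ASR -> SLT -> TTS. All three stages are illustrative stubs.

def asr(audio: bytes) -> str:
    """Automatic Speech Recognition: source-language audio -> transcript (stub)."""
    # Stub: pretend any input audio decodes to a fixed Spanish phrase.
    return "buenos dias"

def slt(text: str) -> str:
    """Spoken Language Translation: source-language text -> target-language text (stub)."""
    # Stub: toy dictionary lookup in place of a real translation model.
    lexicon = {"buenos dias": "good morning"}
    return lexicon.get(text, text)

def tts(text: str) -> bytes:
    """Text-to-Speech: target-language text -> synthesized audio (stub)."""
    # Stub: UTF-8 bytes stand in for a synthesized waveform.
    return text.encode("utf-8")

def speech_to_speech(audio: bytes) -> bytes:
    """Cascade the three components: ASR, then SLT, then TTS."""
    return tts(slt(asr(audio)))

print(speech_to_speech(b"\x00\x01"))
```

The cascade structure is what makes the project's three research strands separable: each stage can be improved and evaluated independently, while end-to-end quality is bounded by error propagation between stages.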
New benchmark evaluates #AI detection tools across languages, finding performance gaps in low-resource languages and challenges with distinguishing AI-translated and hybrid human-AI text.
@jasonslucas1 @adaku_uchendu @penn_state @Visa
Call for Participation: @iwslt Offline Speech Translation 2026
Break language barriers with new languages & real-world scenarios + a brand new source-language agnostic speech translation track
Evaluation: Apr 1–15
#IWSLT2026 #SpeechAI
Call for Participation: @iwslt Model Compression 2026
Make large multilingual foundation models small without losing power in EN→DE/ZH speech-to-text translation.
Evaluation: Apr 1–15
#IWSLT2026 #SpeechAI #Qwen2 #EfficientAI
Call for Participation: @iwslt Subtitling 2026
Turn speech into ready-to-watch subtitles across TV, News & YouTube!
Evaluation: Apr 1–15
#IWSLT2026 #SpeechAI #MultimodalAI