JUMAS addresses the need to build an infrastructure able to optimise the information workflow in order to facilitate later analysis. New models and techniques will be developed for representing and automatically extracting the embedded semantics derived from multiple data sources. The most important goal of the JUMAS system is to collect, enrich and share multimedia documents annotated with embedded semantics, minimising manual transcription activity. JUMAS is tailored to managing situations in which multiple cameras and audio sources are used to record assemblies in which people debate and in which event sequences need to be semantically reconstructed for future consultation. The JUMAS prototype will be tested interworking with legacy systems, but the system can be viewed as able to support business processes and problem-solving in a variety of domains.
New benchmark evaluates #AI detection tools across languages, finding performance gaps in low-resource languages and challenges with distinguishing AI-translated and hybrid human–AI text.
@jasonslucas1 @adaku_uchendu @penn_state @Visa
Call for Participation: @iwslt Offline Speech Translation 2026
Break language barriers with new languages & real-world scenarios + a brand new source-language agnostic speech translation track
Evaluation: Apr 1–15
#IWSLT2026 #SpeechAI
Call for Participation: @iwslt Model Compression 2026
Make large multilingual foundation models small without losing power in EN→DE/ZH speech-to-text translation.
Evaluation: Apr 1–15
#IWSLT2026 #SpeechAI #Qwen2 #EfficientAI
Call for Participation: @iwslt Subtitling 2026
Turn speech into ready-to-watch subtitles across TV, News & YouTube!
Evaluation: Apr 1–15
#IWSLT2026 #SpeechAI #MultimodalAI