JUMAS addresses the need to build an infrastructure able to optimise the information workflow in order to facilitate later analysis. New models and techniques will be developed for representing and automatically extracting the embedded semantics derived from multiple data sources. The most important goal of the JUMAS system is to collect, enrich and share multimedia documents annotated with embedded semantics while minimising manual transcription activity. JUMAS is tailored to managing situations in which multiple cameras and audio sources are used to record assemblies, where people's debates and event sequences need to be semantically reconstructed for future consultation. The JUMAS prototype will be tested in interworking with legacy systems, but the system can be viewed as able to support business processes and problem-solving in a variety of domains.
Our pick of the week by @mgaido91: "AlignFormer: Modality Matching Can Achieve Better Zero-shot Instruction-Following Speech-LLM" by @RuchaoFan, Bo Ren, Yuxuan Hu, Rui Zhao, Shujie Liu, Jinyu Li (2024).
#NLProc #Speech #instructionfollowing #zeroshot #speechtech #speechllm
AI is transforming cultural heritage, but what have we learned?
Come and join the #AI4Culture movement at our Final Conference on March 10 in Hilversum to explore AI’s current & future impact on cultural heritage.
Details & Registration: https://pretix.eu/EFHA/AI4Culture/
@EU_HaDEA
BOUQuET💐: an OPEN INITIATIVE aimed at building an evaluation dataset for massively multilingual text-to-text MT.
Let’s make MT available for any written language!
We are inviting everyone to contribute: ➡️
More details at: https://arxiv.org/abs/2502.04314
I am happy to announce that I will speak about our recent work "How "Real" is Your Real-Time Simultaneous Speech-to-Text Translation System?" at SlatorCon in March 🎊
📃 Preprint available here: