The scientific and technological objectives of QALL-ME pursued three main directions: multilingual open-domain QA, user-driven and context-aware QA, and learning technologies for QA. The specific research objectives of the project included state-of-the-art advancements in the complexity of the questions handled by the system (e.g., how questions); the development of a web-based architecture for cross-language QA (i.e., a question in one language, an answer in a different language); the realization of real-time QA systems for concrete applications; the integration of temporal and spatial context into both question interpretation and answer extraction; the development of a robust framework for applying minimally supervised machine learning algorithms to QA tasks; and the integration of mature automatic speech recognition technologies within the open-domain question answering framework.
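To make the cross-language, context-aware flow more concrete, the sketch below shows one possible shape for a QA request and answer, covering the question/answer language pair and the temporal and spatial context mentioned above. It is a minimal illustration only: the names (QARequest, QAAnswer, question_lang, and so on) are hypothetical and are not taken from the QALL-ME architecture or deliverables.

```python
# Illustrative sketch only: a hypothetical request/answer interface for
# cross-language, context-aware QA. Not the QALL-ME implementation.
from dataclasses import dataclass
from typing import Optional


@dataclass
class QARequest:
    question: str                     # natural-language question, possibly ASR output
    question_lang: str                # language of the question, e.g. "it"
    answer_lang: str                  # language requested for the answer, e.g. "en"
    timestamp: Optional[str] = None   # temporal context of the question (ISO 8601)
    location: Optional[str] = None    # spatial context, e.g. the user's city


@dataclass
class QAAnswer:
    text: str                         # answer string in answer_lang
    source: str                       # provenance of the extracted answer
    confidence: float                 # score from the answer-extraction component


def answer(request: QARequest) -> QAAnswer:
    """Placeholder for a cross-language QA pipeline: interpret the question in
    question_lang (using temporal and spatial context), retrieve and extract
    candidate answers, and return one expressed in answer_lang."""
    raise NotImplementedError("hypothetical interface, not the QALL-ME system")
```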