The scientific and technological objectives of QALL-ME pursued three crucial directions: multilingual open-domain QA, user-driven and context-aware QA, and learning technologies for QA. The specific research objectives of the project included state-of-the-art advancements in the complexity of the questions handled by the system (e.g. "how" questions); the development of a web-based architecture for cross-language QA (i.e. a question in one language, an answer in a different language); the realization of real-time QA systems for concrete applications; the integration of temporal and spatial context both for question interpretation and for answer extraction; the development of a robust framework for applying minimally supervised machine learning algorithms to QA tasks; and the integration of mature technologies for automatic speech recognition within the open-domain question answering framework.
🌍 @lina_conti and @luisabentivogli are heading to #LREC2026 in Palma! They'll present two papers:
📄 "Voice, Bias, and Coreference: An Interpretability Study of Gender in Speech Translation"
🤔 What Matters in Data for DPO? I asked myself this question a few days ago while trying to understand how to generate a dataset with preferences to run #DPO. This recent #NeurIPS paper answered some of my questions. The findings are simple but crucial for data creation:
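For readers unfamiliar with DPO itself (the post is about the *data*, not the objective), a minimal sketch of the standard per-pair DPO loss; the argument names and β = 0.1 are illustrative choices, not from the paper:

```python
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for a single preference pair.

    Each argument is the summed log-probability of the chosen or rejected
    response under the trainable policy or the frozen reference model.
    """
    # How much more the policy prefers "chosen" over "rejected",
    # relative to the reference model's preference.
    margin = (policy_chosen - policy_rejected) - (ref_chosen - ref_rejected)
    # -log sigmoid(beta * margin), written stably as log1p(exp(-x)).
    return math.log1p(math.exp(-beta * margin))

# When the policy matches the reference, margin = 0 and loss = log(2).
```

The loss drops as the policy widens the chosen-over-rejected margin beyond the reference model's, which is why the composition of the preference pairs matters so much.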
🎓 Come and join our group! 🎓
We offer 2 fully funded PhD positions:
🌍 Human-Centred Evaluation Frameworks for Multilingual Technologies (A6)
🤖 Multimedia Personalization with Multimodal Large Language Models (A7)
⏰ Deadline: 15 May 2026
🔗 Details: https://iecs.unitn.it/education/admission/call-for-application
Our pick of the week by @FBKZhihangXie: "Detecting Hallucination in SpeechLLMs at Inference Time Using Attention Maps" by @JWaldendorf, Bashar Awwad Shiekh Hasan and Evgenii Tsymbalov
#SpeechLLM #Hallucination
🚀 New paper: Detecting Hallucinations in SpeechLLMs at Inference Time Using Attention Maps
📄 http://arxiv.org/abs/2604.19565
🧩 Lightweight inference-time detection for SpeechLLM hallucinations via audio attention.
✨ Attention classifiers beat uncertainty baselines on ASR and S2TT.
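To illustrate the underlying intuition (not the paper's actual features or model): hallucinated output tokens tend to attend diffusely over the audio rather than locking onto specific frames, so statistics of the audio-attention map, such as per-token entropy, can feed a lightweight detector. A toy sketch with synthetic attention maps:

```python
import numpy as np

def attention_entropy(attn):
    """Mean entropy of audio attention per output token.

    attn: (text_len, audio_len) attention weights over audio frames.
    Diffuse attention (high entropy) is used here as a proxy signal
    for hallucination; this is an illustrative feature, not the
    paper's exact recipe.
    """
    p = attn / attn.sum(axis=-1, keepdims=True)   # renormalize each row
    return float(-(p * np.log(p + 1e-12)).sum(-1).mean())

rng = np.random.default_rng(0)
# Synthetic maps: grounded outputs attend sharply to a few frames
# (small Dirichlet alpha); hallucinated ones spread attention out.
grounded = rng.dirichlet(np.ones(50) * 0.2, size=20)
hallucinated = rng.dirichlet(np.ones(50) * 5.0, size=20)
assert attention_entropy(hallucinated) > attention_entropy(grounded)
```

In this toy setup a simple entropy threshold already separates the two classes; the paper instead trains classifiers on attention-derived features and compares them against uncertainty-based baselines.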