The scientific and technological objectives of QALL-ME pursued three crucial directions: multilingual open-domain QA, user-driven and context-aware QA, and learning technologies for QA. The specific research objectives of the project included state-of-the-art advancements in the complexity of the questions handled by the system (e.g., "how" questions); the development of a web-based architecture for cross-language QA (i.e., question in one language, answer in a different language); the realization of real-time QA systems for concrete applications; the integration of temporal and spatial context into both question interpretation and answer extraction; the development of a robust framework for applying minimally supervised machine learning algorithms to QA tasks; and the integration of mature automatic speech recognition technologies within the open-domain question answering framework.
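To make the cross-language QA setting concrete, here is a minimal sketch of a question-in-one-language, answer-in-another pipeline. All names below (translate, retrieve, answer_question, the toy snippet list) are hypothetical placeholders for illustration only and do not represent the actual QALL-ME architecture or its web services.

```python
# Illustrative sketch of a cross-language QA pipeline (question asked in
# one language, answer returned in another). Every component here is a
# hypothetical toy stand-in, NOT the QALL-ME system itself.

from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    language: str
    source: str


# Toy "knowledge base": English snippets standing in for the answer-side corpus.
EN_SNIPPETS = [
    ("cinema", "The Astra cinema screens 'Oppenheimer' tonight at 21:00."),
    ("hotel", "Hotel Europa is located next to the central station."),
]


def translate(text: str, src: str, tgt: str) -> str:
    """Hypothetical MT step; a real system would call an MT service here."""
    # Placeholder: return the text unchanged, annotated for demonstration.
    return f"[{src}->{tgt}] {text}"


def retrieve(question_en: str) -> str:
    """Toy keyword retrieval over the English snippets."""
    for keyword, snippet in EN_SNIPPETS:
        if keyword in question_en.lower():
            return snippet
    return ""


def answer_question(question: str, q_lang: str, a_lang: str) -> Answer:
    # 1) Map the question into the language of the answer-side resources.
    question_en = translate(question, q_lang, "en")
    # 2) Retrieve a candidate passage and use it as the (toy) answer.
    passage = retrieve(question_en) or "No answer found."
    # 3) Return the answer translated into the requested answer language.
    return Answer(text=translate(passage, "en", a_lang), language=a_lang, source="toy-kb")


if __name__ == "__main__":
    result = answer_question("Quale film danno al cinema stasera?", "it", "es")
    print(result.text)
```

In a deployed system, the translation and retrieval steps would be backed by real MT and answer-extraction services, and the temporal and spatial context mentioned above would constrain both question interpretation and answer selection.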
The 22nd edition of IWSLT will be co-located with @aclmeeting in Vienna, Austria on 31 July-1 Aug 2025!
Stay tuned for the CFP and more info about our 2025 shared tasks! Join our Google group for periodic updates.
In "Twists, Humps, and Pebbles: Multilingual Speech Recognition Models Exhibit Gender Performance Gaps," @BeatriceSavoldi, @DennisFucci, @dirk_hovy, and I show how speech recognition serves different gender groups differently and what to do about it.
Meet @sarapapi, @BeatriceSavoldi, and @negri_teo at EMNLP 2024 in Miami next week! 🌴
They will present two main conference papers about human-centered #MT and #genderbias, and #opensource #speech resources!
📍 Details here: https://mt.fbk.eu/our-postdocs-sara-papi-and-beatrice-savoldi-and-our-researcher-matteo-negri-at-emnlp-2024/
#NLProc #EMNLP2024
Weekly pick from the #MeetweenScientificWatch: "Vcoder: Versatile Vision Encoders for Multimodal LLMs" - A novel encoder boosts object perception in MLLMs, outperforming GPT-4V in visual reasoning! 🌆👀