Tasks that ultimately require knowledge-based multimedia techniques (content-oriented search, assessment, abstracting, etc.) are still largely carried out manually. PATExpert’s overall scientific goal was to change the paradigm for patent processing from textual (viewing patents as text blocks enriched by “canned” picture material, sequences of morpho-syntactic tokens, or collections of syntactic structures) to semantic (viewing patents as multimedia knowledge objects). To this end, PATExpert developed a multimedia content representation formalism based on Semantic Web technologies for selected technology areas and investigated the retrieval, classification, assessment, and visualization of patent material encoded in this formalism, as well as the multilingual generation of concise patent information, taking into account the information needs of all user types defined in a user typology. PATExpert’s technological goal was to develop a showcase demonstrating the viability of this approach to content representation in real applications. The composition and competence of the Consortium ensured the achievement of these goals.
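For concreteness, here is a minimal sketch of what encoding a patent as a Semantic Web knowledge object could look like, using Python’s rdflib. The namespace, class, and property names (pat:Patent, pat:hasClaim, pat:illustratedBy, etc.) are hypothetical illustrations, not PATExpert’s actual ontology:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Hypothetical namespace; PATExpert's real ontology is not reproduced here.
PAT = Namespace("http://example.org/patent-ontology#")

g = Graph()
g.bind("pat", PAT)

patent = URIRef("http://example.org/patents/EP1234567")

# Represent the patent as a typed knowledge object rather than a text block:
# entities and relations instead of flat character sequences.
g.add((patent, RDF.type, PAT.Patent))
g.add((patent, PAT.title, Literal("Optical disc drive", lang="en")))
g.add((patent, PAT.ipcClass, Literal("G11B 7/08")))

claim = URIRef("http://example.org/patents/EP1234567/claim/1")
g.add((claim, RDF.type, PAT.Claim))
g.add((patent, PAT.hasClaim, claim))

figure = URIRef("http://example.org/patents/EP1234567/fig/1")
g.add((figure, RDF.type, PAT.Figure))
g.add((patent, PAT.hasFigure, figure))
g.add((claim, PAT.illustratedBy, figure))  # link textual and pictorial content

print(g.serialize(format="turtle"))
```

Under such a representation, content-oriented retrieval and classification become graph queries (e.g., SPARQL) over typed entities and relations rather than keyword matching over raw text.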
🙌🏼 Excited to share our work on speech foundation models for data crowdsourcing at COLING 2025 🙌🏼
Our co-author Laurent Besacier (@laurent_besacie) at NAVER LABS Europe will be presenting -- don't miss it.
👉🏼 Details: https://mt.fbk.eu/1-paper-accepted-at-coling-2025
Exciting news: @iwslt is co-located with #ACL2025NLP again this year! 🎉
Interested in speech processing? Check out the new task on instruction following — any model can participate! 🚀
📅 Data release: April 1
⏳ Submission deadline: April 15
Don’t miss it! 💬 #NLP #SpeechTech
Weekly pick from the #MeetweenScientificWatch: “Video-SALMONN: Speech-enhanced audio-visual large language models” – Redefining video comprehension with speech-aware AV-LLMs and groundbreaking QA accuracy. 🎥🎤🤖
I’m glad to announce that our work “How "Real" is Your Real-Time Simultaneous Speech-to-Text Translation System?” has been accepted to the Transactions of the ACL (TACL, @aclanthology)! 🎉
The preprint is available here: