The ATLAS project built a technological bridge between the cognitive sciences and advanced information technologies. Sponsored by the Piedmont Region and funded by its programme for the development of innovative services, the project enabled deaf people to watch and understand mass-media content through automatic translation from written Italian into Italian Sign Language (LIS), rendered by a virtual actor created with computer-animation techniques. With these tools, the project achieved its goal of letting deaf people follow television programmes, web pages and films on DVD through a virtual interpreter, rendered in real time, which can be displayed on many kinds of screen: from televisions to computers, from mobile phones to PDAs.
Our pick of the week by @mgaido91: "AlignFormer: Modality Matching Can Achieve Better Zero-shot Instruction-Following Speech-LLM" by @RuchaoFan, Bo Ren, Yuxuan Hu, Rui Zhao, Shujie Liu, Jinyu Li (2024).
#NLProc #Speech #instructionfollowing #zeroshot #speechtech #speechllm
AI is transforming cultural heritage, but what have we learned?
Come and join the #AI4Culture movement at our Final Conference on March 10 in Hilversum to explore AI’s current & future impact on cultural heritage.
Details & Registration: https://pretix.eu/EFHA/AI4Culture/
@EU_HaDEA
BOUQuET💐: an OPEN INITIATIVE aimed at building an evaluation dataset for massively multilingual text-to-text MT.
Let’s make MT available for any written language!
We are inviting everyone to contribute: ➡️
More details at: https://arxiv.org/abs/2502.04314
I am happy to announce that I will speak about our recent work "How 'Real' is Your Real-Time Simultaneous Speech-to-Text Translation System?" at SlatorCon in March 🎊
📃 Preprint available here: