The ATLAS project built a technological bridge between cognitive science and cutting-edge information technologies. Sponsored by the Piedmont Region and funded under its programme for the development of innovative services, the project enabled deaf people to access and understand mass-media content through automatic translation from written Italian into Italian Sign Language (LIS), rendered by a virtual actor created with computer-animation techniques. Through these tools, the project achieved its aim of allowing deaf people to follow television programmes, web pages, and films on DVD via a real-time virtual interpreter that can be displayed on a range of devices, from television screens to computers, mobile phones, and PDAs.
Weekly pick from the #MeetweenScientificWatch: “Video-SALMONN: Speech-enhanced audio-visual large language models” – Redefining video comprehension with speech-aware AV-LLMs and groundbreaking QA accuracy. 🎥🎤🤖
I’m glad to announce that our work “How "Real" is Your Real-Time Simultaneous Speech-to-Text Translation System?” has been accepted in the Transactions of @aclanthology (TACL)! 🎉
The preprint is available here:
The new @iwslt shared task on instruction-following speech models is out! Test sets will be available on April 1st, and participants must submit their models by April 15th. Check out the description for more info (or get in touch with us):
📢First Call for Papers 📢
The 22nd @iwslt event will be co-located with @aclmeeting
31 July–1 August 2025, Vienna, Austria
Scientific submissions due March 15, 2025
More details here:
@marcfede @esalesk @ELRAnews @shashwatup9k @MarineCarpuat @_janius_