The Design Research Lab is a network of people, organisations, and non-human agents engaged at the intersection of technologies, materials, and social practices. Our aim is to design socially and ecologically sustainable tools, spaces, and knowledge that support people’s participation in a digital society – based on common principles of inclusiveness and respect for the planet. This puts the basic democratic right to take part in the digital sphere into practice. We start our research from individual lifeworlds and the needs of minoritized groups, beyond consumer majorities.
We are an interdisciplinary team of designers, researchers, tech enthusiasts and critical thinkers from Berlin University of the Arts, German Research Center for Artificial Intelligence (DFKI), Weizenbaum Institute for the Networked Society, as well as Einstein Center Digital Future (ECDF).
The research project “SignReality – Extended Reality for Sign Language Translation” aims to develop an augmented reality (AR) model and application that visualizes an animated interpreter for German Sign Language (DGS). The project is a cooperation between the DFKI-DRX department and the Affective Computing Group of DFKI-COS, and is part of the activities of the broader DFKI Sign Language team, which spans four departments and has run two EU-funded and two German-funded research projects.
The app developed in SignReality will allow deaf and hard-of-hearing users to have a personal interpreter in augmented or virtual space, able to translate speech and text. Users will be able to position and resize the interpreter according to the needs of the translation; for example, placing the interpreter next to the speaking person lets users enrich the translated content with a direct view of the speaker. The application will be used as a research prototype to study novel methods of interaction and content delivery between deaf and hard-of-hearing users and their surrounding environments, aiming to reduce communication barriers with hearing people.
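As a minimal illustration of how such side-by-side placement might be computed in a prototype, the sketch below derives an anchor position for the interpreter avatar, offset sideways from the speaker relative to the user's line of sight so that both stay in view. All names, the vector type, and the fixed offset are hypothetical and do not come from the SignReality codebase; a real AR application would delegate anchoring and tracking to the engine's own APIs.

```python
import math
from dataclasses import dataclass


@dataclass
class Vec3:
    """Hypothetical minimal 3D vector (x: right, y: up, z: forward)."""
    x: float
    y: float
    z: float


def anchor_interpreter(speaker: Vec3, user: Vec3, offset: float = 0.6) -> Vec3:
    """Place the interpreter avatar next to the speaker.

    The avatar is shifted sideways in the horizontal plane, perpendicular
    to the user's line of sight toward the speaker, so the user can watch
    the signing and the speaker's face at the same time.
    """
    # Direction from user to speaker, projected onto the horizontal plane.
    dx, dz = speaker.x - user.x, speaker.z - user.z
    norm = math.hypot(dx, dz) or 1.0  # avoid division by zero if co-located
    # Unit vector perpendicular to the line of sight (to the user's right).
    rx, rz = dz / norm, -dx / norm
    # Keep the avatar at the speaker's height.
    return Vec3(speaker.x + rx * offset, speaker.y, speaker.z + rz * offset)


if __name__ == "__main__":
    # User at the origin looking at a speaker 2 m straight ahead:
    # the avatar lands 0.6 m to the right of the speaker.
    print(anchor_interpreter(Vec3(0.0, 0.0, 2.0), Vec3(0.0, 0.0, 0.0)))
```

The sideways offset is recomputed from the current user pose, so the avatar would stay beside the speaker even as the user moves around the scene.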
The project has a duration of eight months and is funded through a Financial Support to Third Parties (FSTP) call of the EU project UTTER (Unified Transcription and Translation for Extended Reality; EU Horizon Europe, Grant Agreement No 101070631), in cooperation with the Universities of Amsterdam and Edinburgh. UTTER aims to take online and hybrid interaction to the next level by employing Large Language Models, focusing on use cases such as videoconferencing (speech dialogue) and multilingual customer support (chat).