The Design Research Lab is a network of people, organisations, and non-human agents engaged at the intersection of technologies, materials, and social practices. Our aim is to design socially and ecologically sustainable tools, spaces, and knowledge that support people’s participation in a digital society – based on common principles of inclusiveness and respect for the planet. This puts the basic democratic right to take part in the digital sphere into practice. We start our research from individual lifeworlds and the needs of minoritized groups, beyond consumer majorities.
We are an interdisciplinary team of designers, researchers, tech enthusiasts, and critical thinkers from the Berlin University of the Arts, the German Research Center for Artificial Intelligence (DFKI), the Weizenbaum Institute for the Networked Society, and the Einstein Center Digital Future (ECDF).
What can the sighted learn from the blind? What can the hearing learn from the deaf? What can design learn from social disability, and how can design research help reduce it? How might these insights apply to human–machine interfaces? And what future (ICT) services and products could be derived from them?
The design research project “Speechless” explores communication structures with and among people with speech and bodily disabilities (e.g., deaf or blind people), as well as their patterns of perception, navigation, and locomotion, in order to
a) help develop supportive communication for the “disabled,” and
b) enrich common ways of human communication, with a special focus on human–computer interaction (HCI), for example in terms of multimodal interfaces.
We have been working with people both with and without “disabilities.” By consciously deactivating certain senses or bodily functions (e.g., by blindfolding), we aim to gain knowledge for designing multimodal interfaces. How to navigate without seeing? How to communicate without spoken or written language? What about secret communication in public? And so on…
The transfer of such concepts from augmentative and alternative communication, not least to HCI, can serve designers, engineers, or educators in various ways. First of all, it can help overcome communication barriers (through mediation or translation). Moreover, it opens up new perspectives on learning (language, dialogue, cultural and behavioral differences, etc.). It also widens the spectrum of human and artificial interaction with interaction patterns that are (if not intuitive) at least easy to learn and have already proven to work in certain contexts of human (e.g., deaf) communication systems.
For example, sign language can serve as an input for human and human–machine interaction, since it contains transferable rules of syntax and semantics and thus offers alternative ways to collect, express, and present thoughts. It also engages different organs and parts of the body.
The visual-spatial modality of sign language-based communication enables simultaneous communication of information. In contrast to the sequentially ordered words of spoken language, sign language offers five parameters that can be combined simultaneously: hand shape, hand position, hand movement, location, and non-manual components are all perceived by the receiver in parallel, as they are generated.
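To make this contrast concrete, the following sketch (a hypothetical illustration, not part of our project's tooling; all names and values are invented for this example) models a spoken utterance as an ordered sequence of words and a signed utterance as a sequence of signs, each of which bundles the five parameters into one parallel unit.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Sign:
    """One sign: five parameters the receiver perceives in parallel."""
    hand_shape: str                   # e.g. flat hand, fist, extended index finger
    hand_position: str                # orientation of palm and fingers
    hand_movement: str                # path or internal movement
    location: str                     # place of articulation relative to the body
    non_manual: Optional[str] = None  # facial expression, head or body posture

# A spoken utterance: information arrives one word at a time, in sequence.
spoken: List[str] = ["where", "is", "the", "station"]

# A signed utterance: each element already carries several channels at once
# (the example signs are invented, purely to show the structure).
signed: List[Sign] = [
    Sign("extended index finger", "palm down", "short arc", "neutral space",
         non_manual="raised eyebrows"),  # question marking happens in parallel
    Sign("flat hand", "palm up", "circular movement", "chest height"),
]

# In a multimodal interface, each parameter could map to a separate input
# channel (e.g. hand tracking, gaze, facial expression) read simultaneously.
for sign in signed:
    print(sign)
```

Read this way, a single sign is closer to a small record of simultaneous channels than to a word in a stream, which is precisely what makes it an interesting model for multimodal input.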
Our research project opens up a wide field of new concepts and problems to solve in learning about the properties of alternative and augmentative communication. It reveals a high potential for gaining knowledge about, and systematic experience with, alternative communication systems, which in turn opens up new perspectives for designing human–machine interfaces.
Based on such insights, our research project aims to sensitize the design profession and related disciplines to the various perspectives this opens up for HCI and human communication in general, and not least to the potential for social inclusion (e.g., of the bodily impaired) by reducing social disability.