The Design Research Lab is a network of people, organisations, and non-human agents engaged at the intersection of technologies, materials, and social practices. Our aim is to design socially and ecologically sustainable tools, spaces, and knowledge that support people’s participation in a digital society – based on common principles of inclusiveness and respect for the planet. This puts the basic democratic right to take part in the digital sphere into practice. We start our research from individual lifeworlds and the needs of minoritized groups, beyond consumer majorities.
We are an interdisciplinary team of designers, researchers, tech enthusiasts, and critical thinkers from the Berlin University of the Arts, the German Research Center for Artificial Intelligence (DFKI), the Weizenbaum Institute for the Networked Society, and the Einstein Center Digital Future (ECDF).
When did it become normal to see pixelated, blurred, or 'emojied' portions of bodies? What are the boundaries of personal expression, and how is anatomical representation regulated online? By whom? Through what means? With what effects?
These are questions that this one-week seminar will explore through a hands-on approach. Our starting point is what I will call fig leafing: the active subtraction of (allegedly) obscene representations of body parts and functions through concealment, employing a vast collection of means and tools (e.g. filters, pixelation, blurring, emojis, covering up, monikers, and so forth). Together, we will first analyse the use of this practice in Western modernity, a technique derived from the covering of artistic nudes with fig leaves, recurring at specific times in the Christian tradition and rooted in the Biblical myth of Adam and Eve and their fall from the Garden of Eden.
We will then dive into contemporary attempts at defining bodily obscenity by peeking into the Community Standards and the technological practices employed by Big Tech to monitor and police such forms of individual speech (computer vision and classification models, human moderation, and so forth). We will examine the power networks behind the design and enforcement of the technologies and norms gatekeeping social networks, and their effects on marginalised (feminine) bodies and identities. Finally, we will create versions of fig leaves (digital, analogue, or combinations of both) aimed at rethinking and countering the normative ways we read bodies and nudity.
Fig leafing filters will be made throughout the seminar using techniques and methods of the students' choice. They can be made in analogue ways, through AR (augmented reality) software such as Facebook's Spark AR Studio, or by mixing both. All the created fig leaves will be processed and classified through existing CV (computer vision) software and documented (video/photos and automated labelling). No previous knowledge is required; after the initial overview and introduction to the subject, students will have time and space to freely explore strategies to counter and question normative modes of reading bodies and the subjectifying identities they construct. Note: the Kompaktkurs is open to students from any field (graphic and product design, media, art, architecture, etc.). Fashion design students, for instance, are welcome to contribute their material and conceptual interpretation of the practice through the creation of wearable designs.
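The classification-and-documentation step mentioned above can be sketched in a few lines. This is a hypothetical illustration, not part of the course materials: `classify_image` is a stand-in for whatever computer-vision software is actually used (a cloud moderation API or an open-source model, for example), and the sketch simply logs each fig leaf's automated labels alongside its file name and concealment technique as JSON.

```python
import json
from datetime import datetime, timezone

def classify_image(path):
    """Stand-in for a real computer-vision classifier (hypothetical).
    A real implementation would send the image at `path` to a model
    or moderation API and return its label/confidence pairs."""
    return [{"label": "safe", "confidence": 0.97}]

def document_fig_leaf(path, technique):
    """Build one documentation record for a created fig leaf: the file,
    the concealment technique used, the automated labels, and a
    UTC timestamp for when the classification was run."""
    return {
        "file": path,
        "technique": technique,          # e.g. "pixelation", "AR filter"
        "labels": classify_image(path),  # the automated-labelling step
        "classified_at": datetime.now(timezone.utc).isoformat(),
    }

record = document_fig_leaf("leaf_01.png", "pixelation")
print(json.dumps(record, indent=2))
```

Collecting such records for every fig leaf produced during the week would give the seminar a small, machine-readable archive of how existing classifiers read each intervention.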
Dates:
Introduction on 26 September 2022
Compact Phase on 4-7 October 2022, 10:00-17:00
Lecturers:
Prof. Dr. Gesche Joost (Design Research Lab)
Corinna Canali (Design Research Lab)
Emilia Knabe (Design Research Lab)
Language:
English
Format: This course will take place in person in the Berlin Open Lab at Einsteinufer 43.
UdK Berlin + TU Berlin / 3 SWS / 3 CP / UdK master's students can gain credits for this class within the module "Designmethoden" or through the Studium Generale.
Required knowledge: No specific previous knowledge is required. It is advised to bring a laptop and a smartphone (especially for AR filters using Spark AR Studio).
The maximum number of participants is 15.