The Design Research Lab is a network of people, organisations, and non-human agents working at the intersection of technologies, materials, and social practices. Our aim is to design socially and ecologically sustainable tools, spaces, and knowledge that support people’s participation in a digital society – based on common principles of inclusiveness and respect for the planet. In doing so, we put the basic democratic right to take part in the digital sphere into practice. Our research starts from individual lifeworlds and the needs of minoritized groups, beyond consumer majorities.
We are an interdisciplinary team of designers, researchers, tech enthusiasts, and critical thinkers from the Berlin University of the Arts, the German Research Center for Artificial Intelligence (DFKI), the Weizenbaum Institute for the Networked Society, and the Einstein Center Digital Future (ECDF).

Prof. Dr. Michelle Christensen, Prof. Dr. Florian Conradi & Ines Weigand
Dates: 06.-10.10.2025, 10:00-16:00
Room: Design Research Lab, Universität der Künste Berlin, Einsteinufer 43, 10587 Berlin
Registration: Please register by email to
As artificial intelligence finds its way into the mundanity of everyday life, constantly scanning and categorising us to provide the highest level of contented comfort and effortless efficiency, the systems that surround us increasingly mediate our knowledge, actions, and behaviours. Search engine queries, autocomplete, autocorrect, personalised aides, and chatbots mediate our every uncertainty – all neatly presented and packaged to relieve our regular routines. But who gets to shape the narratives of technologies like AI, and who is produced as the technical outcast of its limited learnings? Who and what is misread or overheard, and suffers the consequences of its immense analysis? Currently, we can witness the hegemony of binary heteronormative gender conceptions and Western values expanding to all territories of the globe – colonising the internet with perceptions, practices, and probabilities that include some and exclude others. These deeds of design culminate in codes of conduct that ultimately manufacture im/possibilities of perceiving both histories and presents.
In this one-week block seminar we will take an applied and interdisciplinary approach to exploring forms of bias in the design of AI. We will locate real-world examples on the topic, engage personally with generative systems, and – inspired by approaches from queer and feminist theory and technology – prototype forms of ‘hacking back’.
The outcomes of the block seminar will be exhibited in the context of the project Intersectional Bias in AI: Composing Cyborgs – Performing Critique, a collaboration between the UdK Berlin and the University of Oxford.
Literature:
– Buolamwini, J. & Gebru, T. (2018): Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, in: Proceedings of Machine Learning Research 81, pp. 1–15
– Keyes, O. (2018): The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition, in: Proceedings of the ACM on Human-Computer Interaction, Vol. 2
– Haraway, D. (1985): A Manifesto for Cyborgs. Science, Technology, and Socialist Feminism in the 1980s, in: Socialist Review 1985, 5 (2), pp. 65–107
– Klein, L. & D’Ignazio, C. (2020): Data Feminism. Cambridge, MA: The MIT Press
– Russell, L. (2020): Glitch Feminism: A Manifesto. London and New York: Verso Books