Do artificially intelligent systems reflect a certain gender, race or class? What and whose politics are currently being consolidated in algorithmic culture?

As AI finds its way into the mundanity of everyday life, constantly scanning and categorising us to provide the highest level of comfort and efficiency, the systems that surround us are increasingly mediating our bodies, actions and behaviours. Search engine queries, autocomplete functions, auto-image tagging, personal assistants, home pods, smart wearables – all nicely displayed and packaged to ease our daily routines. Currently, however, we can witness a great deal of concealed racial and gender bias in the design of these systems and objects – be it through ‘personal assistants’ with female names and voices or soap dispensers that only work on white hands.

In this one-week seminar we will investigate forms of bias in the design of AI. We will locate real-world examples on the topic, engage personally with the systems and, inspired by approaches from queer and feminist theory and technology, prototype forms of hacking back.

Please note – within the framework of this course, there will be an event on Thursday, 10 October, from 18:00–20:00.

07.10.–11.10.2019 | 10:00–16:00 | Berlin Open Lab | Einsteinufer 43

Language: German / English
Participants: Max. 15
Please register by 31 July at: f.conradi@udk-berlin.de and m.christensen@udk-berlin.de

Prof. Dr. Gesche Joost
Dr. des. Florian Conradi
Dr. des. Michelle Christensen
Marie Dietze