DUB Seminar will be conducted using Zoom, via an invitation distributed to the DUB mailing list. Participants who are logged in to Zoom with a UW account will be admitted directly; participants who are not logged in to a UW account will be admitted via a Zoom waiting room.
My scholarship examines (1) (de)marginalizing social processes as mediated in sociotechnical systems such as social media platforms, (2) the roles of algorithms and design in marginalization, and (3) the implications of algorithmically inferring and classifying potentially marginalizing information. In this talk, I will provide a brief overview of (2) and then discuss some work in (3).
Taking pregnancy loss as a marginalized reproductive health experience, I first describe how pregnancy tracking and algorithmic encounters can harm individuals experiencing pregnancy loss. I theorize these harms as symbolic annihilation through design and algorithmic symbolic annihilation: concepts I put forth as tools for interrogating how marginalization is embedded, enacted, and experienced in diverse sociotechnical contexts.
I then take emotional and psychological states as another information type that, if made visible in some contexts, can be marginalizing. I will explore the implications of technologies that claim to algorithmically recognize, detect, predict, and infer emotions, emotional states, and mental health status, broadly referred to as emotion recognition or emotion artificial intelligence (AI). I will share my recent mixed-methods work examining (1) data subjects' attitudes toward, and conceptions of, emotion recognition/AI, and (2) the landscape of emotion recognition/AI technologies in the workplace, providing a view into a future of work with emotion AI and its implications. I suggest that increasing the visibility of emotional states can create additional emotional labor for workers, can compromise worker privacy, and contributes to a larger pattern of blurring the boundaries between the expectations of the workplace and a worker's autonomy. I argue that emotion AI is not just technical but sociotechnical and political, and that it enacts and shifts power: despite its claimed benefits, it can contribute to marginalization. I advocate that we, and regulators, need to shift how technological inventions are evaluated.
Note: This talk includes content about pregnancy loss in the first ~10 minutes and pointers to mental health conditions in the second part.
Dr. Nazanin Andalibi is an Assistant Professor at the University of Michigan School of Information. She is also affiliated with the Center for Social Media Responsibility, the Center for Ethics, Society, and Computing, and the Digital Studies Institute. Her research interests are in social computing and HCI. Specifically, she studies the interplay between marginality and technology, examining how marginality is experienced, enacted, facilitated, or disrupted in, and as mediated through, sociotechnical systems.
Andalibi’s scholarship informs theory, design, activism, and policy for sociotechnical futures that foreground marginalized individuals’ values and needs to support qualities such as wellbeing, privacy, ethics, and justice. Her work is published in venues including CHI, CSCW, TOCHI, JMIR, and New Media & Society, and has been featured in media outlets such as CNN, Fast Company, and Huffington Post. Her publications have received Best Paper and Honorable Mention Awards at CHI and CSCW, and her work is sponsored by the National Science Foundation and the Digital Studies Institute.