Social media sites like Facebook, Twitter, and Reddit make millions of decisions every day about which posts can stay online and which posts are removed. How these moderation decisions are made has important consequences for many of the key problems that the Internet faces today: fake news, online harassment, online radicalization, and censorship. My research builds a foundation for designing transparent and effective content moderation systems. I complement data science methods with qualitative methods to (1) incorporate fairness and transparency in content removals and (2) examine the effectiveness of a range of moderation strategies in combating online hate groups. My findings show that (1) offering explanations for post removals improves both user attitudes and user behaviors and (2) implementing design frictions that impede access to controversial communities makes it more difficult for hate groups to recruit new members. In this talk, I will discuss how I conducted this research and articulate the lessons learned from this work for the benefit of site managers, moderators, and designers of moderation systems.
Shagun Jhaver is a Postdoctoral Scholar in the Allen School of Computer Science & Engineering at the University of Washington. He is joining the School of Communication and Information at Rutgers University as an Assistant Professor in Fall 2021. Shagun’s research examines the governance mechanisms of internet platforms to understand how their design, technical affordances, and policies affect public discourse. He has worked with social media sites like Reddit and Twitch, and his research has informed their efforts to address societal challenges such as online harassment and the rise of hate groups. His work has received two Best Paper Awards (at CSCW and ICWSM) and one Best Paper Honorable Mention Award (at CSCW), and it has been featured in the Editor’s Spotlight in TOCHI. His research has also received attention in the popular press, including The Washington Post, Forbes, New Scientist, and MIT Technology Review.