Perhaps the defining attribute of social media is that anyone anywhere can say anything. This is often wonderful. Then again: anyone anywhere can say anything. In this talk, I will concentrate on two lines of inquiry: (1) What if people say things that aren’t true? (2) What if people say things their government doesn’t like? First, I will discuss the development and analysis of a large-scale, systematic credibility corpus called CREDBANK. With CREDBANK’s 66M tweets nested in 1,377 real-world events, we have found temporal and linguistic regularities differentiating credible and non-credible information on Twitter. Second, I will discuss a prototype linguistic algorithm we built to circumvent censorship on Chinese social media. Taking advantage of Mandarin’s natural homophones, we transformed previously censored posts to stay on Sina Weibo three times longer and create millions of false positives for censors, while remaining human-interpretable. Finally, I will close the talk with a preview of a new line of work emerging from a different question: What if people say horrible things to each other? Here, we are working on machine learning-based interventions to help moderate online spaces.
Eric Gilbert is an Assistant Professor in the School of Interactive Computing at Georgia Tech. He joined the Tech faculty in 2011 after a Ph.D. in CS at Illinois. Dr. Gilbert leads the comp.social lab, a research group that focuses on building and studying social media. His work has been supported by grants from Facebook, Samsung, Yahoo!, Google, Yik Yak, NSF, ARL, and DARPA. He is the recipient of an NSF CAREER award and the Georgia Tech Sigma Xi Young Faculty Award. Recently, Dr. Gilbert served as program co-chair of ICWSM 2016. In addition to founding several widely used, experimental social computing systems, he has received four best paper awards and six nominations from ACM’s SIGCHI. Dr. Gilbert’s work has also appeared in the New York Times and Wired, and on NPR.