RSVP for Megh Marathe & Lindsay Blackwell Talks (4/3 @ 11:30 AM)
Lindsay Blackwell

Title: When Online Harassment is Perceived as Justified

Abstract:
Most models of criminal justice seek to identify and punish offenders. However, these models break down in online environments, where offenders can hide behind anonymity and lagging legal systems. As a result, people turn to their own moral codes to sanction perceived offenses. Unfortunately, this vigilante justice is motivated by retribution, often resulting in personal attacks, public shaming, and doxing—behaviors known as online harassment. We conducted two online experiments (n=160; n=432) to test the relationship between retribution and the perception of online harassment as appropriate, justified, and deserved. Study 1 tested attitudes about online harassment directed toward a woman who had stolen from an elderly couple. Study 2 tested the effects of social conformity and bystander intervention. We find that people believe online harassment is more deserved and more justified—but not more appropriate—when the target has committed some offense. Promisingly, we find that bystander intervention can reduce this perception. We discuss alternative approaches and designs for responding to harassment online.

Bio:
Lindsay Blackwell is a PhD candidate in the University of Michigan School of Information's Social Media Research Lab and a UX Researcher with PRO Unlimited at Facebook. Lindsay uses mixed social science methods (e.g., semi-structured interviews, surveys, and experiments) to investigate abusive behaviors in online communities, including online harassment and hate speech. Her research has been published in CHI, CSCW, ICWSM, and Social Media + Society. Prior to starting graduate school, Lindsay worked as a social media director, where she developed marketing strategies and created campaigns for national clients, including I Love New York.

Megh Marathe

Title: Semi-Automated Coding for Qualitative Research: A User-Centered Inquiry and Initial Prototypes (Best Paper Award - top 1%)

Abstract:
Qualitative researchers perform an important and painstaking data annotation process known as coding. However, much of the process can be tedious and repetitive, becoming prohibitive for large datasets. Could coding be partially automated, and should it be? To answer this question, we interviewed researchers and observed them code interview transcripts. We found that across disciplines, researchers follow several coding practices well-suited to automation. Further, researchers desire automation after having developed a codebook and coded a subset of data, particularly in extending their coding to unseen data. Researchers also require any assistive tool to be transparent about its recommendations. Based on our findings, we built prototypes to partially automate coding using simple natural language processing techniques. Our top-performing system generates coding that matches human coders on inter-rater reliability measures. We discuss implications for interface and algorithm design, meta-issues around automating qualitative research, and suggestions for future work.
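
As a rough illustration of the kind of approach the abstract describes (not the authors' actual prototypes), the sketch below suggests codes for unseen excerpts with a simple TF-IDF nearest-neighbor classifier and compares the suggestions against a human coder using Cohen's kappa, one common inter-rater reliability measure. All excerpts, codes, and modeling choices here are hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import cohen_kappa_score

# Hypothetical excerpts a researcher has already coded (one code per
# excerpt for simplicity; real qualitative coding is often multi-label).
coded_excerpts = [
    "I check my phone first thing every morning",
    "Notifications keep pulling me back into the app",
    "I deleted social media to focus on work",
]
codes = ["habit", "interruption", "self-regulation"]

# Represent excerpts as TF-IDF vectors and fit a 1-nearest-neighbor
# "code suggester" on the already-coded subset.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(coded_excerpts)
suggester = KNeighborsClassifier(n_neighbors=1).fit(X, codes)

# Suggest codes for unseen excerpts, then compare the suggestions with a
# human coder using Cohen's kappa as the inter-rater reliability measure.
new_excerpts = [
    "Checking my phone is the first thing I do each morning",
    "Notifications keep breaking my focus",
]
suggested = list(suggester.predict(vectorizer.transform(new_excerpts)))
human_codes = ["habit", "interruption"]
print(suggested)
print(cohen_kappa_score(suggested, human_codes))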

Bio:
Megh is a queer, feminist, nonbinary, and disabled PhD student at the University of Michigan School of Information. Drawing upon scholarship in disability studies, science and technology studies, and the sociology of illness experience, Megh seeks to understand the temporal experience of seizures for people with epilepsy, particularly in relation to the imaginaries of time deployed in the design of diagnostic and health-tracking devices. His work on natural language processing and bureaucratic grievance redress has appeared in CHI, CICLing, ICTD, and DEV, receiving one best paper award. Megh holds an MS in Computer Science from the University of Toronto.

Name: *
Email: *
School/Department/Unit: *
I will attend this MISC meeting: *