
Designing Toxic Content Classification for a Diversity of Perspectives

Available Media

Publication (PDF)

Conference SOUPS
Authors Deepak Kumar, Patrick Gage Kelley, Sunny Consolvo, Joshua Mason, Elie Bursztein, Zakir Durumeric, Kurt Thomas, Michael Bailey
Citation

Bibtex Citation

@inproceedings{kumar2021designing,
  title        = {Designing Toxic Content Classification for a Diversity of Perspectives},
  author       = {Deepak Kumar and Patrick Gage Kelley and Sunny Consolvo and Joshua Mason and Elie Bursztein and Zakir Durumeric and Kurt Thomas and Michael Bailey},
  booktitle    = {SOUPS},
  year         = {2021},
  organization = {USENIX}
}

In this work, we demonstrate how existing classifiers for identifying toxic comments online fail to generalize to the diverse concerns of Internet users. We survey 17,280 participants to understand how user expectations for what constitutes toxic content differ across demographics, beliefs, and personal experiences. We find that groups historically at risk of harassment—such as people who identify as LGBTQ+ or young adults—are more likely to flag a random comment drawn from Reddit, Twitter, or 4chan as toxic, as are people who have personally experienced harassment in the past. Based on our findings, we show how current one-size-fits-all toxicity classification algorithms, like the Perspective API from Jigsaw, can improve in accuracy by 86% on average through personalized model tuning. Ultimately, we highlight current pitfalls and new design directions that can improve the equity and efficacy of toxic content classifiers for all users.
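The personalization approach itself is described in the full paper; as a rough illustration of the "one-size-fits-all" baseline the abstract critiques, the sketch below queries the Perspective API for a toxicity score and then applies a per-user threshold instead of a single global cutoff. The API key placeholder, the 0.7 default threshold, and the helper function names are assumptions for illustration only, not values or code from the paper.

```python
import requests

# Placeholder key: the Perspective API requires a key provisioned through
# Google Cloud (see https://perspectiveapi.com).
API_KEY = "YOUR_API_KEY"
ENDPOINT = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
    f"?key={API_KEY}"
)


def toxicity_score(text: str) -> float:
    """Return the TOXICITY summary score (0-1) Perspective assigns to `text`."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(ENDPOINT, json=body, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def is_toxic_for_user(text: str, user_threshold: float = 0.7) -> bool:
    """A one-size-fits-all deployment uses one global threshold for everyone;
    a personalized variant lets each user (or group) supply their own cutoff."""
    return toxicity_score(text) >= user_threshold
```

In this sketch, personalization is reduced to a per-user decision threshold over a shared model's scores; the paper explores richer per-group and per-user tuning, for which the PDF linked above is the authoritative reference.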
